
Re: [narten@us.ibm.com: PI addressing in IPv6 advances in ARIN]

2006-04-21 06:48:37
Tony Hain wrote:
Brian E Carpenter wrote:

... Scott Leibrand wrote:
> I agree, especially in the near term.  Aggregation is not required right
> now, but having the *ability* to aggregate later on is a prudent risk
> reduction strategy if today's cost to do so is minimal (as I think it is).

I think that's an understatement until we find an alternative to
BGP aggregation. That's why my challenge to Iljitsch was to simulate
10B nodes and 100M sites - if we can't converge a reasonably sized
table for that network, we *know* we have a big problem in our
future. Not a risk - a certainty.
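
To make the scale concrete, here is a back-of-envelope sketch in Python;
every count below other than the 100M sites is an illustrative assumption,
not a measurement:

    SITES = 100_000_000   # the 100M sites in the challenge above
    ISPS = 50_000         # assumed number of transit-providing ISPs
    REGIONS = 4_000       # assumed regional aggregation points
    ISPS_PER_REGION = 10  # assumed average ISPs announcing each region

    # Worst case: every site injects its own PI prefix into the DFZ.
    flat_pi = SITES

    # PA-only world: roughly one aggregate per ISP, times an assumed
    # 4x deaggregation factor for traffic engineering and multihoming.
    pa_only = ISPS * 4

    # Regional aggregates: one route per region per ISP present there.
    regional = REGIONS * ISPS_PER_REGION

    for name, size in (("flat PI", flat_pi),
                       ("PA + deaggregation", pa_only),
                       ("regional aggregates", regional)):
        print(f"{name:>20}: ~{size:,} DFZ routes")

The point of the sketch is only the orders of magnitude: a flat PI table is
three to four orders larger than anything aggregation-based.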



The problem with your challenge is the lack of a defined topology. The
reality is that there is no consistent set of topology assumptions to work
from, so the ability to construct a routing model is limited at best.

Actually my challenge asked for an assumed geographical distribution
and an assumed set of ISPs and interconnects, I believe. Obviously
one needs to know how robust the result is under reasonable variations
of the assumed topology.


The other point is that the protocol is irrelevant. Whatever we do, the
architectural problem is finding an aggregation strategy that fits a routing
system in hardware we know how to build, at a price point that is
economically deployable.

Yes, but BGP4 is a surrogate for that.

As far as I am concerned BGP is not the limitation. The problem is the
ego-driven myth of a single DFZ where all of the gory details have to be
exposed globally. If we abolish that myth and look at the problem, we are
left with an answer where BGP passing regional aggregates is sufficient.
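
As a minimal sketch of what regional aggregates would look like on the wire
(the prefix below is the IPv6 documentation block, and the block sizes are
assumptions for illustration):

    import ipaddress
    import itertools

    # Assume a region is delegated one covering block (illustrative).
    region = ipaddress.ip_network("2001:db8::/32")

    # Sites inside the region get explicit /48s out of that block.
    sites = list(itertools.islice(region.subnets(new_prefix=48), 4))

    # Routers inside the region carry the explicit /48s; the rest of
    # the world only needs the single covering /32.
    print("inside region :", [str(s) for s in sites], "...")
    print("outside region:", region)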

I'm sorry, I don't think I've ever seen a convincing argument for how such
aggregates could come to pass in the real world, where inter-regional
bandwidth is partitioned at link level or dark fibre level. There just
isn't any forcing function by which mathematically close prefixes
will become topologically close, because there's nothing that
forces multiple providers to share long-distance routes.
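
To put that in concrete terms, assume two mathematically adjacent /48s homed
to different providers (prefixes invented for illustration):

    import ipaddress

    # Two mathematically adjacent site prefixes.
    a = ipaddress.ip_network("2001:db8:10::/48")  # customer of ISP-A
    b = ipaddress.ip_network("2001:db8:11::/48")  # customer of ISP-B

    # The covering prefix exists on paper...
    cover = next(ipaddress.collapse_addresses([a, b]))
    print(cover)  # 2001:db8:10::/47

    # ...but neither ISP can announce the /47 alone: each would attract
    # traffic for the other's customer without necessarily having a
    # route to deliver it. Aggregation needs shared topology, not just
    # adjacent numbers.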

Yes, there
will be exception routes that individual ISPs carry, but that is their
choice, not a protocol requirement. Complaining that regional aggregates are
sub-optimal is crying wolf when they know they will eventually lose to the
money-holding customer demanding non-PA space. The outcries about doom and
gloom with PI are really about random assignments, which would be even less
optimal.

The fundamental question needs to be whether there is an approach to address
allocation that can be made to scale under -any- known business model, not
just the one in current practice. It is not the IETF's job to define business
models, but rather to define the technology approaches that might be used and
see if the market picks up on them. Unfortunately, over the last few years
the process has evolved to exclude discussions that don't fit the
current business models, despite the continuing arguments about how those
models are financial failures and need to change. The point that Scott was
making is that there are proposals for non-random assignments which could be
carried as explicit prefixes now and aggregated later.
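
A minimal sketch of that "explicits now, aggregate later" idea, with
invented prefixes:

    import ipaddress

    # Non-random plan: consecutive /48s handed out from one block can
    # later be replaced by a single covering route.
    ordered = [ipaddress.ip_network(f"2001:db8:{i:x}::/48") for i in range(8)]
    print(list(ipaddress.collapse_addresses(ordered)))
    # -> [IPv6Network('2001:db8::/45')]

    # Random assignments from the same block never collapse.
    scattered = [ipaddress.ip_network(p) for p in
                 ("2001:db8:12::/48", "2001:db8:9f::/48", "2001:db8:c3::/48")]
    print(list(ipaddress.collapse_addresses(scattered)))
    # -> three separate /48s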

I understand his point very well and I'm even in favour of it, because
statistically it can only help aggregation and certainly not damage it.
But I think that without some radically new principle it will at best
very slightly reduce randomness.

    Brian

What we lack is a forum to
evaluate the trade-offs.

Tony


_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf
