
Re: [narten@us.ibm.com: PI addressing in IPv6 advances in ARIN]

2006-04-21 07:52:01
On 21-apr-2006, at 15:47, Brian E Carpenter wrote:

> If we abolish that myth and look at the problem we are left with
> an answer where BGP passing regional aggregates is sufficient.

I'm sorry, I don't think I've ever seen a convincing argument for how
such aggregates could come to pass in the real world, where
inter-regional bandwidth is partitioned at the link level or dark
fibre level. There just isn't any forcing function by which
mathematically close prefixes will become topologically close,
because nothing forces multiple providers to share long-distance routes.
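
To illustrate the purely numerical side of this, here is a minimal
sketch using Python's standard ipaddress module (the prefixes are
made up for illustration):

import ipaddress

# Two mathematically adjacent /33s collapse into a single /32 aggregate...
prefixes = [ipaddress.ip_network("2001:db8::/33"),
            ipaddress.ip_network("2001:db8:8000::/33")]
print(list(ipaddress.collapse_addresses(prefixes)))
# -> [IPv6Network('2001:db8::/32')]

# ...but nothing in that computation knows whether the two /33s are
# announced from the same city or from opposite sides of the planet.
# The aggregate is only safe to announce when topology happens to
# match the math, and that is exactly the forcing function we lack.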

Obviously it would be tremendously helpful if ISP A handled region X, ISP B region Y, and ISP C region Z; then it would just be a matter of dumping the traffic for a given region on the ISP in question. But for various reasons this will never work, if only because the whole point is that multihomers have more than one ISP, and any of their connections may be down at any given time.

Let me try out a new analogy. There are many languages in the world, and most international businesses have to work in more than one of them. Wouldn't it suck if, in a business with customers in 25 countries speaking 20 languages, EVERY office had to have people who speak EVERY language? Fortunately, although there is no fixed relationship between language and geography, in practice the correlation is strong enough that the office in Sweden can handle all the Swedish-speaking customers and the offices in Portugal and Brazil the Portuguese-speaking customers.

Back to networking: send the packets to the place where all the more specifics are known. If that place is close to where those more specifics are actually used, this works out quite well. If the more specifics are used randomly all over the place, the technique adds detours, which is suboptimal.
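
To make the forwarding side concrete, here is a minimal
longest-prefix-match sketch, again in Python, with hypothetical
prefixes and next-hop labels:

import ipaddress

# A distant router carries only the regional aggregate; routers near
# the region also carry the more specifics.
distant_rib = {ipaddress.ip_network("2001:db8::/32"): "toward-region"}
regional_rib = {ipaddress.ip_network("2001:db8::/32"): "local-agg",
                ipaddress.ip_network("2001:db8:8000::/33"): "customer-A"}

def lookup(rib, dst):
    """Return the next hop for the longest matching prefix, or None."""
    addr = ipaddress.ip_address(dst)
    matches = [n for n in rib if addr in n]
    return rib[max(matches, key=lambda n: n.prefixlen)] if matches else None

dst = "2001:db8:8000::1"
print(lookup(distant_rib, dst))   # toward-region
print(lookup(regional_rib, dst))  # customer-A

The distant router is happy forwarding on the aggregate alone; the
detour only appears when customer-A is actually attached far from the
region the aggregate points to.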

> The point that Scott was making is that there are proposals for
> non-random assignments which could be carried as explicits now and
> aggregated later.

I understand his point very well and I'm even in favour of it, because
statistically it can only help aggregation and certainly not damage it.
But I think that without some radically new principle it will at best
very slightly reduce randomness.

I guess I'll work on my simulations...

_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf
