
Re: chicago IETF IPv6 connectivity

2007-07-18 06:22:17
On 17-jul-2007, at 5:48, Keith Moore wrote:

what is the problem you have with the "multiple-addresses-per-host" problem?
        I never had any problem, even with IPv4.

It's the fact that in IPv6, if you don't select the best source and
destination address to talk to a node, you might either fail to get a
connection (and have to wait a long time for a timeout), or you might
get a connection that performs much worse than some other
source/destination address pair you could have used. This was true for
IPv4 nodes with multiple interfaces too, but in IPv4 it was mostly
solved by not having multiple active network interfaces. (People tried
it, it didn't work well, so they stopped doing it.) In IPv6 the normal
situation for hosts on multihomed networks is to have multiple
addresses, and there are several pressures to add even more addresses,
such as mobility, or using ULAs (unique local addresses) for local apps
on enterprise networks so that they won't be affected by renumbering.
So address selection becomes more important in IPv6 than it was in IPv4.
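
To make the failure mode concrete, here is a minimal sketch (my own
illustration, not anything from Keith's mail) of the brute-force
workaround a host can apply: try each candidate destination address with
a short timeout, so one broken source/destination pair doesn't cost a
full TCP timeout. The two-second timeout is an arbitrary choice.

import socket

def connect_first_working(host, port, timeout=2.0):
    # Try every address getaddrinfo() returns; keep the first that works.
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)   # fail fast instead of a long TCP timeout
        try:
            s.connect(addr)
            s.settimeout(None)  # back to blocking for normal use
            return s
        except OSError as e:
            last_err = e
            s.close()
    raise last_err or OSError("no usable address")

Note that this only avoids the non-working pairs; it says nothing about
which of the working pairs performs best.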

Moving topology decisions out of the hands of routing protocols and the
people who build networks, and into hosts, creates a lot of
complications. BGP as we know it is pretty bad at selecting the best
path, but it's fairly good at avoiding the bad ones, so it works well
enough in practice. When you give a host a number of paths (i.e.,
source/destination address combinations) to choose from, it's doable
(but not trivial) to avoid the non-working paths; knowing that path X
has a much higher capacity than path Y, even though path Y does work,
is very hard, especially if we want to look further than the one easily
measurable quality aspect: the RTT.
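
Even that one aspect takes effort to measure. A hypothetical sketch
(the pair list, port, and timeout are all made up for illustration) that
approximates the RTT of each source/destination pair by its TCP connect
time:

import socket, time

def rank_pairs_by_rtt(pairs, port, timeout=2.0):
    # pairs: list of (source_ip, dest_ip) strings. Returns (rtt, src, dst)
    # tuples, fastest first; broken pairs simply don't appear at all.
    results = []
    for src, dst in pairs:
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.bind((src, 0))          # pin the source address
            t0 = time.monotonic()
            s.connect((dst, port))    # 3-way handshake takes ~1 RTT
            results.append((time.monotonic() - t0, src, dst))
        except OSError:
            pass                      # broken pair: not ranked
        finally:
            s.close()
    return sorted(results)

A ranking like this still tells the host nothing about capacity, which
is exactly the hard part.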

But this entire problem stems from the notion that all of a
communication must flow over a single path. If we let packets belonging
to a single session flow over multiple paths, and then use standard
windowing techniques so that paths that can accept more data get more
data and paths that can't don't, we actually get to use the full
aggregate capacity of the whole set of paths, and it's no longer
necessary to invest complexity in discovery functions.

BitTorrent already works like this to some degree by exchanging
information over a large number of paths concurrently. I was also once
involved in implementing a system where we did this "by hand," using
several different TCP sessions. At first this didn't work, because the
program round-robined and blocked writing to the slow sessions, so
10 + 100 Mbps = 20 Mbps; after fixing this, 10 + 100 Mbps = 110 Mbps.
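
The fix amounted to something like the following sketch (a Python
reconstruction for illustration, not our actual code, with sequencing
and reassembly on the receiving side left out): write to whichever
sessions can currently accept data instead of round-robining blocking
writes.

import select

def send_over_paths(socks, data, chunk=64 * 1024):
    # Spread one buffer over several already-connected TCP sockets.
    for s in socks:
        s.setblocking(False)
    offset = 0
    while offset < len(data):
        _, writable, _ = select.select([], socks, [])
        for s in writable:
            if offset >= len(data):
                break
            try:
                # A fast path becomes writable more often, so it
                # naturally carries more of the data; a slow path can
                # never stall the others.
                offset += s.send(data[offset:offset + chunk])
            except BlockingIOError:
                pass              # kernel buffer filled up mid-send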

Making this work well for applications that care about quality aspects
other than aggregate throughput would be more difficult, but probably
not as difficult as deducing QoS information from arbitrary sets of
source/destination addresses.

_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf
