
Re: Death of the Internet - details at 11

2004-01-28 15:51:50
On 28-jan-04, at 18:39, John C Klensin wrote:

The reality is that there is very little that we do on the Internet today that requires connection persistence when a link goes bad (or when "using more than one IP address"). If a connection goes down, email retries, file transfer connections are reconnected and the file (or the balance of the file, if checkpointing is in use) is transferred again, URLs are refreshed, telnet and tunnel connections are recreated over other paths, and so on. It might be claimed that our applications, and our human work habits, are designed to work at least moderately well when running over a TCP that is vulnerable to dropped physical connections.

This assumes that when address A fails, address B keeps working. That is only true when routing is symmetric or the multiaddressed endpoint is able to detect the failure. And applications need to retry with other addresses. They typically do this poorly, if at all, in IPv4, and only moderately well in IPv6.
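To make the retry-with-other-addresses point concrete, here is a minimal sketch of the address-fallback loop most applications lack. The function name and timeout are mine, and this illustrates the idea rather than any particular stack's behavior:

    import socket

    def connect_any(host, port, timeout=5.0):
        """Try each of the host's addresses (IPv6 and IPv4) in turn and
        return the first TCP connection that succeeds, so a dead
        address A does not keep us from reaching the peer via B."""
        last_err = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            try:
                s.connect(sockaddr)
                return s
            except OSError as err:
                last_err = err
                s.close()
        raise last_err if last_err else OSError("no addresses for %s" % host)

Nothing here is exotic; the point is that very few applications bother to iterate past the first address.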

Would it be good to have a TCP, or TCP-equivalent, that did not have that vulnerability, i.e., "could preserve a connection when using more than one address"? Sure, if the cost was not too high on normal operations and we could actually get it.

There are several proof-of-concept multiaddress TCPs.
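For flavor, a rough application-layer analogue of what those transports do: identify the session by a token rather than by the address pair, so the endpoints can reattach over a different address after a path failure. The RESUME verb and wire format below are invented for this sketch, not taken from any of the actual proposals:

    import socket
    import uuid

    class ResumableSession:
        # The session is named by a token that survives reconnects,
        # not by the (address, port) pair of any one TCP connection.
        def __init__(self, peer_addresses, port):
            self.peer_addresses = list(peer_addresses)
            self.port = port
            self.token = uuid.uuid4().hex
            self.sock = None

        def attach(self):
            # Try each known peer address until one path works.
            for addr in self.peer_addresses:
                try:
                    s = socket.create_connection((addr, self.port), timeout=5)
                    s.sendall(b"RESUME " + self.token.encode() + b"\n")
                    self.sock = s
                    return
                except OSError:
                    continue
            raise OSError("all peer addresses failed")

        def send(self, data):
            if self.sock is None:
                self.attach()
            try:
                self.sock.sendall(data)
            except OSError:
                self.attach()          # the path died: reattach, retry once
                self.sock.sendall(data)

The multiaddress TCP proposals do essentially this one layer down, so that the applications never see the address change at all.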

[sorry for the long quote:]

By contrast, the problem that I find of greatest concern is the one of ensuring that, if I'm communicating with you, and one or the other of us has multiple connections available, and the connection path between us (using one address each) disappears or goes bad, we can efficiently switch to a different combination... even if all open TCP connections drop and have to be reestablished in the interim. For _that_ problem, we had a reasonably effective IPv4 solution (at least for those who could afford it) for many years -- all one needed was multiple interfaces on the relevant equipment (the hosts early on and the router later) with, of course, a different connection and address on each interface. But, when we imposed CIDR, and the address-allocation restrictions that went with it, it became impossible for someone to get the PI space that is required to operate a LAN behind such an arrangement (at least without having a NAT associated with the relevant router) unless one was running a _very_ large network.

??? Why would you need PI space to be able to give hosts more than one address and use those successfully?

And if you had the PI space, why would you bother? Contrary to some reports, multihoming using independent address space and links to more than one ISP works fairly well: failover times are almost always shorter than TCP or user timeouts.
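A back-of-the-envelope check of that timing claim, using illustrative RTO values rather than any specific implementation's defaults: under exponential retransmission backoff, an established TCP connection rides out several minutes of total outage, while BGP failover typically completes in tens of seconds.

    # Rough sketch: how long an established TCP connection survives an
    # outage under doubling retransmission timeouts. The initial RTO,
    # cap, and retry count are illustrative, not a specific stack's.
    def survivable_outage(initial_rto=1.0, max_rto=64.0, retries=8):
        total, rto = 0.0, initial_rto
        for _ in range(retries):
            total += rto                # wait before the next retransmit
            rto = min(rto * 2, max_rto)
        return total

    print(survivable_outage())          # 191.0 seconds -- just over 3 minutes

As long as routing reconverges inside that window, the connections never notice.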

        (i) if any of the options turn out to require an
        approach similar to the one that continues to work for
        big enterprises with PI space in IPv4, then we are going
        to need (lots) more address space.  And

More than what?

However, quite a number of the proposals do not
require any significant infrastructure change.  This bodes
well for rapid deployment, once they make it through the
standards process.

On the other hand, getting the IETF to produce standards track
specifications out of this large pack of candidates could take
another 10 years...

Looking at the rate at which the IETF is coming up with ways to automatically determine IPv6 DNS resolver addresses, I can hardly disagree.

But most of the multi6 proposals have large parts in common with other proposals. It seems to me that all we have to do is combine the best parts. How hard can that be? (Famous last words.)

Yes. And it may speak to the IETF's sense of priorities that the efforts to which you refer are predominantly going into the much more complex and long-term problem, rather than the one that is presumably easier to solve and higher leverage.

Which would be?