
Re: A follow up question

2003-04-23 21:25:59

      * We don't have a routing architecture that is
      independent of the structure of IP addresses.  I wish we
      did.  I think the fact that we don't is going to lead us
      into [other] very serious problems in the long term.
      But the fact remains that we don't have anything
      resembling an IP-address-independent routing fabric
      proposal on the table, certainly not one that is
      complete enough to be plausible.
      
      * We are seeing more and more requirements for
      multihoming of one type or another, whether you count
      community wireless schemes or other small-group or
      small-ISP models into the mix or not.

One implication of the first problem is that one can't solve the 
second one by giving everyone who wants to connect to more than 
one provider independently-routable PI space.  Now, if we don't 
come up with a solution to both of those problems, it doesn't 
make any difference what features are in IPv6, because IPv6 will 
fail, and it will fail because the Internet --at least the 
Internet as we know it-- will fail.  I think we even know what 
the alternative looks like, at least in general terms, with 
islands of walled gardens, each with independent address spaces, 
and connected by bilateral agreements being the most likely of 
those alternatives.  Those proposals have been around for years, 
we know what they look like, and they don't provide the 
edge-oriented, end-to-end, "dumb" network we all believe is the 
core strength of the Internet.

I don't doubt this, but it's fairly clear to me that having multiple
addresses per host and expecting hosts to make decisions about which
prefix to use is also highly undesirable - in the sense that it will
drastically limit the set of applications that can work well.  But yes,
we have more of a chance of things working if they're all using a
single shared address space, than we do with disjoint address spaces.

I also don't see how this problem is any different between IPv4 and
IPv6, except that we might eventually expect IPv6 to scale to more
prefixes than IPv4.  Yes, v6 hosts have the code to handle multiple
prefixes and addresses, but that doesn't mean that the apps can deal
reasonably with them.  And in my experience even the simplest apps fail
to deal reasonably with multiple prefixes.  (Trial and error, waiting
tens of seconds for a timeout before proceeding to the next address, is
not reasonable.)
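The trial-and-error failure mode above can be softened without any new network machinery: attempt connections to a host's candidate addresses concurrently, staggered by a fraction of a second, instead of waiting out a full timeout on each. A minimal sketch of the idea -- the dialers, timings, and function name are all hypothetical, not from any particular stack:

```python
import queue
import threading

def staggered_connect(dialers, stagger=0.25):
    """Race connection attempts across a host's candidate addresses.

    `dialers` is a list of zero-argument callables; each either returns
    a connection object or raises OSError.  Instead of waiting tens of
    seconds for one attempt to time out before trying the next address,
    we start the next attempt after a short stagger and take whichever
    succeeds first.  (Attempts that succeed after we've already
    returned are simply abandoned in this sketch.)
    """
    results = queue.Queue()

    def attempt(dial):
        try:
            results.put(("ok", dial()))
        except OSError as exc:
            results.put(("err", exc))

    started = finished = 0
    last_error = OSError("no addresses to try")
    while finished < len(dialers):
        if started < len(dialers):
            threading.Thread(target=attempt,
                             args=(dialers[started],), daemon=True).start()
            started += 1
        try:
            # Wait only `stagger` seconds while there are more addresses
            # left to launch; block indefinitely once all are in flight.
            timeout = stagger if started < len(dialers) else None
            kind, value = results.get(timeout=timeout)
        except queue.Empty:
            continue
        finished += 1
        if kind == "ok":
            return value
        last_error = value
    raise last_error
```

The point is that the worst-case delay seen by the user is roughly (number of dead addresses) x (stagger) rather than (number of dead addresses) x (full TCP timeout).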

But we need to distinguish between what we can do in the short term and
what we need to do in the long term.  In the short term our choices are
either to allow some form of provider-independent addressing (with all
that this means for the scalability of routing) or to expect hosts and
apps to deal with multiple prefixes.  Which choice is better is not
obvious - it depends on whether you want to optimize IPv6 for easy
deployment or eventual scalability.  Your answer to this question
probably depends on what layer you live at.

In the longer term we need to develop a way of doing a mapping from
location names or endpoint names to paths through the network, one that
preserves the uniqueness and structure of IPv6 addresses.

At least so far, "give every host one address and let the 
network (the routers?) sort out the routing" seems to me to 
reproduce the current IPv4 situation.  That doesn't seem to me 
to be a solution because it:

      * Assumes that everyone who has a LAN that they want to
      connect to multiple providers will have routable PI
      space.  Or
      
      * Assumes that everyone who has a LAN that they want to
      connect to multiple providers will be able to do so by
      punching holes in CIDR blocks.

To my routing-naive mind, two mechanisms seem worth investigating:

- One is a mapping at the BGP layer from what I'll call path-independent
prefixes to path-specific prefixes.  For example, BGP could propagate
information of the form "route prefix X as if it were prefix Y". 
Routers wouldn't have to compute routes for prefix X, they could just
compute routes for Y and then use the same entry for X as for Y in the
forwarding table. This lets the number of prefixes grow to some degree
without directly increasing the complexity of routing computations, as
long as the set of distinct destination networks as viewed from the core
did not get larger. In effect it would allow non-adjacent prefixes to be
aggregated for the purpose of route table computations.  This wouldn't
come for free, but it would let us move, to some degree, toward a
routing structure for the core that was independent of the IP addresses
used by hosts - in effect, hosts and routing would use different
portions of the IP space, though some amount of overlap could
be tolerated.
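To make the aliasing idea concrete, here is a toy forwarding table in which routes are computed only for "anchor" prefixes and an aliased prefix simply borrows its anchor's entry at lookup time. All prefixes and next-hop names are made up (IPv6 documentation addresses), and a real FIB would use a trie rather than a linear scan; this only illustrates the lookup path:

```python
import ipaddress

class AliasingFIB:
    """Toy forwarding table for the 'route X as if it were Y' idea.

    Routes are computed only for anchor prefixes; an aliased prefix
    adds a forwarding-table entry but no new routing computation --
    it reuses whatever entry its anchor resolved to.
    """
    def __init__(self):
        self.anchors = {}   # anchor prefix  -> next hop
        self.aliases = {}   # aliased prefix -> anchor prefix

    def add_route(self, prefix, next_hop):
        self.anchors[ipaddress.ip_network(prefix)] = next_hop

    def add_alias(self, prefix, anchor):
        self.aliases[ipaddress.ip_network(prefix)] = ipaddress.ip_network(anchor)

    def lookup(self, addr):
        addr = ipaddress.ip_address(addr)
        best = None
        # Longest-prefix match over both anchors and aliases.
        for prefix in list(self.anchors) + list(self.aliases):
            if addr in prefix and (best is None or prefix.prefixlen > best.prefixlen):
                best = prefix
        if best is None:
            return None
        anchor = self.aliases.get(best, best)  # follow the alias, if any
        return self.anchors.get(anchor)

# Hypothetical example: one computed route, one non-adjacent aliased prefix.
fib = AliasingFIB()
fib.add_route("2001:db8:1::/48", "peer-A")
fib.add_alias("2001:db8:ffff::/48", "2001:db8:1::/48")
print(fib.lookup("2001:db8:ffff::1"))   # same entry as the anchor: peer-A
```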

- A mobileIP-like scheme for border routers, that could issue redirects
for prefixes as well as individual IP addresses.  So packets for a
particular destination prefix would initially go to "home network
agents" that could live anywhere on the network (including a database
distributed across multiple exchange points in diverse locations, all of
which advertise reachability via BGP) but which (in addition to
forwarding initial packets) would return redirects for entire prefixes
that caused subsequent packets to be sent to the "care-of" prefixes.
These redirects would be intercepted and acted on by routers near the
source, not (or not only) by the sending host. Yes, it would still
require tunneling, or MPLS, or some other way of telling the network 
"route this traffic to here rather than to the destination  address in
the IP packet; that's for use after the packet exits our network".  But
there are lots of ways to do this, and perhaps they'll become more
widely available over time. 
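A toy model of that redirect flow, just to pin down the moving parts: the "home agent" here is a plain callable standing in for the distributed mapping database, the tunneling step is reduced to a tag on the return value, and every address is a hypothetical documentation prefix:

```python
import ipaddress

class RedirectingRouter:
    """Border-router sketch of the scheme above.

    The first packet toward an unknown prefix is forwarded via the
    destination's home agent; the agent answers with a redirect that
    covers the whole prefix, which we cache so that later packets are
    tunneled straight to the care-of address.
    """
    def __init__(self, home_agent):
        # home_agent: callable mapping a destination address to a
        # (covering prefix, care-of address) pair -- a stand-in for the
        # mapping database distributed across the exchange points.
        self.home_agent = home_agent
        self.cache = {}                # prefix -> care-of address

    def forward(self, dst):
        addr = ipaddress.ip_address(dst)
        for prefix, care_of in self.cache.items():
            if addr in prefix:
                # A cached redirect already covers this destination.
                return ("tunnel", care_of)
        # First packet for this prefix goes the long way, and the
        # agent's redirect is cached for everything in the prefix.
        prefix, care_of = self.home_agent(dst)
        self.cache[ipaddress.ip_network(prefix)] = care_of
        return ("via-home-agent", care_of)

def agent(dst):
    return ("2001:db8:aa::/48", "2001:db8:77::1")   # hypothetical mapping

r = RedirectingRouter(agent)
print(r.forward("2001:db8:aa::5"))   # first packet: via the home agent
print(r.forward("2001:db8:aa::9"))   # different host, same prefix: tunneled
```

Note that one redirect suppresses further home-agent traffic for an entire prefix, which is what makes the scheme cheaper than per-host mobile IP.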

And for those who think these ideas are naive, keep in mind that I did
say they're not near-term solutions, and that in the near term we're
stuck with less sophisticated mechanisms.  But we need to insist on
global addressing now so that we have a way to move to better routing
systems once we do solve the problems.

I don't see why it's unrealistic to have a global address
space that encompasses all hosts connected to any network
that is connected to the Internet, and to expect
applications to use that global address space.  I agree that
we cannot expect complete or near-complete connectivity.

But, if we have a one-address-per-host global address space 
--which I think is what you are asking for-- and don't have 
near-complete connectivity, then we are either going to see 
rising times to set up the typical TCP connection and a lot of 
UDP attempts falling off the edge, or we are going to need a lot 
more PI space.  I don't see realistic alternatives for the 
reasons discussed above.  Do you?  And, if so, what are they?

I doubt we'll see a one-size-fits-all kind of solution any time soon. 
Some sites will be able to make do with multiple advertised prefixes,
perhaps with clever DNS tricks to try to minimize the connection
failures (yeech).  Other sites will be able to tolerate a single
provider.  Others will demand PI space and perhaps get it. Maybe they'll
have to buy an ISP, or masquerade as one, to accomplish this. I've
always thought the distinction between an ISP and a multi-homed,
multi-sited customer was fairly arbitrary.    I could imagine an
arrangement where limited-time leases for use of some amount of what was
effectively PI space (even if it came out of providers' allocations, it
could be routed to multiple providers) were auctioned off to the highest
bidders, like radio spectrum. Or maybe the problems will create a demand
for more stable/clueful/expensive IP routing service.  

Or maybe the public Internet will just degrade to the point that it's
not useful for very much.  I'm not discounting that possibility, but
neither do I want to dwell on it.

In my mind, the reason we need feedback from the network to
applications about when applications try to violate
administrative prohibitions on use of the network is not so
that applications can try to route the messages through other
paths (though it does enable that to some limited degree) but
so that applications can provide accurate indications to their
users as to why they're failing.

Keith, while I'm very much in favor of accurate feedback, 
messages with the ultimate semantics of "you lose" have a long 
and well-documented history of being unpopular with users. 

Granted.  But there is no way that the reliability of internet
applications can improve unless the network can provide such feedback
to *somebody*.  Now maybe it shouldn't always be the user, but the
application, or the user's network administrator, or the application
author/vendor.  To some degree it depends on the application.  But
nobody can do anything about the problem without some indication that
the problem exists and some way to distinguish one kind of problem
from another.

I'd rather focus on getting the network to work better and more often,
while reporting  accurately on failures if there is no alternative but
to fail.

No question there, but if the failure is due to an administrative
prohibition (which is where I thought this started), it's hard
to see what it means to have the network work better and more often -
unless it's to discourage use of packet filtering.  :)

I don't see why TCP and/or UDP stacks can't provide such
interfaces to applications, even though of course this means
that there will need to be other interfaces (invisible to
applications) between TCP and IP and UDP and IP to pass that
information upstream.

They can't provide it because we don't have a model, or set of 
abstractions, for providing it.  If it is important, maybe we 
had better get started.
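As a strawman for what such a set of abstractions might look like: the stack classifies path failures -- administratively prohibited vs. no route vs. timeout, roughly mirroring distinctions ICMP can already express -- and hands the application a structured event it can report to the user, the administrator, or a log. Every name and field below is invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum, auto

class FailureClass(Enum):
    """Coarse classification of why a path to a destination failed."""
    ADMIN_PROHIBITED = auto()    # e.g. an ICMP "administratively prohibited" signal
    NO_ROUTE = auto()            # destination unreachable / no route
    TIMEOUT = auto()             # no signal at all; a local timer fired
    CONNECTION_REFUSED = auto()  # the far host answered, and said no

@dataclass
class PathFailure:
    """What the stack would hand upstream instead of a bare errno."""
    dst: str             # destination the application was trying to reach
    failure: FailureClass
    reporter: str        # who generated the signal: a router, the far host,
                         # or a local timer

def explain(event: PathFailure) -> str:
    """Turn a network-layer failure signal into something an application
    can show its user or log for the administrator."""
    reasons = {
        FailureClass.ADMIN_PROHIBITED: "blocked by policy at {r}",
        FailureClass.NO_ROUTE: "no route to {d}",
        FailureClass.TIMEOUT: "no answer from {d}",
        FailureClass.CONNECTION_REFUSED: "{d} refused the connection",
    }
    return reasons[event.failure].format(d=event.dst, r=event.reporter)
```

The key design point is that an administrative prohibition is distinguishable from congestion or a dead host, and the event says *who* reported it -- the two things an application needs before it can tell anybody anything useful.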

Has anyone made a list of important core problems that need to be
worked on?

Keith


