
RE: A follow up question

2003-04-23 19:59:15


--On Wednesday, 23 April, 2003 12:56 -0700 Tony Hain <alh-ietf@tndh.net> wrote:

> John C Klensin wrote:
>> ... Maybe that to get there means that we need to revisit,
>> not just ICMP as Daniel suggests, but the "no TCPng"
>> decision... I don't know.

> Maybe that is the path out.

>> But, if we can figure out a way to clean this up and make it
>> work well, we could easily end up with a stronger
>> justification for deploying IPv6 than anything (at least
>> anything real) we have had to date.

> Depends on where you are in the world, and how much IPv4
> address space you are sitting on. I am aware of organizations
> in the US today that can't get enough IPv4 address space fast
> enough by current ARIN policy to meet a viable business plan
> for ramp up rate.

I want to leave most of that discussion for some ARIN-related list, as David Conrad suggested. But, for the record, I was trying to dodge the question of whether address space exhaustion, by itself, was "strong enough" or "sufficient" reason to get IPv6 deployed and, especially, to avoid the rathole-populating discussion about alternatives. Since I am on record as believing that, by some reasonable measures, we have already run out of address space and IPv6 is not yet widely deployed, I would suggest that we have a solid example that it has not [yet?] proven to be sufficient.

>> ... I'm just committed to a network and implementation model
>> in which they are kept below me in the stack. Consequently,
>> the only thing I want to need to know about an IP address is
>> how to pick it up from one source, carry it around opaquely,
>> and ultimately pass it to something else or use it in an
>> opaque way in a protocol transaction.

> But you just said you wanted to keep it below you in the
> stack. You can't have it both ways... You either get to keep
> it below you by passing around name objects, or you are
> taking on the responsibility of getting it right because you
> are passing around topology information.

I strongly suspect that we are using words in ways that result in miscommunication, because I think there is a third alternative. That alternative may fall into the category you describe as "passing around name objects", or maybe it doesn't.

From my perspective --which is definitely the applications end of the stack looking down-- the problem is all about data abstraction. I'm going to try to switch terminology here in the hope that it will help clarify things. Suppose the application gets a well-constructed handle, and passes that handle "around" --either to other applications or to different interfaces to the stack/network. I don't see the application as dealing in topology information by doing that, even if the handle ultimately identifies information that is deeply topology-sensitive, and/or if the process that produces the handle uses topological information or probes the network to develop such information. The important thing from that perspective is that the application deals with the handle as an opaque object, without trying to open it up and evaluate its meaning or semantics (with regard to the network or otherwise).
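
To make that abstraction concrete, here is a minimal sketch in Python of the handle discipline I have in mind. Every name in it (EndpointHandle, stack_make_handle, stack_use_handle) and the address it returns are hypothetical, invented purely for illustration; the point is only that application code carries the handle without ever opening it.

    from dataclasses import dataclass, field
    from typing import Any

    @dataclass(frozen=True)
    class EndpointHandle:
        # Whatever the lower layers bound this handle to (an
        # address, an interface, topology data) lives here;
        # application code never reads it.
        _binding: Any = field(repr=False)

    def stack_make_handle(criteria):
        # Stand-in for the handle-producing layer: it might look up
        # names, consult topology, or probe the network -- the
        # application neither knows nor cares which.
        return EndpointHandle(("192.0.2.1", 80))  # TEST-NET example

    def stack_use_handle(handle):
        # Only the stack opens the handle back up and derives
        # semantics from it.
        host, port = handle._binding
        return "connected to %s:%d" % (host, port)

    # The application's entire view: get a handle under some
    # criteria, carry it opaquely, hand it back.
    h = stack_make_handle({"reachability": "global"})
    print(stack_use_handle(h))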

From that point of view, having an application go to the stack and say "given these criteria, give me a handle on the interface or address I should use" or "given this DNS name, and these criteria, give me a handle on the address I should use" does not involve the application being topology-aware, nor does it imply the application doing something evil because it picks an address (or virtual interface, or...) without understanding the topology. The handle is opaque and an abstraction -- as far as the application is concerned, it doesn't have any semantics, regardless of whether lower layers of the stack can derive semantics from it or from whatever they (but not the application) can figure out it is bound to.

If the application is calling on, e.g., TCP, then it might pass some criteria to TCP, or might not, and TCP might either pass enhanced criteria to the handle-generating layer or generate the handle itself. Again, the application doesn't care -- it just needs to deal with an abstraction of the criteria in application terms. I think that, from the application standpoint, it makes little difference whether the criteria involve routing, speed, reliability, or any of several potential QoS or security issues.
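
Python's standard library even offers the "generate the handle itself" variant: socket.create_connection() accepts nothing but a (name, port) pair and performs the lookup, selection, and iteration entirely below the caller, so no address object ever reaches application code. A short sketch, again with a placeholder host:

    import socket

    # The application hands the transport layer a name; resolution,
    # address selection, and retry all happen below it.
    with socket.create_connection(("www.example.com", 80),
                                  timeout=5) as conn:
        conn.sendall(b"HEAD / HTTP/1.1\r\n"
                     b"Host: www.example.com\r\n"
                     b"Connection: close\r\n\r\n")
        status = conn.recv(200).decode("latin-1", "replace")
        print(status.splitlines()[0])  # e.g. "HTTP/1.1 200 OK"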

I'll also be the first to admit that we have handled the set of issues this implies rather badly with IPv4 (and they certainly are not new issues). We have gotten away with it because the number of _hosts_ that need to understand, and choose between, multiple addresses for themselves has been extremely small, at least since routers were introduced into the network architecture. Because it could mostly be seen as a router problem, applications in IPv4 mostly punted (for better or worse). Also, since CIDR went in, we basically haven't given a damn when users who can't qualify for PI space can't multihome (at least without resort to complex kludges). IPv6, with its multiple-address-per-host architecture, turns the problem from a mostly theoretical one into an acute one. It does so whether or not SL addresses are among the addresses on a host that has public (global or otherwise) addresses as well.

Finally, from the standpoint of that hypothetical application, the syntax and construction of that opaque handle are irrelevant. In particular, it makes zero difference whether it is a name-string that lower layers look up in a dictionary or a symbol table, or [the name of] some sort of class object, or, e.g., an IP address. The only property the application cares about is that it is a handle, obtained from somewhere, that satisfies criteria it specified or that were specified for it.

>> The challenge to those of you who are for, or against, SL at
>> the IP level is to justify it in a context in which
>> applications really don't need to know anything about them,
>> or other address scope/reachability/routability issues,
>> except through the addressing-independent abstractions that
>> we can agree on.

> I don't care if it is TCP-ng, or something else between IP &
> the app that takes care of figuring out the topology
> difference, but signaling by itself won't solve the problem
> of literal referrals. If the app is going to insist on
> passing around topology information, it has to make sure
> that matches the topology being used.

Well, at least intuitively, I agree with you about signaling. In most of the application contexts I can think of, signaling has more to do with the efficiency of communicating "you lose, maybe you should try something else" from lower in the stack to an application. What is important, IMO, is not losing, rather than telling me more efficiently that I have. But (and I am not speaking for Daniel here) it also isn't the point. The notion of an application-opaque handle, whose semantics are invisible to the application but defined by stack layers that have access to whatever information is really needed, is, by contrast, exactly the point.

>> If the applications don't need to know, and can function in
>> a multiple-address-per-host environment without --in the
>> application-- having to determine which one to use by some
>> type of iteration process, then you need to justify
>> specialized addresses only in terms of their requirements
>> lower in the stack. If the applications do need to know,
>> then the complexity costs appear to be high enough to
>> present an insurmountable barrier.

> The current IPv4 network already requires this of
> applications; the developers simply choose to ignore reality.
> There is nothing different in a unique prefix for local use,
> other than the ability for the app (stack) that chooses to
> look to figure out that some pairings won't work. If the app
> does as it currently will and passes an out-of-scope address,
> the application will fail. This is not a new requirement; it
> is simply exposing the fact that applications have been
> ignoring reality for a long time now.

See above. Applications have gotten away with ignoring that reality because the occurrences have been infrequent -- with one important class of exceptions, we have had few machines with multiple addresses (and multiple interfaces) since routers became common in the network. The exceptions have been larger web hosts which support pre-1.1 versions of HTTP and hence use one address per DNS name. But, for them, hosts opening connections to them use the address that matches the DNS name (hence no need to make choices or understand topological information) and the servers either use "address matching the one on which the connection was opened" or an arbitrary address on the interface --since the interface is the same, and its connectivity is the same, it really makes no difference. If the reason for multiple addresses per host (or interface) in IPv6 is to support different scopes (or connectivity, or multihoming arrangements), then it does make a difference, and will make a difference for a significant number of hosts. And _that_ implies a new requirement.
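
The server half of that observation maps directly onto the sockets API: a server bound to the wildcard address serves every address on the host, and getsockname() on an accepted connection reports which local address the client actually reached -- the stack chose it, replies use it, and no topology knowledge enters the application. A minimal sketch (the port is arbitrary, and accept() waits for one client):

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 8080))   # wildcard: any local address will do
    srv.listen(1)

    conn, peer = srv.accept()
    # "Address matching the one on which the connection was
    # opened": the stack already picked it; the server only reads
    # it back.
    local = conn.getsockname()
    print("client %s reached us at %s; replies use that address"
          % (peer, local))
    conn.close()
    srv.close()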

> If there are other ways to mitigate the issue, I am all for
> developing them. My primary issue is that there are a variety
> of things people want to use SL for and removing an existing
> mechanism without appropriate replacements for all of them
> first is an irresponsible act.

Tony, there is a difference in perspective here. I'm going to try to identify and explain it, with the understanding that it has little to do with any of the discussion above, which I think is far more important. From your point of view, as I understand it, this feature has been in IPv6 for a long time; no one questioned it, or the problem(s) it apparently solved, for equally long; some implementations were designed to take advantage of it; and now people are coming along and proposing to remove it without a clearly-better solution to address those solved problems. From the viewpoint of many or most of the applications-oriented folks on this list, and maybe some others, the applications implications of the SL idea (and maybe the "multiple addresses" idea more generally) are just now becoming clear. What is becoming clear to them is that the costs in complexity, in data abstraction failures, and in damage to the applications view of the hourglass are very severe and, indeed, that, from the standpoint of how applications are constructed, SL would never have worked. From that perspective, when you argue that applications are already doing bad things, the applications folks respond by saying "but IPv6 should make it better, or at least not make it worse".

Those differences lead to discussions about religion and ideology, which get us nowhere (although they generate a lot of list traffic). It is clear to me (from my particular narrow perspective) that our getting to this point at this time indicates a failure on the part of several generations of IESG members (probably including me). It also identifies a series of issues in how we review things cross-area (or don't do that successfully) and reinforces my perception that shifting the responsibility for defining standards away from a multiple-perspective IESG and onto WGs with much narrower perspectives would be a really bad idea.

But, unfortunate though it may be, we have gotten here. We differentiate between Proposed and Draft standards precisely to make it easier to take something out that doesn't work, or --more to the point in this case-- doesn't appear to do the job it was designed to do at a pain level no worse than what was anticipated. I don't think essentially procedural arguments about how much proof is required to take something out get us anywhere at this stage. Instead, we should be concentrating on the real character of the problem we are trying to solve and ways in which it can be solved without doing violence to whatever architectural principles we can agree upon.

     john





