
RE: A follow up question

2003-04-23 12:31:19


--On Wednesday, 23 April, 2003 14:28 -0400 Daniel Senie <dts@senie.com> wrote:

At 01:12 PM 4/23/2003, you wrote:
So to clarify the question, do you believe the establishment
of a set of local use prefixes is the root cause of the
unsolved problems that applications developers are
complaining about?

There will always be addresses which are administratively
scoped, regardless of whether those addresses are from global
space, or from RFC1918 (or equivalent) private space. That
scoping will be instituted by policy at border routers,
firewalls, and even filtering mechanisms on hosts (which may
be outside the view of the applications, such as how IPTables
operates on *nix machines). The IPv6 mechanism that gives hosts
a priori knowledge of the "site local" block is flawed in that
it cannot truly provide the scoping information, since it
does not address the wider issues associated with
administrative address scoping. Indeed, the site local
mechanism reminds me in a way of Steve Bellovin's "evil" bit
in his RFC 3514, published on the first day of this month.

Yes!

There will remain a need and desire for private address space,
be that the assigned "site local" block (without the "special
treatment" in the stacks), RFC 1918 space, or a combination. I
think it would be useful to decouple the issue of the special
treatment of the Site Local address block from the religious
war over whether private address space, and the other
mechanisms sometimes associated with it, are beneficial. In
reviewing the recent discussion, it is clear the two are being
intertwined, which appears to be adding heat while producing no
light.

Also Yes.

Separately, if there is genuine interest in the scoping
problem, I suggest it be pursued as its own effort. A
proper effort might encompass mechanisms to deliver
indications to applications as to the scoping limitations
causing communications to be blocked, as well as wire protocol
issues to carry such signalling. In a broader sense, there is
a need to deal with signalling issues as well. There are
network operators, firewall vendors and network administrators
who've been taught ICMP packets are inherently dangerous and
must be filtered. The output of such an effort should span the
Internet Area, producing standards-track documents specifying
how to properly implement these mechanisms in hosts, routers
and firewalls, and the Operations Area, producing BCPs that
give network administrators and service providers guidance on
the operational aspects of those mechanisms.

Yes!!!!
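For concreteness: the one piece of such signalling that already exists on the wire is the ICMPv6 Destination Unreachable message with code 1, "communication with destination administratively prohibited" (RFC 2463). Below is a minimal sketch of a stack surfacing that indication to an application, assuming a Linux host where the IPV6_RECVERR error-queue extension is available. Everything here is illustrative, not a proposal:

    /* Sketch (Linux-specific, IPV6_RECVERR assumed available): drain the
     * socket's error queue and report the one ICMPv6 indication that
     * actually describes administrative blocking -- Destination
     * Unreachable (type 1), code 1, "administratively prohibited". */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>
    #include <linux/errqueue.h>  /* sock_extended_err, SO_EE_ORIGIN_ICMP6 */

    static void enable_icmp_errors(int fd)
    {
        int on = 1;
        /* Ask the stack to queue ICMPv6 errors for this socket instead
         * of collapsing them into a bare errno value. */
        setsockopt(fd, IPPROTO_IPV6, IPV6_RECVERR, &on, sizeof on);
    }

    static void report_icmp_errors(int fd)
    {
        char data[256], ctrl[512];
        struct iovec iov = { data, sizeof data };
        struct msghdr msg = { 0 };
        struct cmsghdr *c;

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof ctrl;

        if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
            return;                      /* nothing queued */

        for (c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == IPPROTO_IPV6 &&
                c->cmsg_type == IPV6_RECVERR) {
                struct sock_extended_err *ee =
                    (struct sock_extended_err *)CMSG_DATA(c);
                if (ee->ee_origin == SO_EE_ORIGIN_ICMP6 &&
                    ee->ee_type == 1 && ee->ee_code == 1)
                    fprintf(stderr,
                            "peer unreachable: administratively prohibited\n");
            }
        }
    }

Even where that works, it only helps the minority of applications prepared to dig through ancillary data -- which rather proves Daniel's point that the signalling problem is wider than any one mechanism.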

Tony, I think this pretty well reflects where I've found myself going on this, fwiw. From an applications standpoint, the scoping issues, and what is, or is not, "topology information", just obscure the problem. "The problem", from an applications point of view, is that, if we are going to stop pretending that the address space is completely flat and global --and I agree that to do otherwise is unrealistic at this stage-- then we need a model by which the application can specify to the stack what it needs/expects, and the stack can tell the application whatever the latter really needs to know about what is happening. Of course, to do that, the stack may need to be able to pass directions to the network or make inquiries and get information back. But, to the degree to which we believe in layered protocol stacks at all (hourglass or otherwise), the application should be totally insensitive to where the stack layer(s) with which it communicates get their information.
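To make the shape of that model concrete, here is a purely hypothetical sketch -- none of these names exist in any stack -- in which the application declares an abstract requirement and the stack, not the address bits, explains failures. The best a shim over today's sockets can do is guess from errno, which is exactly the problem:

    /* Purely hypothetical -- these names exist nowhere; they only
     * sketch an abstraction in which the application states needs and
     * receives explanations without parsing address bits itself. */
    #include <errno.h>

    typedef enum { REACH_LOCAL_OK, REACH_GLOBAL } reach_req_t;
    typedef enum {
        WHY_UNKNOWN, WHY_NO_ROUTE, WHY_ADMIN_PROHIBITED
    } reach_why_t;

    /* Application -> stack: "this socket's traffic must be able to
     * cross administrative boundaries."  No current stack can honor
     * this. */
    static int sock_require_reach(int fd, reach_req_t req)
    {
        (void)fd; (void)req;
        errno = ENOSYS;          /* hypothetical; unimplemented today */
        return -1;
    }

    /* Stack -> application: why did the last attempt fail?  A shim
     * over today's sockets can only guess from errno. */
    static reach_why_t sock_why_failed(int saved_errno)
    {
        switch (saved_errno) {
        case ENETUNREACH:
        case EHOSTUNREACH:
            return WHY_NO_ROUTE;     /* or filtered; the app can't tell */
        case EACCES:
            return WHY_ADMIN_PROHIBITED;  /* rarely surfaced in practice */
        default:
            return WHY_UNKNOWN;
        }
    }

The point of the sketch is only the shape: needs flow down as abstractions, explanations flow back up, and no address bits cross the boundary.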

Having the application try to figure this all out by breaking down IPv6 addresses (site local or otherwise) just isn't the right way to do it --at least without a really drastic rethinking of the architecture. That is partially because of the flip side of the arguments I've understood you to be making about scope: the application can't know enough in our current model to do the right thing, and knowing about how to handle one particular address prefix or format won't make a serious dent in the problem. ICMP, as now defined and used, won't help with the problem for a reason much more pervasive than the one Daniel and others have identified: most of our current applications have no interaction with either ICMP or IP -- they call on TCP or UDP functions and don't know about the internet layer. The internet layer doesn't have any path (by "signalling" or otherwise) to pass information back to the apps.
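As a trivial illustration (standard sockets, nothing exotic): an application that calls connect() gets back, at most, an errno value. Whether the failure was an ICMP administrative prohibition, a silent firewall drop, or a dead host has been collapsed away before the error ever reaches the application:

    /* What an application actually sees today: connect() either works
     * or fails with one of a handful of errno values.  The internet
     * layer's reasons never make it up the stack. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;

        if (fd < 0)
            return 1;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &sa.sin_addr); /* TEST-NET */

        if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0)
            /* EHOSTUNREACH? ETIMEDOUT? ECONNREFUSED?  The app cannot
             * distinguish a scope/policy block from any other failure. */
            fprintf(stderr, "connect: %s\n", strerror(errno));

        close(fd);
        return 0;
    }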

So, I would suggest that we really do have a problem here. Unless we have a realistic proposal for a routing fabric that is not dependent on the addresses used, the solution-set almost certainly includes being able to use multiple addresses per host, with different semantics associated with those addresses. As the competence level of the average ISP declines (which seems to have been a clear trend for the last several years), multihoming (in some form) needs to increase as the only satisfactory alternative... and it better not be only for the rich or the huge. The idea of a single, exclusively-global, flat address space has been history for years; we clearly aren't going to get back there as long as different providers charge differently for different paths (independent of any of the enterprise localization, firewall, RIR-granted PI space, or other issues).

But, unless we are prepared to discard the model of a layered stack architecture, or accept our applications becoming as complex and knowledgeable about network architectures as (or more so than) switches in the PSTN, then we should be looking at stack abstractions that permit applications to express their needs, and get information back, without deducing network topology, transport or mobility economics, etc. Maybe getting there means that we need to revisit, not just ICMP as Daniel suggests, but the "no TCPng" decision... I don't know. But, if we can figure out a way to clean this up and make it work well, we could easily end up with a stronger justification for deploying IPv6 than anything (at least anything real) we have had to date.

Where do SL and its relationship to applications fit into this? Oddly, I don't think they do, except in the very useful sense of having (finally) forced a broader range of people to think about the problem. From an application-writer/designer standpoint, I know I don't want to think about SL (nor do I want to think about RFC 1918 addresses or anything else that requires me to make decisions I don't have the needed information to make). I'm not opposed to those addresses existing, or in favor of them; I'm just committed to a network and implementation model in which they are kept below me in the stack. Consequently, the only thing I want to need to know about an IP address is how to pick it up from one source, carry it around opaquely, and ultimately pass it to something else or use it in an opaque way in a protocol transaction. The challenge to those of you who are for, or against, SL at the IP level is to justify it in a context in which applications really don't need to know anything about such addresses, or about other address scope/reachability/routability issues, except through the addressing-independent abstractions that we can agree on. If the applications don't need to know, and can function in a multiple-address-per-host environment without --in the application-- having to determine which one to use by some type of iteration process, then you need to justify specialized addresses only in terms of what they require lower in the stack. If the applications do need to know, then the complexity costs appear to be high enough to present an insurmountable barrier.
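For what it's worth, the opaque discipline I have in mind is already almost expressible with the standard getaddrinfo() interface: the application names a peer and a service, then hands each resulting sockaddr to connect() without ever looking inside it. What it cannot do is avoid the blind iteration, or learn why an attempt failed:

    /* Opaque address handling with standard getaddrinfo(): the
     * application never inspects the address bits -- v4, v6, or
     * site-local all look the same.  The iteration itself is the
     * part the stack should be doing for us. */
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int connect_by_name(const char *host, const char *service)
    {
        struct addrinfo hints, *res, *ai;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;    /* any address family, opaquely */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, service, &hints, &res) != 0)
            return -1;

        for (ai = res; ai != NULL; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
                break;              /* connected; never parsed the bits */
            close(fd);
            fd = -1;                /* why did that one fail?  No idea. */
        }
        freeaddrinfo(res);
        return fd;
    }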

 regards,
       john



