Daniel Senie writes:
> Separately, if there is genuine interest in addressing the
> scoping problem, I suggest that be addressed separately.
I guess it depends on what you define as "the scoping problem". If the
problem is that the network can impose apparently arbitrary restrictions
on whether hosts can communicate between point A and point B (for
arbitrary A and B), I don't see that as an architectural problem, nor
as something that applications should be responsible for solving.

And if you define the problem as hosts being expected to have
simultaneous access to multiple scopes -- each reached via a different
source address, network interface, or tunnel -- and to provide
connectivity to each of those scopes for applications on the host, I
still have to question whether giving each scope a separate address
prefix is a constructive way to implement that.

So is there a definition of "the scoping problem" that you have in
mind?
John Klensin writes:
> Tony, I think this pretty well reflects where I've found myself
> going on this, fwiw. From an applications standpoint, the
> scoping issues, and what is, or is not, "topology information",
> just obscure the problem. "The problem", from an applications
> point of view, is that, if we are going to stop pretending that
> the address space is completely flat and global --and I agree
> that to do otherwise is unrealistic at this stage-- then we need
> a model by which the application can specify to the stack what
> it needs/expects, and the stack can tell the application
> whatever the latter really needs to know about what is
> happening.
I don't see why it's unrealistic to have a global address space that
encompasses all hosts connected to any network that is connected to the
Internet, and to expect applications to use that global address space.
I agree that we cannot expect complete or near-complete connectivity.
In my mind, the reason we need feedback from the network to
applications when they run afoul of administrative prohibitions on use
of the network is not so that applications can try to route messages
through other paths (though it does enable that to a limited degree),
but so that applications can give their users accurate indications of
why they are failing.
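A minimal sketch of what that user-facing reporting might look like: a helper that maps the errno a failed connect() surfaces into an explanation the user can act on. The function name and message strings are illustrative, not from any existing API; the point is that the application can only report what the network actually tells it.

```python
import errno

# Hypothetical helper: translate the errno from a failed connect()
# into an explanation a user can act on.  The messages are
# illustrative only.
def explain_connect_failure(err: int) -> str:
    reasons = {
        errno.ECONNREFUSED: "the remote host is up, but nothing is listening on that port",
        errno.EHOSTUNREACH: "no route to the host (possibly an administrative prohibition)",
        errno.ENETUNREACH: "no route to that network",
        errno.EACCES: "communication administratively prohibited",
        errno.ETIMEDOUT: "no response; the path may be filtered silently",
    }
    return reasons.get(err, "failed for an unreported reason (errno %d)" % err)
```

Note that on common stacks an ICMP "communication administratively prohibited" message is typically surfaced as EHOSTUNREACH or EACCES, so without richer feedback from the network the application often cannot distinguish a routing failure from a policy one -- which is exactly the gap being discussed.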
> ICMP, as now defined and used, won't help with the
> problem for a reason much more pervasive than the one Daniel and
> others have identified: most of our current applications have no
> interaction with either ICMP or IP -- they call on TCP or UDP
> functions and don't know about the internet layer. The Internet
> layer doesn't have any path (by "signalling" or otherwise) to
> pass information back to the apps.
I don't see why TCP and/or UDP stacks can't provide such interfaces to
applications, even though this of course means there will need to be
other interfaces (invisible to applications) between TCP and IP, and
between UDP and IP, to pass that information upstream.
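One way that upstream path could look, sketched as a hypothetical structure the internet layer fills in and the transport layer hands to the application alongside the error. All of the names here are invented for illustration; the ICMP type/code values themselves are real (type 3 is destination unreachable; codes 9, 10, and 13 indicate administrative filtering).

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical report the internet layer could attach to a transport-level
# failure, so the application learns *why* it failed, not just *that* it
# did.  All names here are invented for illustration.
@dataclass
class ReachabilityReport:
    icmp_type: int                   # e.g. 3 = destination unreachable
    icmp_code: int                   # e.g. 13 = communication administratively prohibited
    reporting_router: Optional[str]  # who generated the ICMP message, if known

    def admin_prohibited(self) -> bool:
        # ICMP type 3, codes 9/10/13 mean administrative filtering
        return self.icmp_type == 3 and self.icmp_code in (9, 10, 13)

# The transport layer would hand this up with the error; the app could then
# tell its user "blocked by policy at <router>" rather than "connection failed".
report = ReachabilityReport(icmp_type=3, icmp_code=13, reporting_router="192.0.2.1")
print(report.admin_prohibited())  # → True
```

Linux already does something in this spirit for UDP: the IP_RECVERR socket option queues ICMP errors for delivery to the application via MSG_ERRQUEUE, so the plumbing is not unprecedented.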
> So, I would suggest that we really do have a problem here.
> Unless we have a realistic proposal for a routing fabric that is
> not dependent on the addresses used, the solution-set almost
> certainly includes being able to use multiple addresses per
> host, with different semantics associated with those addresses.
The solution set to what? I don't see what problem this is attempting
to solve, but I do see lots of problems that it creates.
> As the competence level of the average ISP declines (which seems
> to have been a clear trend for the last several years),
> multihoming (in some form) needs to increase as the only
> satisfactory alternative... and it better not be only for the
> rich or the huge. The idea of a single, exclusively-global,
> flat address space has been history for years; we clearly aren't
> going to get back there as long as different providers charge
> differently for different paths (independent of any of the
> enterprise localization, firewall, RIR-granted PI space, or
> other issues).
Radical idea: we need to get away from the notion that an IP address is
a path specifier, and back to the notion that an IP address is a
location identifier, an interface identifier, or a (possibly virtual)
host identifier.
> But, unless we are prepared to discard the model of a layered
> stack architecture, or accept our applications becoming as (or
> more) complex and knowledgeable about network architectures as
> switches get in the PSTN, then we should be looking at stack
> abstractions that permit applications to express their needs,
> and get information back, without deducing network topology,
> transport or mobility economics, etc.
I don't think these things need to be provided in the "stack" at all,
for the same reason that apps don't want to cope with them: neither the
host nor the app has the information needed to make those decisions.
And lacking a uniform address space, there's no basis for lower layers
to make the decisions. (A potentially-changing set of IP addresses
doesn't strike me as a good endpoint identifier for any layer.) So we
need a uniform location name space to be used by higher layers, and we
need the network to make the path-selection decisions. Now maybe those
decisions need to be made at border routers rather than core routers --
so that the finer details of path selection are handled in the
periphery, where scaling is better, and the core routers just forward
traffic along pre-determined paths -- but you don't want to push the
routing information all the way to the hosts.
Keith