
Re: Solving the right problems ...

2003-08-24 14:51:10
[trying to keep this as brief as possible]

In the ongoing saga about topology reality vs. application perception of
stability, it occurs to me we are not working on the right problem. 

Agree.

We all agree that applications should not be aware of topology. At the same
time,  application developers insist on the right to pass around incomplete
topology information. 

Let's rephrase this in a less loaded way.  Application developers need
reasonably stable and reliable endpoint identifiers.  At present, the IP
address is the closest available approximation to such an identifier. 
Therefore app developers don't want to give up the ability to pass these
things around and expect them to work from other locations in the network,
until such time as a suitable replacement is available and widely deployed.

(Note that even after such a replacement exists, there will still be some apps
that need to pass around locators - network management tools in particular.
But they will have less need to insist that those locators be usable from
other network locations.)
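
To make the referral problem concrete, here's a rough sketch in Python
(the addresses, port, and function names are all invented for
illustration) of an app passing a raw locator to a third party:

    import socket

    # Hypothetical referral: peer A tells peer C how to reach peer B by
    # passing B's IP address literally in the application payload.
    locator_from_a = "10.0.0.5"   # B's address as A sees it -- private,
                                  # only meaningful inside A and B's site

    def reach_referred_peer(locator, port):
        """Return True if the referred locator works from *this* vantage."""
        try:
            with socket.create_connection((locator, port), timeout=3):
                return True
        except OSError:
            return False

    # From C, elsewhere in the network, the referral fails (or worse,
    # reaches some unrelated host): the locator named a place in the
    # topology, not the endpoint B.
    print(reach_referred_peer(locator_from_a, 8080))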

In any case, what applications need is a stable reference to other
participants in the app. Yet we know that current and projected reality says
that topology is neither stable nor consistently reachable. 

Agree.

Either way, the
network as seen by the transport layer is not, and will never be, stable or
consistently reachable from everywhere. Given that, the direct interface
with the transport layer creates a failure mode for an application expecting
stability. 

Where this leads us is to the sacred invariant, and the need to solve the
problem in the right place. I suggest it is time for the IETF to realize
that the protocols are no longer being used in the network of 1985, so it is
time to insert a formal stabilization layer between application & transport.

Disagree.  The layer needs to be between the current network and transport
layers.  If the layer were between transport and applications, many of the
services provided by transport would end up having to be re-implemented in
higher layers.  It would also force routing decisions to be made by layers
much more distant from the network layer (which maintains the reachability and
state information) than the transport layer is.
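
To illustrate the placement I'm arguing for, here's a rough sketch
(Python; every name in it is hypothetical) of a shim below transport,
where transport state is keyed by a stable endpoint identifier (EID)
and only the shim ever touches locators:

    # EID -> current locator, learned and updated by the shim, not by apps
    current_locator = {}

    def shim_send(eid, segment):
        """Transport hands the shim an EID; the shim picks the locator."""
        locator = current_locator[eid]   # decision stays near the network layer
        ip_send(locator, segment)        # stand-in for the real network send

    def shim_locator_changed(eid, new_locator):
        """The peer renumbered or moved; transport state is untouched."""
        current_locator[eid] = new_locator

    def ip_send(locator, segment):
        print(f"datagram to {locator}: {len(segment)} bytes")

    # The transport connection to "ep-42" survives a locator change:
    current_locator["ep-42"] = "192.0.2.10"
    shim_send("ep-42", b"hello")
    shim_locator_changed("ep-42", "198.51.100.7")
    shim_send("ep-42", b"same connection, new path")

Nothing above the shim has to be re-implemented, because transport never
observes the locator change.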

The next place that leads us is to a name space. At this point I see no
reason to avoid using the FQDN structure, but acknowledge that the DNS as
currently deployed is not even close to being up to the task. 

Disagree.  A big part of the reason DNS is not up to the task is that its name
structure is a very poor fit for this purpose.  There is no need to have these
names be human-meaningful or to have them delegated along the lines that DNS
is delegated.  There is already a lot of semantic loading of DNS names (many
would say they're too overloaded already) which would strain their ability to
be used in this new layer.  There is a large benefit to having these names be
terse and fixed-length (though this is perhaps not as important or useful as
it is for IP addresses, and a variable-length name for endpoints would provide
some additional flexibility).  There is also a benefit to being able to reuse
existing transport protocols in terms of these new names, though that can be
finessed.  Finally, using 128-bit identifiers that were API- and
protocol-compatible with IPv6 locators (but distinguishable from them) might
be very useful in smoothly transitioning existing stacks and apps.
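
To sketch that last point (Python; the prefix byte below is invented
for illustration, not any allocated value), a 128-bit identifier can
ride in the IPv6 address slots of existing APIs yet remain
distinguishable from locators:

    import hashlib
    import ipaddress

    EID_PREFIX = 0x7E   # invented marker byte; a real scheme would need
                        # a properly allocated, non-routable prefix

    def make_eid(pubkey):
        """Derive a terse, fixed-length (128-bit) identifier."""
        digest = hashlib.sha256(pubkey).digest()[:16]
        return ipaddress.IPv6Address(bytes([EID_PREFIX]) + digest[1:])

    def is_eid(addr):
        """Tell identifiers apart from ordinary IPv6 locators."""
        return addr.packed[0] == EID_PREFIX

    eid = make_eid(b"some endpoint's public key")
    print(eid, is_eid(eid))                               # an identifier
    print(is_eid(ipaddress.IPv6Address("2001:db8::1")))   # a locator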

Then there's the problem that DNS's protocol and replication models aren't
really a good fit either, and this hinders any effort to make it suitable
for identifier-to-locator mapping.

(Admittedly, if I thought that the new layer should go between transport and
apps then DNS-like names would make a tad more sense.)

Since many networks have usage policies established around the sacred
invariant, there will need to be some recommendations on how to apply those
policies to this new layer. We could even consider a protocol between this
layer and a policy entity that would aggregate applications into a policy
consistent with what the network can deliver for various transport
protocols. This distribution of policy awareness would have much better
scaling characteristics than per app signaling, as well as the ability to
locally group unrelated transports that any given app might be using.

Lots of work is needed in the policy area.  But being able to express policies
in terms of stable and reliable host identifiers, instead of expressing them
exclusively in terms of attachment points or links, would help a great deal.
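
For instance, a rule keyed by a stable identifier follows the host
wherever it attaches; a minimal sketch (Python, all names invented):

    # EID -> action, independent of where the host currently attaches
    policy = {
        "7e00::1234": "permit",
        "7e00::beef": "deny",
    }

    def admit(eid):
        """The same rule applies across renumbering or mobility."""
        return policy.get(eid, "deny") == "permit"

    print(admit("7e00::1234"))   # permitted at any attachment point
    print(admit("7e00::cafe"))   # unknown identifiers fall to the default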

Bottom line is that we are spending lots of time and effort trying to
force-fit solutions into spaces where they don't work well, and end up creating
other problems. We are doing this simply to maintain the perception of
stability up to the application, over a sacred interface to a very unstable
network. Stability to the application is required, but forcing applications
to have direct interaction with the transport layer is not. Yes, we should
continue to allow that direct interaction, but we should be clear that there
are no guarantees of stability on that interface. If the application does
not want to take care of connections coming and going to the same device at
potentially different places, it needs to have an intermediate stabilization
layer available.

It appears that we agree on much of the above, especially if you leave out 
loaded words like "sacred".  It needs to be understood that the requirements
of apps are legitimate, even while admitting that those requirements stretch
existing network layer services beyond their capabilities.

This was a constructive post.  Thanks.

Keith