
Solving the right problems ...

2003-08-24 13:31:57
In the ongoing saga about topology reality vs. application perception of
stability, it occurs to me that we are not working on the right problem. In
short, we have established a sacred invariant at the application / transport
interface, and the demands on either side of that interface are the root of
the conflict.

Mobile IP and the multi6 DHT work are attempts to mitigate it through
sleight of hand at the IP layer, while SCTP attempts to mask the topology
reality in the transport layer. (These are probably not the only examples,
but they are the ones that come to mind this morning.) Yet none of these
really does the job in all cases.

We all agree that applications should not be aware of topology. At the same
time, application developers insist on the right to pass around incomplete
topology information. There are arguments that the IP address is overloaded
as an endpoint name. I don't particularly buy that, because the real
endpoint is either a protocol or a specific port of a protocol. From one
perspective, when applications use addresses in referrals, they are
specifying that routing through a particular point in the topology will
reach the desired endpoint named 'transport/port-id'. But that is a
semantics and perception discussion which doesn't help reach the goal.
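
To make the referral problem concrete, here is a rough sketch (the host
names and port are invented for illustration) contrasting a referral that
freezes the peer's address at creation time with one that carries a name
and defers resolution to use time:

    import socket

    # Hypothetical illustration only: names survive renumbering,
    # literal addresses embedded in a referral do not.

    def referral_by_address(peer_host, port=5060):
        # Capture the peer's address *now* and embed it in the referral.
        # If the peer later attaches elsewhere, this referral is stale.
        return {"endpoint": socket.gethostbyname(peer_host), "port": port}

    def referral_by_name(peer_host, port=5060):
        # Defer resolution; the referral stays valid for as long as the
        # name service tracks the peer's current attachment point.
        return {"endpoint": peer_host, "port": port}

    def follow(referral):
        # Resolution happens when the referral is used, not when made.
        return socket.create_connection(
            (referral["endpoint"], referral["port"]))

The first form is exactly the "routing through a particular point in the
topology" semantics described above.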

In any case, what applications need is a stable reference to the other
participants in the app. Yet current and projected reality is that topology
is neither stable nor consistently reachable, whether due to technology
issues in the underlying links or to simple policy. Either way, the network
as seen by the transport layer is not, and never will be, stable or
consistently reachable from everywhere. Given that, the direct interface
with the transport layer creates a failure mode for an application expecting
stability.

Where this leads us is to the sacred invariant, and the need to solve the
problem in the right place. I suggest it is time for the IETF to realize
that the protocols are no longer being used in the network of 1985, so it is
time to insert a formal stabilization layer between application & transport.

Such a layer would be responsible for managing intermittent connectivity
states and varying attachment points of the transport layers below.
Applications that interact with this layer would be insulated from the
inconsistencies experienced at the transport layer. It would also be
reasonable for this layer to manage multiple simultaneous transport
interactions so that the application perceives a single data path to its
peer. With appropriate trust between the stack and the network policy, this
would even simplify applying QoS markings to parallel, related data
sets.
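
As a rough sketch of what the application-facing side of such a layer might
look like (the class name, methods, and retry policy are all my own
assumptions, not a proposal):

    import socket

    class StableChannel:
        # One logical data path to a named peer, re-attached as needed
        # over whatever transport connections are currently reachable.

        def __init__(self, peer_name, port):
            self.peer_name = peer_name  # a stable name, never a raw address
            self.port = port
            self.sock = None

        def _attach(self):
            # Re-resolve the peer's *current* attachment points and try
            # each in turn; the application never sees this churn.
            for family, type_, proto, _, addr in socket.getaddrinfo(
                    self.peer_name, self.port, type=socket.SOCK_STREAM):
                try:
                    s = socket.socket(family, type_, proto)
                    s.connect(addr)
                    self.sock = s
                    return
                except OSError:
                    continue
            raise ConnectionError("no attachment point reachable")

        def send(self, data):
            # Mask transport breakage: on failure, re-attach and retry
            # once. A real layer would also resynchronize the stream.
            if self.sock is None:
                self._attach()
            try:
                self.sock.sendall(data)
            except OSError:
                self._attach()
                self.sock.sendall(data)

An application written against send() never touches the addresses
underneath, which is exactly the insulation argued for above.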

The next place this leads us is to a name space. At this point I see no
reason to avoid using the FQDN structure, but I acknowledge that the DNS as
currently deployed is not even close to being up to the task. The protocols
are not so much the issue as the deployment and operations model, which is
focused on a limited number of nodes operated by a small community of
gurus, where the expectation is that any changes occur on the order of
several days or longer. What we ultimately need in a name service to support
the suggested layer is the capability for every consumer device with
electrons in it to automatically and dynamically register its current
attachment information with rapid global convergence (that doesn't mean
sub-second; more along the lines of a cell phone that powers up away from
its home). As there are multiple trust boundary issues involved (because
not every device will be subscribed to a service), making this scale will
require pushing the database distribution out to smaller pockets. Making it
reliable for Joe-sixpack will probably require that part of the
infrastructure exist on his side of the interconnect & trust boundary from
any other networks. Automating the attachment of a massive number of small
dataset servers will probably require something with better scaling
characteristics than the current DNSSEC deployment model.
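
For flavor, today's closest analogue is an RFC 2136 dynamic update. A
sketch using the dnspython library follows; the zone, key material, and
server address are placeholders, and the point is only that a device can
replace its own attachment record when it moves:

    import dns.query
    import dns.tsigkeyring
    import dns.update

    # Placeholder TSIG key; a real deployment needs real trust material.
    keyring = dns.tsigkeyring.from_text(
        {"device-key.": "c2VjcmV0LWJhc2U2NC1tYXRlcmlhbA=="})

    def register_attachment(device, address):
        # Replace, not add: the old attachment point is no longer valid.
        update = dns.update.Update("example.net", keyring=keyring)
        update.replace(device, 60, "A", address)  # short TTL for mobility
        dns.query.tcp(update, "192.0.2.53")       # placeholder server

    register_attachment("fridge", "203.0.113.17")

What the sketch leaves out is the hard part: per-device trust, delegation
out to those smaller pockets, and convergence at consumer scale.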

Since many networks have usage policies established around the sacred
invariant, there will need to be some recommendations on how to apply those
policies to this new layer. We could even consider a protocol between this
layer and a policy entity that would aggregate applications into a policy
consistent with what the network can deliver for the various transport
protocols. This distribution of policy awareness would have much better
scaling characteristics than per-app signaling, as well as the ability to
locally group the unrelated transports that any given app might be using.
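
A toy sketch of that aggregation step (the class names and structure are
invented; DSCP 46 and 10 are the standard EF and AF11 code points):

    # The policy entity groups an app's transports into classes the
    # network has agreed to deliver, so marking is one local decision
    # rather than per-flow signaling across the network.

    POLICY_CLASSES = {
        "interactive": 46,  # EF
        "bulk": 10,         # AF11
        "default": 0,
    }

    class PolicyAgent:
        def __init__(self):
            self.assignments = {}  # (app, transport) -> class name

        def classify(self, app, transport, hint="default"):
            cls = hint if hint in POLICY_CLASSES else "default"
            self.assignments[(app, transport)] = cls
            return POLICY_CLASSES[cls]  # DSCP value to mark with

    agent = PolicyAgent()
    dscp = agent.classify("video-call", "sctp:5004", hint="interactive")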

The bottom line is that we are spending lots of time and effort trying to
force-fit solutions into spaces where they don't work well, and we end up
creating other problems. We are doing this simply to maintain the perception
of stability up to the application, over a sacred interface to a very
unstable network. Stability for the application is required, but forcing
applications to have direct interaction with the transport layer is not.
Yes, we should continue to allow that direct interaction, but we should be
clear that there are no guarantees of stability on that interface. If the
application does not want to deal with connections to the same device coming
and going at potentially different places in the topology, it needs to have
an intermediate stabilization layer available.

Tony