
Re: where the indirection layer belongs

2003-09-02 07:09:14
Dear Keith Moore,

Thank you for your reply. It seems that we are without a forum though, since what we are discussing is, according to Tony Hain, not in line with the IPv6 working group charter. Maybe we really do need a new working group for this issue. Should we propose the formation of one?

Keith Moore wrote:
These "capabilities" should be regarded as bugs which are being fixed.

In particular, the fact that IPv6 hosts can, in ordinary circumstances,
have multiple addresses has led people to believe that it's reasonable
to expect IPv6 apps to deal with an arbitrary number of addresses per
host, some of which work to send traffic to the destination and some of
which don't, and to have this behavior vary from one source location to
another.  First, nobody has ever explained how these hosts can reliably
determine which addresses will work.  Neither source address
selection nor multi-faced DNS are satisfactory answers.  Second, this
robs apps of the best endpoint identifier they have.


I agree that the "bug" in this picture is that nodes can have multiple addresses, some of which work and some of which don't in different circumstances. An address that is advertised for a node really should be valid for communication with that node under all circumstances.
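
To make concrete what "deal with an arbitrary number of addresses per host" ends up meaning in practice, here is a rough Python sketch (purely illustrative; the function name and its structure are mine, not taken from any standard) of the probing loop every application is effectively being asked to carry:

    import socket

    def connect_to_host(hostname, port, timeout=5.0):
        # The application cannot know in advance which of the peer's
        # advertised addresses is reachable from here, so it has to try
        # them one after another and treat failure as "move on".
        last_error = None
        for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
                hostname, port, type=socket.SOCK_STREAM):
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            try:
                sock.connect(sockaddr)
                return sock                 # first address that works wins
            except OSError as exc:
                last_error = exc
                sock.close()                # this one did not work; try the next
        raise last_error or OSError("no usable address for " + hostname)

Note that the loop can only learn which address works by trying it, which is exactly the problem: the failure is discovered per connection attempt, not from anything that was advertised.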

I disagree with you if you are implying that the node should not be allowed to have multiple addresses on an interface, and I never agreed with the notion that the IP address should double as a node's identifier, or as the identifier of anything other than the interface with which it is associated. If the IP address as the identifier of the endpoint is the reality with which we must live, then I can live with that, but don't ask me to consider it the better arrangement.


No, it's not worthwhile.  Any kind of routing needs to happen below the
transport layer rather than above it.  That's not to say that you can't
make something work above the transport layer, but that to do so you
have to re-implement routing, acknowledgements, buffering and
retransmissions, duplicate suppression, and windowing in this new layer
when transport protocols already provide it.


I never said anything about forcing the application to talk directly to the network layer, as you seem to imply in your second sentence. It might have come out wrong, but what I was trying to say was this: for applications that need it, what is wrong with there being a standard adaptation or stabilisation service, interface or protocol (choose your favourite name) between the application and the transport layer, as Tony Hain suggests?

Besides, even if you mean that the presence of such a stabilisation layer (to use Mr. Hain's nomenclature) would require the implementation of routing, acknowledgements, buffering and so on, you are not necessarily routing, acknowledging or retransmitting the same data that the transport is, are you? You might be doing that for higher-level objects, where the transmission of any one of them might have required the establishment, use and teardown of one or more transport connections. Would you find me guilty of excessive sophistry if I argued that if that is what the application needs, then the application should be able to implement it?
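
To illustrate the sort of thing I mean, and only as a sketch under my own assumptions rather than a proposal for what such a layer's interface should be, here is a toy Python rendering of a stabilisation layer that accepts whole higher-level objects from the application and is free to open, lose and re-open transport connections underneath:

    import socket

    class StabilisedChannel:
        # Toy "stabilisation layer": the application submits whole objects;
        # the layer owns the transport connection and quietly re-establishes
        # it (possibly to a different address of the same peer) if it breaks.

        def __init__(self, hostname, port, retries=3):
            self.hostname, self.port, self.retries = hostname, port, retries
            self.sock = None

        def _connect(self):
            # Any working address of the peer will do.
            for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
                    self.hostname, self.port, type=socket.SOCK_STREAM):
                sock = socket.socket(family, socktype, proto)
                try:
                    sock.connect(sockaddr)
                    return sock
                except OSError:
                    sock.close()
            raise OSError("peer unreachable on all advertised addresses")

        def send_object(self, payload):
            # Length-prefix the object so the receiver knows where it ends,
            # and resend the whole object on a fresh connection if the old
            # one dies part-way through.
            frame = len(payload).to_bytes(4, "big") + payload
            for _attempt in range(self.retries):
                try:
                    if self.sock is None:
                        self.sock = self._connect()
                    self.sock.sendall(frame)
                    return
                except OSError:
                    if self.sock is not None:
                        self.sock.close()
                    self.sock = None    # the connection broke; retry on a new one
            raise OSError("object not delivered after %d attempts" % self.retries)

The retransmission here is of the object, not of transport segments, which is the distinction I was trying to draw above.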


Good question.  My best answer so far is: stable enough so that the
vast majority of applications don't have to implement additional logic
to allow them to survive broken TCP/SCTP/etc. connections, or (to put it
another way) stable enough so that failures due to address/prefix
changes are not a significant source of failure for most applications
(as compared to, say, uncorrectable network failures and host failures).

Am I to presume that your categorisation of uncorrectable network failures and host failures does not include the possibility of one "home" of a multihomed host going down while the host remains reachable through one of its other "homes"? As far as I can tell, that case is not yet well addressed.


IMHO, apps should be able to assume that an advertised address-host
binding is valid for a minimum of a week.  This is a minimum - it
should be longer whenever possible.  (however there's no requirement to
maintain addresses longer than the nets will be accessible anyway -
i.e., you don't expect the addresses for the ietf conference net to
remain valid after the net is unplugged...but they shouldn't be reused
within a week either.)


I have no real objection to the "address-host binding" being valid for a minimum of a week, or for any duration greater than two node-to-node round-trip times. But an address-to-host binding isn't really that, is it? By that I mean that the real and effective binding is to an interface, not to the host. If I may quote RFC 1883, it defines an address as follows:

"   address     - an IPv6-layer identifier for an interface or a set of
                 interfaces."

While we have regarded the node and its interface as one and the same, they really aren't, even though we can get away with treating them so most of the time. My point is that we haven't even considered the other half of the problem as far as the application is concerned. We also need a higher-level object which we can associate with the node itself, or with the application endpoint, and with which we can associate an identifier. Such an association will need to remain valid at least for the duration of the application's run-time, I believe.
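
A minimal sketch of the kind of association I mean, with hypothetical names of my own choosing: a stable identifier for the peer node, bound to a set of interface addresses that may change underneath it while the identifier itself remains valid for the application's run-time:

    class Endpoint:
        # A stable identifier for the peer node, plus whatever interface
        # addresses it currently advertises.  The addresses may come and go;
        # the identifier is what the application holds on to.

        def __init__(self, endpoint_id):
            self.endpoint_id = endpoint_id      # valid for the app's run-time
            self.addresses = set()              # current interface addresses

        def add_address(self, addr):
            self.addresses.add(addr)

        def remove_address(self, addr):
            self.addresses.discard(addr)        # the identifier is unaffected

    registry = {}

    def endpoint_for(endpoint_id):
        # Hand back the same Endpoint object for the same identifier,
        # however many times its addresses have changed in the meantime.
        return registry.setdefault(endpoint_id, Endpoint(endpoint_id))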


And the second question is what should be the context of that
identifier's validity?


The identifier should be unique within the entire Internet.  In some
cases (e.g. small networks without connectivity to the Internet core)
"very likely to be unique within the Internet" would probably
suffice.  That doesn't mean that you can send to any point from any
other point, because there will still be access controls.  But if an
address is advertised for a host, and you have permission to send to
that host, you should be able to use that address to send to that host.


If you accept the statement I made above, the minimum context of that association's validity should be the collection of nodes that are currently participating in that application's protocol. I then have no fundamental disagreement with your assertion that such an identifier should be valid within the entire Internet.
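
Purely as an illustration of "very likely to be unique within the Internet" without any central allocation, something as simple as the following would do; the function and its optional derivation from a public key are my own invention, not a proposal:

    import hashlib, os

    def self_assigned_endpoint_id(public_key_bytes=None):
        # Illustration only: a 128-bit value that is statistically unique
        # across the whole Internet without any registry.  Deriving it from
        # a public key instead of pure randomness would also let the owner
        # prove that the identifier is really theirs.
        material = public_key_bytes if public_key_bytes else os.urandom(16)
        return hashlib.sha256(material).hexdigest()[:32]    # 128 bits, hex-encoded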


The first one I explained above in slightly more detail.  The second one
should be obvious.  If an address becomes invalid because of a topology
change somewhere distant in the network, how is a layer above layer 4
going to know about it?  It doesn't have access to routing protocol
updates - those happen at layer 3 and aren't normally propagated to
hosts anyway.  When you think about it, you probably don't want hosts to
have to listen for information about distant state changes in the
network - that would place a tremendous burden on small hosts and nets
with low-bandwidth connections.

The second one becomes obvious only if you restrict the definition of an endpoint to the interface. If instead the endpoint is taken to be the process or object that is the final recipient of the data forwarded by the IP and transport layers, then it is no longer so obvious. Sometimes I wonder whether we aren't lumping together concerns that we ought to be treating separately.

I don't want *application processes* to need to listen for information about distant state changes in the network, but I do want my application processes to be able to adapt or recover when something happens that affects their communication with a peer *process* while neither of the cooperating processes has indicated any intention to stop participating in that communication.
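
As things stand, the only way a process learns that the path has gone is from the transport itself. Here is a small illustrative Python fragment (the keepalive option names are Linux-specific and the threshold values arbitrary) of the little an application can actually do today:

    import socket

    def make_failure_aware(sock):
        # An application cannot hear routing updates; its only signal that
        # the path has gone away is the transport failing.  Enabling TCP
        # keepalives at least bounds how long a dead path goes unnoticed.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)   # seconds idle before probing
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before error
        return sock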

Yours sincerely,
Robert Honore.
