
Re: Last Call: 'Procedures for protocol extensions and variations' to BCP (draft-carpenter-protocol-extensions)

2006-09-06 11:46:51
On 9/6/06, Keith Moore <moore(_at_)cs(_dot_)utk(_dot_)edu> wrote:

HTTP proxies do exist but the only reason that they can work
effectively is that the vast majority of web resources are accessible
through a common medium - namely the public IPv4 Internet and TCP.

Right. But that is a natural occurrence, not the result of
bureaucratic demands for coordination. 

It doesn't matter how the conditions came about.  It's rather
arbitrary to say that a protocol cannot require TCP but it can require
message headers to be in a particular format.  The protocol spec
should outline a set of sufficient conditions for interoperability AND
market acceptance.  Providing too many ways of running a protocol
harms interoperability - not just by creating configurations in which
two protocol engines can't talk to one another, but also by
introducing more states in which things can fail.

SMTP doesn't /require/ TCP
either, as I'm sure you know.

No.  But exchanging email between administrative domains without explicit
coordination generally _does_ require both TCP and IPv4.

Of course it's useful to be able to run SMTP, HTTP, etc. over other
transports for special purposes.  But a distinction needs to be made
between "SMTP specification" and "how to send Internet email", and
between "HTTP specification" and "how to make web resources available
to the public and how to access them".  In both cases the latter
imposes more necessary conditions than the former.

I don't see a correlation between protocol effectiveness and
concrete transport protocol dependencies. I also don't see a
correlation between mandated "universal" interoperability and
protocol effectiveness.

It depends on the protocol and the use cases for that protocol.  I've
found it useful in the past, for instance, to run the lpd protocol
over DECnet.  But by doing so I wasn't trying to give our VMS users
access to printers across the globe, just to our local printers.

Application protocols don't need to specify an entire protocol stack
to be successful.

That doesn't hold as a general statement.  Sometimes they do, because
there are too many possible choices and in the absence of a complete
specification there will never be a critical mass sufficient to
facilitate interoperability.  Sometimes they don't have to explicitly
specify the entire stack, but because there are obvious defaults for
the unspecified portions of the stack, interoperability happens
anyway.  

Some applications (e.g. ssh) are inherently two-party and/or tend to
be used only within local networks, and for these cases all that
really matters is that both ends can agree on a protocol stack and
there is a signal path between them that can run that protocol.
However, the utility of many applications depends on there being
a large number of servers that arbitrary clients can talk to, and in
those cases too many degrees of freedom regarding which stack to use
degrades interoperability.


Keith

_______________________________________________
Ietf mailing list
Ietf(_at_)ietf(_dot_)org
https://www1.ietf.org/mailman/listinfo/ietf
