
On IETF policy for protocol registries

2016-01-18 16:08:14
Protocol registries play an important role in the Internet. If two
parties attempt to use the same code point to represent different
concepts, the protocol may break. Or worse, it may appear to have
succeeded when it has actually failed. Contrariwise, if two parties
are both capable of understanding a particular concept but use
different names for it, they can't interact even though they have all
the code they need.

While all protocol registries have the function of providing an
ontology [1] of unambiguous common terms, some registries have other
functions. These may include conserving a limited supply of scarce
code points and 'protecting the Internet'.

The management of protocol registries is something that the entire
IETF community has a stake in, not just the Working Group that
originally developed the protocol.

Registrations that require a published specification, expert review,
or an RFC certainly provide benefits in certain situations. But there
are definitely costs. On past occasions my requests for code point
assignment have in some cases taken more than a year and required me
to do quite a bit of work managing the application. And every process
that requires a published specification prevents that mechanism from
being used for proprietary protocols.

[As an aside here, I will point out that even if you think every
Internet protocol should be open as a matter of course, there are
often very good reasons to avoid premature publication. Even if the
ultimate objective is an Open Standard, the initial draft is likely to
be a proprietary proof of concept.]


It is my position that the degree of review required for an
assignment is something that should flow from the Internet
architecture, and in particular from the ideas behind the End-to-End
principle, rather than from the opinions of the people who happen to
have worked on a particular platform. When I write a Web Service with
an HTTP binding, I am using HTTP as a platform; that does not give the
HTTP WG the right to look over my shoulder and second-guess me, any
more than the TCP designers would have that right if I were using raw
TCP.

Rather than being an argument as to where certain functions should
occur, the End-to-End paper actually discusses the consequences of two
approaches to managing complexity. If you are designing a network that
has a single purpose that is fixed for all time, then putting the
complexity in the center of the network allows a lot of opportunity to
optimize for that application. The client end points can then be made
very simple. If, however, you want a network that is not limited to
just one function and is capable of adapting to many different
functions, then it is best to make the network as simple as possible
and keep the complexity at the endpoints.

So one consequence of the End-to-End principle, which I fully
endorse, is that there are legitimate reasons to have a high barrier
for code point assignments at the lower layers of the stack,
regardless of whether the number of code points is limited. In
particular, proposals for a new version of the Internet Protocol need
to be considered with very great care because they have the potential
to add complexity to the network core. Proposals for new transport or
routing protocols likewise demand scrutiny.

At the application end of the stack, the reverse principle should
apply, not as a consequence of the End-to-End principle but as a
direct consequence of the goals that the End-to-End principle was
meant to serve. The whole point of the Internet was to set people free
to develop new ways of using networks, to explore new ideas, and most
important of all, to have the possibility of failure. Because if you
never permit something that might fail, you will never permit anything
that might be important.

Regardless of whether you agree or disagree with my particular
argument, I think it is clear that this is a matter that the IETF as a
whole should decide, not merely the platform provider.


In recognition of these principles, the IETF decided to change the
designation of the Well Known Ports registry to First Come First
Served some time ago. I believe that the same designation should apply
to every application level registry unless there is a very specific
and fully documented reason not to.

In 24 years of participation in the IETF, I have frequently
encountered cases in which people have raised intestinal arguments as
to why something should not happen. In each and every one of those
cases, the 'gut feeling' they claimed has turned out to be completely
and utterly wrong.


At present we have a registry that is critical to Web Services but is
'specification required'. I would like to have this changed for the
reasons stated above and because the registry is functionally
redundant.

In the original Internet architecture, the DNS was used to identify
the address of a host, and the TCP or UDP port number was used to
identify the application protocol that a client was requesting the
host participate in.

RFC 2052 (1996) introduced the SRV record, which provides a mechanism
for locating Internet services. Instead of using a port number to
identify the protocol, the protocol name from the Well Known Services
registry is used. This has a number of important advantages, not least
being port conservation: it is no longer necessary to issue a new port
number for every new protocol. But equally important for my purposes,
the SRV record provides the fault tolerance and load balancing
capabilities of the MX record.
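
To make the mechanics concrete, here is a minimal sketch of an SRV
lookup in Python using the dnspython 2.x library. The _mmm service
name is the one I discuss below, and the priority/weight handling is
simplified to a plain sort; this is an illustration, not part of any
specification.

    import dns.resolver  # dnspython 2.x

    def discover_srv(service, domain):
        # Query _service._tcp.domain for SRV records and return
        # (priority, weight, port, target) tuples, best first.
        answers = dns.resolver.resolve(f"_{service}._tcp.{domain}", "SRV")
        records = [(r.priority, r.weight, r.port,
                    str(r.target).rstrip("."))
                   for r in answers]
        # Lower priority wins; within a priority, weight is meant to
        # bias load balancing (simplified here).
        return sorted(records)

    for priority, weight, port, host in discover_srv("mmm", "example.com"):
        print(priority, weight, port, host)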

Had the SRV record been defined in 1992, we would have used it for
HTTP. In fact, something of the sort was developed at NCSA, out of
extreme need, in the form of their round-robin DNS hack. Now that we
are doing Web Services, the use of an SRV-like record has obvious
benefits over plain A/AAAA.

Note that a Web Service is merely a protocol that happens to use HTTP
as its transport. Some Web Services are limited to information
retrieval, but many are not. In particular, the Web Services I design
typically use multiple layers of encryption and authentication; they
are not idempotent, and they have side effects such as causing robots
to move or data to be published.

There are two major advantages to using HTTP as the transport. One is
that the HTTP ports are the most likely to be open at the firewall.
The other is that the commonly used platforms provide an
infrastructure for managing multiple HTTP services on a single
machine. So, for example, in the .NET and .NET Core frameworks, a
program can register to receive and respond to requests sent to a
particular http:// prefix in the same way that it might register to
service a TCP port. Apache, nginx, and IIS provide similar
capabilities.
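
As a rough analogue of that prefix-registration model, here is a
minimal sketch in Python (standard library only; the mmm prefix and
the echo behavior are placeholders, and port 8080 stands in for 80):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PREFIX = "/.well-known/mmm/"  # the prefix this service claims

    class MMMHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Answer only requests under our prefix; a multi-service
            # host would dispatch each prefix to its own handler.
            if not self.path.startswith(PREFIX):
                self.send_error(404)
                return
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)  # echo, standing in for real work

    HTTPServer(("", 8080), MMMHandler).serve_forever()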

When using SRV discovery to locate an HTTP Web Service, a problem
arises: how does the client identify the Web Service Endpoint on the
destination host?

Patrik Falstrom proposed the URI record a while back as one way to do
this. That record specifies a URI rather than a domain name and port.
But it turns out that most hosting providers now know to support SRV,
while few support URI. Also, any attempt to use a DNS discovery
mechanism in the real world has to include a fallback to plain A/AAAA
and CNAME lookups, or access will be blocked from a non-negligible
number of network locations.
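
A minimal sketch of that fallback logic, again assuming dnspython 2.x
and the hypothetical mmm service, with port 80 as the default when no
SRV record can be retrieved:

    import dns.exception
    import dns.resolver

    def locate(service, domain, default_port=80):
        # Prefer SRV discovery, but fall back to the bare domain name
        # (resolved through ordinary A/AAAA and CNAME lookups) when
        # the SRV query fails, times out, or is filtered.
        try:
            answers = dns.resolver.resolve(
                f"_{service}._tcp.{domain}", "SRV")
            return [(str(r.target).rstrip("."), r.port)
                    for r in sorted(answers, key=lambda r: r.priority)]
        except dns.exception.DNSException:
            return [(domain, default_port)]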


RFC5785 specifies a registry for prefixes in the /.well-known/ space
of a HTTP server.

So, for example, I have registered mmm as the SRV prefix for the
Mathematical Mesh portal protocol. This is used to resolve
transactions that are bound to an account identifier in RFC 822 style
format, e.g. alice@example.com.

A service provider might advertise service on host1 and host2 with DNS
entries as follows:

_mmm._tcp.example.com  SRV 0 20 80 host1.example.com
_mmm._tcp.example.com  SRV 0 80 80 host2.example.com
mmm.example.com CNAME host1.example.com

It is natural for the client resolving alice@example.com to use the
following Web Service Endpoints:

http://host1.example.com/.well-known/mmm/
http://host2.example.com/.well-known/mmm/

In effect, we are providing the SRV prefix to the HTTP server in the
URI request line, in the same way that we use the Host: header to tell
the server which service is being accessed (example.com in either
case: following the precedent set for CNAME lookups, we give the
original DNS query name, not the internal DNS translations).
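
A sketch of a client request built this way, using the Python requests
library (host1.example.com and port 80 come from the SRV records
above; the JSON payload is purely hypothetical):

    import requests

    srv_target, port = "host1.example.com", 80  # from the SRV lookup
    service_domain = "example.com"              # original query name

    # Connect to the SRV target, but carry the SRV prefix in the
    # request URI and the original service domain in the Host: header.
    response = requests.post(
        f"http://{srv_target}:{port}/.well-known/mmm/",
        headers={"Host": service_domain},
        json={"account": "alice@example.com"},  # hypothetical payload
    )
    print(response.status_code)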

Now people may or may not like this particular proposal. Heck, I might
not even like it after I have used it for a while. But it is certainly
based on the Internet architecture to the extent any of it has been
written down. It is consistent with current practice and with the
requirements of the RFCs I have read. Nobody who has objected to this
approach has ever given me a technical argument as to why it is wrong.

My problem is that while the SRV registry is first come first served,
the .well-known registry is 'specification required'. This creates two
problems:

1) It is quite possible that, under current registration practices,
someone else might apply for mmm and the registration would be
granted. My only recourse might then be a lawsuit.

2) I may not be able to provide the specification, either because the
protocol is experimental or proprietary.

Having the name of the protocol be different in the DNS and HTTP
spaces is utterly unacceptable to me. Equally unacceptable is the
possibility that someone else might register the name of my Web
Service. This is not a constrained name space; the only reason for
doing so deliberately would be spite.

Once it is recognized that both registries serve the same purpose,
namely to identify protocols so as to prevent collisions, it is
obvious that both should have the same registration criteria. If, as
some people have asserted, there are mysteries of the HTTP protocol
that require expert attention, these are not known to me, one of the
original contributors to that protocol and, incidentally, the first
person to write a working Web Service, since the POST method was
utterly broken until I fixed it. But even if such issues did exist,
the remedy should be to fix HTTP rather than throw up obstacles for
people trying to be polite when using it.

At the end of the day, requesting IANA registrations is a matter of
politeness and nothing more. The people the net does need protecting
from either don't know to ask, or know to ask but don't bother.


Like the original Internet architects, I believe in as much
experimentation at the application layer as possible. Otherwise, I
would not have spent the past three years building an infrastructure
designed to make cryptography easy for everyone to use. Contrary to
what my critics in governments may think, I am not oblivious to the
consequences of my work.

If, however, people think that the registry should remain as it is,
then there will have to be further action by the IETF to ensure that
the process meets the requirements of being open.

Specifically, RFC 5226 specifies an appeals process but does not
require that parties requesting a registration be told that they have
a right to appeal, or under what circumstances. In particular, what
happens when the DE does not respond in a timely manner? What happens
if they acknowledge the request but neither accept nor reject it? My
biggest delays have come from the case where the request has been
accepted by the DE but the DE has failed to assign the actual code
point in a timely manner.

PHB


[1] Here I am using the term 'ontology' in the AI sense of a 'shared
vocabulary', not in the philosophical sense of the study of being.