Dave and I appear to be in agreement here.
I have spent quite a bit of time thinking about what the consequences of a
default by a registry of the IANA, ISBN, or EUI (aka MAC address) type would
be. My conclusion: virtually nil.
Let us imagine that the folk who assign EUI numbers decide that they are not
going to allow Iran or North Korea to register a number. This is far from
theoretical; in fact, it is US law today.
The Iranian manufacturer simply announces that they are going to use a
particular prefix and starts making Ethernet cards. If you are another
manufacturer, there is simply no way you are going to use that prefix
yourself, so the unauthorized assignment sticks in practice.
Similar strategies apply to IANA assignments, including assignment of IPv4
address space. It is best if nobody attempts to employ IANA as an Internet
choke point; it is not one.
The issue is unlikely to come up unless the resource being allocated is
finite: port numbers, DNS RR codes, protocol numbers.
My personal view is that we should develop an Internet architecture that
allows an infinite number of new protocols to be deployed without consuming
scarce resources, i.e. port numbers or DNS RR codes. I think that it is
entirely defensible for IANA to be parsimonious with such assignments because
they are essentially the fabric of the Internet.
Assignment of non-finite identifiers should be a free-for-all. Anyone should
be allowed to apply for a text-based label (e.g. an algorithm identifier, SRV
prefix, or ASN.1 OID) at any time with no process whatsoever.
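To make the finite/non-finite distinction concrete, here is a small sketch
(not from the discussion above; the service names are made up) contrasting
the hard 16-bit ceiling on port numbers with SRV-style text labels, which
are drawn from an effectively unbounded namespace and consume no central
counter:

```python
# TCP/UDP port numbers are a 16-bit field: a hard ceiling on assignments.
PORT_SPACE = 2 ** 16  # 65536 possible values, before reserved ranges

def srv_owner_name(service, proto="tcp", domain="example.com"):
    """Build an RFC 2782-style SRV owner name for a hypothetical service
    label. Any DNS-legal label works; no finite registry entry is used."""
    return f"_{service}._{proto}.{domain}"

# Two made-up protocols coexist without depleting any finite resource:
a = srv_owner_name("myproto")
b = srv_owner_name("otherproto")
print(PORT_SPACE)  # the port space is finite
print(a, b)        # text labels are not
```

The point of the sketch is only that a protocol identified by a text label
needs no slot in a scarce numeric table, which is the architectural property
argued for above.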
Until now we have been dealing with application protocols that almost
exclusively concern human-machine interaction: email, Web, FTP, NNTP. Only a
handful of protocols are machine-machine. This is changing with Web
Services/mashups/the Semantic Web. In the future we are going to see a vast
number of machine-machine protocols, only a tiny proportion of which will be
standards based.
The lack of standards is a good thing: neither the IETF, the W3C, nor OASIS,
nor all three combined, is going to have the bandwidth to standardize
everything. And unlike human-machine protocols, the cost of running multiple
machine-machine protocols in parallel is not, as a general rule,
unacceptable. Moreover, premature standardization is a bad thing,
particularly when nobody really knows what the protocols should be doing.
So, in summary, the IAB should be charged with identifying the set of finite
resources that IANA assigns and with proposing an Internet architecture in
which the deployment of new application-layer protocols does not deplete any
of those finite resources.
-----Original Message-----
From: Dave Crocker [mailto:dcrocker@bbiw.net]
Sent: Tuesday, June 12, 2007 11:49 AM
To: John C Klensin
Cc: ietf@ietf.org; iesg@ietf.org
Subject: Re: IANA registration constraints
John C Klensin wrote:
Again, there may be exceptions, but I think denial cases should require
fairly strong (and public) justification. In the general case, I believe the
Internet is better off if even the most terrible of ideas is well-documented
and registered --with appropriate warnings and pointers-- if there is any
appreciable risk that it will be deployed and seen in the wild.
Mostly, I think we (the community) tend to confuse the
coordination role of registration with the approval role of
standardization.
d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf