My experience is also that an overly strict policy
can be harmful. I could cite numerous examples...
On the other hand, Thomas and Pasi are also right that
an overly loose policy can be harmful.
You have to remember that interoperability
can be hurt or improved in many ways. Secret
code allocations, private specifications, etc. are
indeed a problem. But so are incompatible
extensions and multiple ways of doing the same thing.
My experience of the current number system is
that it is actually working fairly well. There are
exceptions, but by and large numbers are
allocated and used as they should be. I know
we do not have the IEPF (Internet Engineering
Police Force) to send when someone uses a number
against our approved RFCs, but at least in my
experience in Internet layer matters such allocations
are relatively rare. I would be interested in actual
data about this, if anyone has some, however.
I would like to see a model that has an appropriate
level of strictness... and I have a model in mind.
I do not like the idea of wholesale redefinition of
who can allocate numbers or what the various
RFC 2434 phrases mean. The different number
spaces are different, and we need to apply
different criteria for allocations in them. For
instance, it's a completely different thing to
create new optional-to-recognize data attributes
than new message types; IP protocol numbers
are a scarce resource whereas some other resources
are not, etc.
And, appropriately, authors and working groups
have been laying out the rules in the IANA Considerations
section for years and years about what allocation
policies are right. I think we need to respect the
wisdom of the WG to decide on a policy issue in
their protocol. We should continue to give the power
to the working groups on this issue.
At the same time, we need to recognize that we've made
mistakes in this space in the past, either in the WGs
or through IESG or AD requirements. In many cases,
policies have been too strict. In some cases they have
not been defined well enough to actually work. In
yet other cases they have been too loose. Or silly,
such as the policy in RFC 2780 about IPv4 protocol
number allocations involving NDAs.
So here's my proposal:
1) Design new protocols in a way that mere field
size is not an issue. I think we've been mostly doing
this since the mid-90s.
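To make the field-size point concrete, here is a back-of-the-envelope sketch (illustrative arithmetic only, not tied to any particular registry): an 8-bit field such as the IPv4 Protocol field has just 256 code points, so every allocation matters, while wider fields leave room for liberal policies.

```python
# Illustrative arithmetic: why field width decides whether
# "mere field size" is an issue for allocation policy.
def registry_size(bits: int) -> int:
    """Number of distinct code points an unsigned field of
    the given width can hold."""
    return 2 ** bits

for bits in (8, 16, 32):
    # An 8-bit space is scarce; a 32-bit space rarely is.
    print(f"{bits}-bit field: {registry_size(bits)} possible values")
```

With a 32-bit space there are over four billion values, which is why newer designs can afford first-come-first-served or expert-review policies where older 8-bit spaces could not.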
2) Make sure WGs think hard about the various
interoperability tradeoffs and other issues when
they write their IANA considerations sections.
3) Involve the WG chairs and ADs in following what
is happening in the real world, and in taking action
if what is deployed out there starts to differ too
much from what either the IANA registry or the
IETF RFCs describe. For instance, we had this
situation in the EAP WG a couple of years ago, and
started a program to make sure all EAP methods
were in the IANA table and described in RFCs,
created a new WG, AD sponsored some specifications,
offered reviewers and IESG support if people took
their specifications to the RFC Editor, etc.
4) Make sure there is ample space for experimentation
and research. Often there isn't. Consider publishing
an update to make this happen. We did this for many
IP layer numbers in RFC 4727, for instance.
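As a concrete illustration of this point: RFC 3692 and RFC 4727 set aside IP protocol numbers 253 and 254 for experimentation and testing. A minimal sketch of how experimental code might guard against squatting on assigned values (the helper name is hypothetical, not part of any standard API):

```python
# Sketch, assuming the RFC 3692 / RFC 4727 reservation of IP
# protocol numbers 253 and 254 for experimentation and testing.
# The helper below is a hypothetical convenience, not part of
# any standard API.
EXPERIMENTAL_IP_PROTOCOLS = frozenset({253, 254})

def require_experimental_protocol(candidate: int) -> int:
    """Return candidate if it is reserved for experiments;
    raise otherwise, so experimental code cannot silently
    squat on an assigned protocol number."""
    if candidate not in EXPERIMENTAL_IP_PROTOCOLS:
        raise ValueError(
            f"IP protocol {candidate} is not an experimental value; "
            "request an IANA allocation instead")
    return candidate
```

The point of such a guard is exactly the one above: experiments get real, reserved code points to use, instead of borrowing numbers that may later collide with approved allocations.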
5) Add sufficient mechanisms for vendor-specific or
private-use values.
6) Periodically review the existing IANA rules for your
protocols and consider revising them for the right
balance. Again, all-strict and free-for-all are probably
the wrong policies; different numbers need different
treatment. Getting the balance right may not always
be easy, but it's worthwhile. Taking another example
from EAP, we went from free-for-all to no-allocations-until-
wg-revises-base-spec to expert-review in the EAP
method space. The expert review model has worked
well for this particular purpose, because it does not
block allocation or make unreasonable requests,
but it does ensure there's documentation about the
method and that the documentation answers the
necessary questions.
7) Keep relatively strict rules on number spaces that
are a scarce resource.
Ietf mailing list