
Re: IANA blog article

2014-01-04 11:15:42
On Sat, Jan 4, 2014 at 4:31 AM, Patrik Fältström <paf@frobbit.se> wrote:

> On 4 jan 2014, at 09:29, Jari Arkko <jari@piuha.net> wrote:
>
>>> One important change is that every future application protocol proposal
>>> should be required to have an effectively unlimited code space for
>>> assignment.
>>
>> Agree.
>
> I do not agree regarding the term "unlimited".
>
> What is needed is that the specification do have a consequence analysis
> regarding expansion of use.


Please read my words as carefully as I wrote them.

There are excellent reasons why variable-length IPv6 addresses would have
been a bad idea. That is why I separated the considerations for the IP layer,
DNS and applications.

At the application layer, the concerns are very different. Variable-length
identifiers are not a problem; we already use them in RFC 822, ASN.1, XML and JSON.

The only protocols above the bare IP layer where we don't use variable-length
identifiers are IPSEC and TLS, but these are merely design mistakes not to be
repeated. Using fixed-length identifiers in those protocols has negligible
performance advantages, yet even a 32-bit identifier space is not big enough
to make assignment a free-for-all. And so a technical choice creates an
ongoing administrative requirement.

The problem in TLS is worse because the algorithms are joined together in
suites, so there is a combinatorial issue. Since each suite specifies a key
exchange algorithm, an encryption cipher and a MAC, we actually have an
average of roughly ten bits (32/3) per component. I can easily see us
exhausting that.
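
To make the combinatorial point concrete, here is a back-of-the-envelope
sketch in Python; the algorithm counts are invented for illustration and are
not taken from the actual registries:

    # Rough arithmetic: registering complete cipher suites versus
    # registering each component algorithm on its own.
    # The counts below are illustrative assumptions, not registry data.
    key_exchange = 12   # assumed number of key exchange algorithms
    encryption = 15     # assumed number of encryption algorithms
    mac = 8             # assumed number of MAC algorithms

    # Per-component registration: one code point per algorithm.
    component_code_points = key_exchange + encryption + mac

    # Per-suite registration (the TLS approach): one code point per
    # combination, so the registry grows multiplicatively.
    suite_code_points = key_exchange * encryption * mac

    print("per-component:", component_code_points)  # 35
    print("per-suite:    ", suite_code_points)      # 1440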


What I would like is some architectural guidance from the IAB to the effect
that all future protocols MUST use an identification mechanism that puts
exhaustion of code points beyond the realm of practical possibility.

Since we are unlikely to be doing IPv7 any time soon, and if we did it would
be a whole new ball game, this requirement would only apply to major
revisions of DNS, IPSEC, TLS-like protocols, and application protocols.


I don't see an argument right now for a major revision of DNS. But if we
were to revisit it in a backwards-incompatible way, we would surely change
the protocol so that the number of possible resource record types was more
than 64K. The practical impact of the requirement is that the increase would
have to be from 64K to 'effectively unlimited' rather than to 4G (32 bits).
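
As a purely generic illustration (not a proposal for DNS), one way to make a
numeric protocol field 'effectively unlimited' rather than capped at 16 or 32
bits is a variable-length integer encoding. A minimal sketch:

    def encode_varint(value: int) -> bytes:
        """Encode a non-negative integer as a variable-length quantity:
        7 bits of payload per byte, high bit set on all but the last byte.
        There is no upper bound on the values that can be represented."""
        if value < 0:
            raise ValueError("only non-negative values are supported")
        out = bytearray()
        while True:
            byte = value & 0x7F
            value >>= 7
            if value:
                out.append(byte | 0x80)
            else:
                out.append(byte)
                return bytes(out)

    def decode_varint(data: bytes) -> int:
        """Inverse of encode_varint; ignores any trailing bytes."""
        result, shift = 0, 0
        for byte in data:
            result |= (byte & 0x7F) << shift
            if not byte & 0x80:
                return result
            shift += 7
        raise ValueError("truncated varint")

    # A fixed 16-bit field tops out at 65535; a varint does not.
    assert decode_varint(encode_varint(65535)) == 65535
    assert decode_varint(encode_varint(10**30)) == 10**30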

For other protocols the consequence would be to use one of the following (a short sketch follows the list):

1) Pseudo identifiers of 128 bits (or more), i.e. UUIDs
2) ASN.1 Object identifiers
3) Simple text labels (e.g. AES-128)
4) URIs
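
A rough sketch of what those four options look like in code; the concrete
values are only examples, though the OID shown is the one commonly published
by NIST for AES-128-CBC:

    import uuid

    # Four styles of 'effectively unlimited' identifier.

    # 1) Pseudo-random 128-bit identifier (UUID): collision, not registry
    #    size, is the only practical constraint.
    algorithm_uuid = uuid.uuid4()

    # 2) ASN.1 object identifier: hierarchical delegation, arbitrary depth.
    aes128_cbc_oid = "2.16.840.1.101.3.4.1.2"

    # 3) Simple text label: human-readable, unbounded namespace.
    cipher_label = "AES-128"

    # 4) URI: delegation piggybacks on ownership of a DNS name.
    content_type_uri = "http://example.com/ns/content-types/thing+json"

    for ident in (algorithm_uuid, aes128_cbc_oid, cipher_label, content_type_uri):
        print(type(ident).__name__, ident)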


IP addresses will always be a special case because they have to operate
under some special constraints. We all use the same code point for
text/plain, but we all have to use a different IP address.


To respond to some points made by John and others later in the thread, the
mere issuance of a code point should not imply endorsement. I don't think
that the IETF should be in the business of endorsing cryptographic
algorithms or content types. There are certain cases where the IETF needs to
be involved in identifying the authoritative definition, though. Is the
definitive definition of AES a document by NIST or someone else? Does Adobe
define application/pdf, or does ISO? (These are not questions I am asking;
they are illustrations of the questions that the IETF/IANA needs to answer.)

I can certainly see the potential for a situation where we start using some
open-source content type and there is a fork where two different groups
claim to be the successor in interest and the one true guardian of the spec.


For crypto, the situation is a little simpler and a lot more complicated
than assumed. The standards are in practice defined by running code. And
that code behaves in very particular ways.

If I add a crypto module to Windows (or most other well-designed crypto
platforms), it will become available to all the applications running on
that platform. Which is all well and good, of course. One consequence of
this is that the way S/MIME uses AES and the way TLS uses AES have to be
compatible. Otherwise large amounts of application code would need to be
rewritten to special-case each algorithm, and not having to do that is the
objective of having a standard.
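
A hypothetical sketch of that platform pattern: the algorithm is registered
with the platform once, and both a TLS-style caller and an S/MIME-style
caller resolve it the same way. The names and interfaces below are invented
for illustration and do not correspond to any real platform API:

    from typing import Callable, Dict

    # Hypothetical platform-wide algorithm registry: register once,
    # every application on the platform sees the same implementation.
    _CIPHERS: Dict[str, Callable[[bytes, bytes], bytes]] = {}

    def register_cipher(name: str, encrypt: Callable[[bytes, bytes], bytes]) -> None:
        _CIPHERS[name] = encrypt

    def get_cipher(name: str) -> Callable[[bytes, bytes], bytes]:
        return _CIPHERS[name]

    # Stand-in for AES, for illustration only; a real module would wrap a
    # vetted implementation. The XOR keystream just keeps the sketch runnable.
    def fake_aes_encrypt(key: bytes, plaintext: bytes) -> bytes:
        return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

    register_cipher("AES-128", fake_aes_encrypt)

    # Two different "applications" -- think TLS stack and S/MIME stack --
    # resolve the algorithm through the same registry, so the way the key
    # and data are handed over has to be identical for both.
    def tls_like_protect(key: bytes, record: bytes) -> bytes:
        return get_cipher("AES-128")(key, record)

    def smime_like_protect(key: bytes, message: bytes) -> bytes:
        return get_cipher("AES-128")(key, message)

    key = bytes(16)
    assert tls_like_protect(key, b"hello") == smime_like_protect(key, b"hello")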

Fortunately the amount of variation possible is very small. There is the
byte ordering for keys and a few other endianness issues that might be
underspecified or confused in the specs, but that is all. If an algorithm
needs more than that, then it is not ready for use and should be discarded
(and new code points issued for the replacement).
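
The byte-ordering point can be made concrete with a tiny example (the key
value is arbitrary): if a spec leaves the serialization of a key ambiguous,
the "same" key turns into two different byte strings, and two otherwise
conforming implementations will not interoperate:

    # The same 128-bit key value serialized two ways. If one document's
    # readers pick big-endian and another's pick little-endian, the two
    # implementations feed different key bytes to the cipher and
    # interoperability fails -- which is why such details must be pinned
    # down once, not re-specified per protocol.
    key_value = 0x000102030405060708090A0B0C0D0E0F

    key_big = key_value.to_bytes(16, "big")
    key_little = key_value.to_bytes(16, "little")

    print(key_big.hex())     # 000102030405060708090a0b0c0d0e0f
    print(key_little.hex())  # 0f0e0d0c0b0a09080706050403020100
    assert key_big != key_little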


What we do today is micromanage the process in ways that are
counterproductive. People write documents describing how to use the
<splunge> algorithm with IPSEC and with S/MIME, but any given platform is
going to have code based on one or the other. What appear to be two
independent specifications are in fact two variants of one specification,
since any deviation between the two documents is unambiguously an ERROR.


-- 
Website: http://hallambaker.com/