Jeff,
I was the person who pushed for the use of ASN.1 in Kerberos
version 5. I had this disease at the time that made me think that ASN.1 was
a good idea. I got better; unfortunately, we have been living with the
results of my braino for quite some time now... poor Ted.
Your choice to use ASN.1 was an excellent one. The problem you
encountered is that the tools you used were not adequate to do the
job.
However, the problem with ASN.1 isn't its waste of space (which actually
isn't that bad for a mechanism for encoding arbitrary objects).
You are mistaken. ASN.1 describes the structure of data to be
exchanged between peers; it is silent on space usage. For example, it
might say that the first field in a message is an integer, but it does
not say how many bytes are used to encode that integer. That is the
job of the encoding rules, such as BER, DER, PER, etc. To illustrate,
though the following is specified in ASN.1

    MyInt ::= INTEGER (123456789 .. 123456792)
    ix MyInt ::= 123456789

the value ix is encoded using DER in 6 bytes, while in PER it is
encoded in 2 bits (yes, bits). The point is that ASN.1 has nothing to
do with how data is encoded or with space usage; it is strictly used
to describe the fields in messages in a way that allows
interoperability regardless of the machine, OS, or programming
language in use.
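
To make the difference concrete, here is a rough Python sketch I put
together (my own hand-rolled illustration, not the output of any real
ASN.1 toolkit; der_encode_integer and per_constrained_offset are
names I invented for this note) showing how two encoding rules treat
the very same abstract value:

    # DER: tag octet, definite length, minimal two's-complement content.
    def der_encode_integer(value):
        content = value.to_bytes((value.bit_length() + 8) // 8, "big",
                                 signed=True)
        return bytes([0x02, len(content)]) + content

    # Unaligned PER: a value constrained to (low..high) is sent as the
    # offset (value - low), in just enough bits to cover the range.
    def per_constrained_offset(value, low, high):
        return value - low, (high - low).bit_length()

    ix = 123456789
    print(der_encode_integer(ix).hex())
    # -> '0204075bcd15': 6 bytes on the wire
    print(per_constrained_offset(ix, 123456789, 123456792))
    # -> (0, 2): the offset 0 carried in 2 bits
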
The problem
is that it is the product of a standards-making process that didn't
(and doesn't) value interoperability. Adherence to the ISO
specifications does not guarantee interoperation. Instead, regional
"workshops" negotiate aspects of implementations to obtain
interoperation.
Wrong. The regional workshops exist because standards such as DER
choose not to impose certain limitations and purposefully leave it up
to various user communities (the workshops) to determine those limits.
For example, DER allows INTEGER values to be hundreds of bytes long,
while a community of users may know that they will never use integer
values more than 8 bytes long, and thus impose such a restriction so
implementors know what to expect. The scientific community, on the
other hand, may decide that it needs to use integer values with far
greater ranges, and impose a different set of limitations for
applications that exchange data in that community.
The benefit of this approach is obvious: by allowing communities of
users to restrict usage to limits they find practical, the standards
can be used by a wide variety of users, without each making ad hoc
extensions.
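
As a sketch of what such a community profile can look like in code
(the 8-byte cap and the function name below are purely hypothetical),
the restriction is just one extra check layered on top of an
otherwise ordinary DER INTEGER decoder:

    MAX_INT_CONTENT = 8   # hypothetical limit agreed by one community

    def decode_profiled_integer(encoded):
        # Plain DER INTEGER decoding (short-form lengths only, for
        # brevity), plus the community bound DER itself does not impose.
        if encoded[0] != 0x02:
            raise ValueError("not an INTEGER")
        length = encoded[1]
        if length > MAX_INT_CONTENT:
            raise ValueError("INTEGER exceeds the community profile")
        return int.from_bytes(encoded[2:2 + length], "big", signed=True)

    print(decode_profiled_integer(bytes.fromhex("0204075bcd15")))
    # -> 123456789
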
What does this mean for ASN.1? It means that the definition of ASN.1 is a
bit abstract (as its name implies). Problems result when two organizations
(say MIT and OSF!) attempt to implement from the specification in ASN.1 but
use different ASN.1 compilers and things then don't work. Arguments then
ensue about whose compiler (or manually written parsing code) is "correct"
in terms of doing the right thing with ASN.1.
This can occur with any two communicating applications, independent of
whether they use ASN.1/DER or not. That is, one or the other (or
both) can be mis-implemented. That is a problem of sloppy
implementations and is no reflection on ASN.1, ASN.1 encoding rules or
ASN.1 compilers. As far as algorithms for transforming data go, DER
is relatively simple, so the problem is again one of poor
implementations.
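
For a sense of how small the DER transformation rules really are, the
complete rule for length octets fits in a few lines (again my own
hand-written sketch, not production code):

    def der_encode_length(n):
        # Short form: lengths under 128 are a single octet.
        if n < 0x80:
            return bytes([n])
        # Long form: 0x80 | octet-count, then the length itself in the
        # minimum number of octets (DER forbids padding).
        body = n.to_bytes((n.bit_length() + 7) // 8, "big")
        return bytes([0x80 | len(body)]) + body

    print(der_encode_length(4).hex())    # -> '04'
    print(der_encode_length(300).hex())  # -> '82012c'
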
This is particularly so when
using DER (for Distinguished Encoding Rules) which is itself an
afterthought added to ASN.1 later in the process.
DER is *not* part of ASN.1. It is a set of rules used in encoding
data described using ASN.1; this is a big difference from being a part
of ASN.1. And it is no more an afterthought than SNMPv2 or Kerberos 5
are afterthoughts. On the contrary, DER is a testament to the
usefulness and flexibility of ASN.1, which allows an assortment of
encoding rules. DER was created because a community of users who used
BER decided to restrict the set of valid BER encodings to work more
smoothly in applications that do authentication. (All DER encodings
are valid BER encodings, but not vice versa). In the end, DER was found
to be so useful that it was adopted as a formal encoding rule of ASN.1
(and is published as such - in a document separate from the ASN.1 doc).
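
A tiny illustration of that subset relationship (hex strings worked
out by hand): BER accepts a long-form length even where the short
form would do, while DER admits exactly one encoding, so two correct
DER implementations always produce identical bytes for a value:

    # The abstract value INTEGER 5, as two equally legal BER encodings:
    ber_encodings = [
        bytes.fromhex("020105"),    # short-form length (also the DER form)
        bytes.fromhex("02810105"),  # long-form length: BER yes, DER no
    ]
    der_encoding = bytes.fromhex("020105")  # the one and only DER form
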
It is required in order
to verify digital signatures (which have to be computed on the "encoded"
form of an object because there is no good way to calculate a signature on
an "abstract" object).
If the Kerberos specification said: "put this byte here and that one
there" none of these arguments and problems would happen.
Even if the Kerberos spec said as you suggest above, I do not see a
good way to calculate a signature on the "abstract" object. For
example, if the data contains an integer, you would have to ensure
that the bytes that form it are in a pre-defined order before
calculating the signature. This means that you have altered the
"abstract" value. Further, any simplistic "put this byte here and
that one there" scheme either results in signatures not being
calculated on the abstract object, or it requires severe restrictions
on how implementations locally represent data, restrictions that are
difficult or impossible to meet.
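
A short sketch of the point (the struct module and SHA-256 are used
purely for illustration; any digest would do): the native layout of
the same abstract integer differs between machines, while its DER
encoding, and hence anything computed over those bytes, does not:

    import hashlib
    import struct

    # The same abstract integer laid out natively on two machines:
    print(struct.pack("<i", 123456789).hex())  # little-endian '15cd5b07'
    print(struct.pack(">i", 123456789).hex())  # big-endian    '075bcd15'

    # Its DER encoding is byte-for-byte identical everywhere, so a
    # digest (and thus a signature) over it verifies on any peer.
    der = bytes.fromhex("0204075bcd15")
    print(hashlib.sha256(der).hexdigest())
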
Can you spell out in greater detail what you propose as an
alternative to ASN.1 and its encoding rules, other than that
standards should say "put this byte here and that one there"?
/pivnic
--
pivnic@norden1.com