Part -- but not all -- of the problem with X.500 is the use of ASN.1.
We've found the use of ASN.1 to be problematic. It solves the problem of
passing structured objects around the net at the expense of great
complexity, difficulty in debugging, and, most importantly, difficulty in
reaching closure on what the precise specifications are. (For example, I
have had to wade through lengthy discussions about why the order has to be
retained in the *presentation* of sets although the *searching* and
*matching* rules demand independence of order. Similar confusion applies
to the encoding rules and the "syntax" rules.)
I'm not one to fall down and worship a standard just because it is a standard,
most especially if it doesn't work. But I am very reluctant to throw away
international compatibility and extensibility, especially in an area as dynamic
as digital signature certification, just for the sake of expediency or some
minor efficiency in implementation.
Although I am sympathetic to the problems of trying to understand ASN.1, and
especially the BER and DER encoding rules, it should be obvious that although
searching and matching operations within the directory should be independent of
order, some consistent order and a fixed set of encoding rules have to be
applied if a digital signature is going to give predictable results. I believe
these are basically one-time, table-stakes kinds of problems for an
implementor. There are now ASN.1 compilers available which simplify the task.
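
As a concrete illustration of why a fixed encoding matters (a minimal sketch in
Python, purely for illustration, not anything lifted from a standard): DER
resolves the SET OF case by requiring the already-encoded elements to be sorted
as octet strings, so two logically equal sets always produce the same bytes to
be signed.

def der_set_of(encoded_elements):
    """Wrap already-DER-encoded elements in a SET OF (tag 0x31), sorted as
    DER requires.  Short-form length only, to keep the sketch small."""
    body = b"".join(sorted(encoded_elements))
    assert len(body) < 128, "long-form lengths omitted in this sketch"
    return bytes([0x31, len(body)]) + body

# Two logically identical sets, presented in different orders...
a = [b"\x02\x01\x05", b"\x02\x01\x03"]   # INTEGER 5, INTEGER 3
b = [b"\x02\x01\x03", b"\x02\x01\x05"]   # INTEGER 3, INTEGER 5

# ...encode to exactly the same octets, so a signature over them is predictable.
assert der_set_of(a) == der_set_of(b)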
The primary reason for using ASN.1, to my way of thinking, is to have the
flexibility of adding strongly typed attributes that are uniquely identifiable
as to their syntax (and hopefully semantics, although that is an area where the
existing standards often fall short), even if the attribute is defined by and
known to an organization that is well below the root in the OSI naming
hierarchy. The problem of publishing the definition of those attributes is
addressed (partially) in the 1993 X.500, and hopefully this will eventually be
a considerable aid to interoperability.
There obviously has to be a means of sending a structure that survives the
perils of mail transport, but it's not really all that hard to do. The
main issue is the treatment of white space, and for text objects it's
arguable that one really doesn't *want* to distinguish "foo<tab>bar" from
"foo<sp><sp>bar". Thus, for signature purposes, these should be treated as
the same. Fairly simple rules suffice for mapping all whitespace into a
common format.
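
One possible rule, sketched in Python purely for illustration (the exact rule
any given format adopts is a separate question):

import re

def canonical_text(s):
    """Collapse every run of whitespace to a single space and trim the ends,
    so "foo\tbar" and "foo  bar" yield identical octets for signing."""
    return re.sub(r"\s+", " ", s).strip()

assert canonical_text("foo\tbar") == canonical_text("foo  bar") == "foo bar"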
I would argue that the encapsulation of a certificate into base64 or some other
representation so that it survives mail transport is completely independent of
the issue of ASN.1 encoding.
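
To make the point concrete (a sketch; the octets are just a stand-in for any
certificate at all), the transport armoring neither knows nor cares whether the
bytes underneath are DER or plain text:

import base64

cert_octets = b"\x30\x82\x01\x0a" + b"\x00" * 20   # stand-in for a DER blob
armored = base64.encodebytes(cert_octets)          # 7-bit-safe, line-wrapped
assert base64.decodebytes(armored) == cert_octets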
With regard to the treatment of text objects, I agree that it is relatively
simple to define encoding rules for mapping whitespace into a common format,
although opinions may differ as to how common that format is across different
applications. If all that a certificate would ever have to contain is a simple
name, a key, and the results of a digital signature, the problem would be
simple. But to this day we don't have a consensus as to how the user's
name/identity should be contained in the certificate, or even what all of the
attribute types should be. But I know that I don't necessarily want to be
constrained to pure text objects. In particular, I don't want to have to parse
a string of name/identification characters and figure out whether this is an
organizational name, a residential user's address, or an e-mail address. I want
a strongly typed attribute to tell me that, quite specifically, even if each of
those attributes uses exactly the same underlying text encoding rules.
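
For example (illustrative only, although the object identifiers are the real
X.520 and PKCS #9 ones), the type tag, not the shape of the string, tells the
consumer what it is holding, even though every value is encoded as plain text:

attributes = [
    ("2.5.4.10",             "GTE Laboratories"),     # organizationName
    ("1.2.840.113549.1.9.1", "Jueneman@gte.com"),     # emailAddress (PKCS #9)
    ("2.5.4.3",              "Robert R. Jueneman"),   # commonName
]

for oid, value in attributes:
    # Dispatch on the OID; never try to guess the meaning from the value.
    print(oid, "->", value)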
Using this approach, it's easy to treat certificates as text objects as
opposed to binary structures. Binary information, such as keys and
signatures, can be encoded into a textual form easily and
straightforwardly. The computational cost of encoding binary objects into
text is no worse than the computational cost of packing and unpacking
ASN.1.
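
For example (the field names are invented purely for illustration; this is not
our actual message format):

import base64

public_key = b"\x01\x02\x03\x04"      # stand-in for real key octets
signature  = b"\x0a\x0b\x0c\x0d"      # stand-in for real signature octets

certificate = "\n".join([
    "subject: Jane Doe",
    "key: "       + base64.b64encode(public_key).decode("ascii"),
    "signature: " + base64.b64encode(signature).decode("ascii"),
])
print(certificate)                     # readable, mail-safe text throughout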
We used this approach in designing our messages for CyberCash, and it
drastically reduced the complexity of programming and debugging message
handling. It also led to much more rapid convergence on a working
system.
Steve
As I said, I'm sympathetic to your development problems, but I believe those
are basically one-time growing pains. I could be convinced more easily if
someone would provide detailed information about the number of bytes of code
necessary to interpret the ASN.1 vs. some other approach, and the amount of
processing time required to decode a certificate compared to the digital
signature verification time. If it is more than 10% I would be quite surprised,
and if it is less than 10% I don't think the processing overhead would be
worth worrying about, given the other advantages. In particular, considering
the more complex parsing that might otherwise be required, together with the
extensibility issue, this may be a question of pay me now or pay me later.
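
The measurement I have in mind could be as simple as the sketch below (it
assumes the third-party Python "cryptography" package, and a hand-rolled TLV
walk stands in for a full ASN.1 decoder; a real test would average many runs):

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def tlv_walk(der):
    """Minimal stand-in for an ASN.1 decoder: step over tag-length-value
    triples (definite lengths only, high tag numbers ignored)."""
    i, fields = 0, 0
    while i < len(der):
        i += 1                          # tag octet
        length = der[i]
        i += 1
        if length & 0x80:               # long form: next n octets hold the length
            n = length & 0x7F
            length = int.from_bytes(der[i:i + n], "big")
            i += n
        i += length                     # skip the value
        fields += 1
    return fields

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"to-be-signed certificate body"
sig = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# A few hundred bytes of made-up TLV, standing in for a certificate.
blob = b"".join(bytes([0x04, 32]) + bytes(32) for _ in range(20))

t0 = time.perf_counter()
tlv_walk(blob)
t1 = time.perf_counter()
key.public_key().verify(sig, message, padding.PKCS1v15(), hashes.SHA256())
t2 = time.perf_counter()

print("decode time as a fraction of verify time: %.1f%%"
      % (100 * (t1 - t0) / (t2 - t1)))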
Bob
--------------------------------
"Robert R. Jueneman" <Jueneman(_at_)gte(_dot_)com>
Staff Scientist, Wireless and Secure Systems Laboratory
GTE Laboratories, 40 Sylvan Road, Waltham, MA 02254
Waltham office: Voice: 1-617-466-2820, FAX: 1-617-466-2603
Telecommuting: Voice: 1-508-264-0485, FAX: 1-508-264-4165