pem-dev

Re: RIPEM details

1995-01-13 12:01:00
Derek, this is really interesting, and potentially very promising. A couple of
questions:

> Yes, PGP's certificates aren't compatible with X.509.  This just means
> that signatures on certificates won't work across systems.

Could you summarize the differences in the certificate formats, and more
importantly, indicate why PGP elected not to use X.509? More important yet,
could v3 of X.509 eliminate whatever problems existed? I am not concerned at
this point about differences in the message digest algorithm, the signature
algorithm, or the encryption algorithm(s).

> PGP does canonicalize the text before processing.  In particular, it
> does CR/LF canonicalization before making a signature.  I am fairly
> confident that without much work a PGP signature and a PEM signature
> could be cryptographically equivalent (if they aren't already -- I've
> never checked).

That's good news. It means that at a minimum a trusted gateway could be used to
translate between the two different formats, at least syntactically. Since the
semantics of the trust models are different, that's a horse of a different
color.
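The CR/LF canonicalization described above is the key to making signatures portable: both systems must hash exactly the same byte sequence regardless of the local line-ending convention. A minimal sketch, purely illustrative (neither PGP's nor PEM's actual code), might look like this:

```python
# Hypothetical sketch of pre-signature text canonicalization: every
# line ending (LF, CR, or CRLF) is normalized to CRLF so that the same
# text produces the same digest on any platform.
def canonicalize(text: str) -> bytes:
    # splitlines() accepts any of the three line-ending conventions...
    lines = text.splitlines()
    # ...and we rejoin with the canonical CRLF terminator.
    return "\r\n".join(lines).encode("ascii") + b"\r\n"
```

With this in place, a Unix-originated message (`"a\nb"`) and a DOS-originated one (`"a\r\nb"`) hash identically, which is what would let a gateway translate signatures syntactically.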

> As for the certification mechanisms, think of PGP as a user-specified
> weighted transitivity principle.  Trust is only transitive through a
> user-specified weight, and this weight can have a select set of
> values.  PGP then adds up all the weights of all the signators on a
> certificate to see if it reaches some user-specified threshold.
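The weight-summing scheme described above can be sketched in a few lines. This is an illustration of the idea, not PGP's actual implementation; the trust levels, weights, and threshold are assumed values:

```python
# Illustrative sketch of weighted-transitivity trust: each signer
# carries a user-assigned weight from a small set of values, and a
# certificate is accepted when its signers' weights reach a threshold.
TRUST_WEIGHTS = {"untrusted": 0, "marginal": 1, "full": 2}  # assumed values

def key_is_valid(signer_trust_levels, threshold=2):
    """Sum the weights of all signers on a certificate and compare
    the total against the user-specified threshold."""
    total = sum(TRUST_WEIGHTS[level] for level in signer_trust_levels)
    return total >= threshold
```

Under these assumed weights, one fully trusted signer suffices, as do two marginally trusted ones, while a single marginal signer does not.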

X.500 and X.509 allow different certification models, including arbitrary mesh
structures, although so far as I am aware no one has yet put together a
comprehensive overview of the different approaches.

PEM focuses on validating a user's _identity_. That's fine, and useful, but it
doesn't address the use of trustworthiness in any sense. If a properly
identified user lies or reneges on a promise, your only recourse is a legal
one. (Or extra-legal -- I guess you could break his kneecaps, banish him from
the kingdom (put him in your kill file), or make a pariah of him in
cyberspace.)

PGP, on the other hand, tries to approach the issue of trustworthiness, but it
does so without respect to any enunciated criteria or policy, so far as I know.
What does it mean to say that someone is trusted? Will he pay you back the five
bucks he borrowed? Will he keep a secret? Will he always tell the truth, even
if it is embarrassing or expensive to him? How much money would have to be put
on the table to overcome those scruples? In this area, I'd much rather have
multiple people telling me that someone is a good guy, although what I really
want to know is whether someone else thinks that he is a bad guy. I don't care
whether the five village idiots endorse each other. Do the signators of a
certificate themselves have to be previously endorsed as good guys? (Suppose
that the members of the mafia all endorse each other, and so do all of the
agents of the FBI. Does that mean that the mafia should trust an FBI agent or
vice versa?) Maybe PGP ought to consider the concept of negative weights (black
balls).
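The "black ball" idea suggested above fits naturally into a weight-summing scheme: a distrusted endorser simply carries a negative weight, so one black ball can veto several positive endorsements. This is a hypothetical extension, not anything PGP actually implements; all names and weight values here are assumptions:

```python
# Hypothetical extension of weighted trust with negative weights: an
# endorser the user distrusts subtracts from the total, so a single
# "black ball" outweighs multiple positive endorsements.
WEIGHTS = {"distrusted": -3, "unknown": 0, "marginal": 1, "full": 2}

def accept_key(endorsements, threshold=2):
    # One "distrusted" endorsement (-3) cancels a "full" (+2) and then some.
    return sum(WEIGHTS[e] for e in endorsements) >= threshold
```

Note how the mafia/FBI example plays out: mutual endorsement within a clique raises the sum only if the evaluating user has not black-balled the clique's members.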

Finally, neither PEM nor PGP to date addresses the issue of capability or
authority. Is this person a corporate executive, and permitted to commit the
company? Is he (or she) a competent automobile driver, heart surgeon, pilot, or
nuclear reactor engineer? These kinds of things can be certified by a
certification authority, and are one of the primary reasons why I am so intent
on getting v3 of X.509 implemented, so that we can begin to experiment with
these concepts.

> If PEM (I include RIPEM, TIS/PEM, et al. in this) and PGP can agree
> on an encryption mechanism (it looks like 3DES might be a possible
> choice) then it is theoretically possible to make PEM and PGP message
> compatible (although not necessarily certificate compatible).

I'm not particularly concerned about the different encryption mechanisms, any
more than I am about setting minimum and maximum key lengths in the spec. I
assume that as different schemes begin to mature, these systems will adapt
accordingly. Like I said before, whether you use MD2, MD5, or SHA for hashing,
or whether you use RSA or DSS for signing, or RSA or Diffie-Hellman, etc., for
key distribution, or DES or 3DES or SKIPJACK for encryption really doesn't
matter very much -- those subroutines aren't that big, and aren't that hard to
write. Ultimately, these algorithms should be implemented in tamper-proof
hardware (smart disks, smart cards, or smart tokens -- pick your form factor
and interface technology), and then it _really_ won't matter.
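The point about algorithm independence can be made concrete with a small dispatch sketch: the message-handling layer calls an abstract digest operation and never cares whether MD5 or SHA sits underneath. This uses only the Python standard library; the registry structure and names are my own illustration, not anything from the PEM or PGP specifications:

```python
# Illustrative algorithm-agility sketch: message processing dispatches
# through a registry, so swapping hash algorithms touches one table
# entry rather than the message-handling code.
import hashlib

HASH_REGISTRY = {
    "MD5": hashlib.md5,
    "SHA-1": hashlib.sha1,
}

def digest(message: bytes, algorithm: str) -> bytes:
    """Dispatch to whichever hash the negotiated profile selects."""
    return HASH_REGISTRY[algorithm](message).digest()
```

The same table-driven pattern extends to signature and encryption primitives, which is what makes those "subroutines" interchangeable in the way described above.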

I think this is probably a good long-term goal!

I think that the most important point is that we need to properly layer the
specifications. RFC1421 laid out the basic message boundaries and
canonicalization requirements, and the security-multipart MIME work is
extending that. Ultimately, we may have a completely general object
encapsulation and labelling scheme that would enforce access control through
cryptography and provide integrity controls through digital signatures plus
what I call the provenance of the object -- where did it come from, how was it
created, what is it certified to do, etc. All of these functions are
essentially at the same layer.

The next layer describes the encryption, message digest, and signature
algorithms and conventions, but those should be completely independent of the
object encapsulation schemes and may vary considerably due to export
requirements, classified vs. unclassified, etc. The basic certificate format
should probably be included here as well, although if we could come up with a
compatible approach it could go in a common document or appendix.

The third layer should describe the certification and validation syntax (the
easy part) and the _semantics_ (the hard part -- what does trust mean, what are
the policies, etc.) An extension to the third layer should address the even
harder issues of nonrepudiation of origin, nonrepudiation of delivery, etc.

The fourth layer should describe the key/certificate/CRL distribution
architecture and protocol(s), whether push/pull, etc.
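One way to picture the four-layer split proposed above is as a set of independent interfaces, each of which can vary without touching the others. This is purely an illustration of the layering argument; all class and method names are assumptions, not drawn from any actual specification:

```python
# Illustrative data model for the proposed four-layer framework; each
# abstract class corresponds to one layer of the specification stack.
from abc import ABC, abstractmethod

class Encapsulation(ABC):       # Layer 1: boundaries, canonicalization, labels
    @abstractmethod
    def wrap(self, payload: bytes) -> bytes: ...

class AlgorithmSuite(ABC):      # Layer 2: digest/signature/encryption choices
    @abstractmethod
    def digest(self, data: bytes) -> bytes: ...

class TrustPolicy(ABC):         # Layer 3: certification syntax and semantics
    @abstractmethod
    def is_valid(self, certificate: object) -> bool: ...

class Distribution(ABC):        # Layer 4: key/certificate/CRL transport
    @abstractmethod
    def fetch_certificate(self, name: str) -> object: ...
```

The design point is that a PEM-style and a PGP-style system could share layers 1 and 2 verbatim while differing only in layers 3 and 4.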

Does that seem like a reasonable framework for moving ahead on an approach to
achieve some greater commonality and interoperability?


Bob


--------------------------------
Robert R. Jueneman
GTE Laboratories
40 Sylvan Road
Waltham, MA 02254
FAX: 1-617-466-2603 
Voice: 1-617-466-2820

