Jeff,
Thanks for the response.
FYI, in case you missed it, Steve Kent and I at one time discussed and
actually agreed (!) on the potential utility of the following type of mechanism
to control root keys and the dynamics of trust associated with different
PCAs, CAs, and users:
Some mechanism is necessary to authenticate the root key, and probably
the code itself. The best such mechanism is probably to have the user's
own public key be the root key of all root keys, and to retrieve the user's
public key and perhaps some type of bootstrap code from a physically protected
device such as a write-protected floppy that is normally stored in a secure
location, at least until tamper-proof smart cards or some other technology
becomes more widely available.
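A minimal sketch of that bootstrap check, assuming the protected floppy holds a fingerprint of the user's own public key and a hash of the bootstrap code (the function names and storage format here are invented for illustration, not taken from any real PEM implementation):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Hash-based fingerprint of a key or code image."""
    return hashlib.sha256(data).hexdigest()

def verify_bootstrap(floppy_key_fp: str, floppy_code_hash: str,
                     offered_key: bytes, code_image: bytes) -> bool:
    """Accept the root key and code only if both match the copies
    recorded on the write-protected, physically secured media."""
    if fingerprint(offered_key) != floppy_key_fp:
        return False  # offered root key is not the user's own trusted key
    if fingerprint(code_image) != floppy_code_hash:
        return False  # bootstrap code has been altered since it was recorded
    return True
```

The point is only that the user's own key, retrieved from media he physically controls, anchors everything else; any mismatch halts the bootstrap before other certificates are consulted.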
Obviously the user trusts himself, so he can and should be able to sign
anything he wishes, including the certificate of another user that is
communicated via a PGP-like direct trust model. (This makes the
explicit assumption that the user remains directly aware of that other
user's employment status, any compromises, etc. This model would
apply to family members and perhaps to members of one's immediate
department, where the user is in at least weekly if not daily contact.)
(The "user" in this case might also be the MIS administrator in a more
tightly controlled corporate environment, without doing any particular
violence to the concept.)
Likewise, there may be specific applications that require that one and only
one CA be trusted with respect to a certain privilege, regardless of whether
it is part of a larger certificate hierarchy. For example, if I were to use
a digital signature mechanism to open the doors to the plant or check books
out of the corporate library, I might reasonably insist that only those users
who are certified by the CA of my company be allowed this privilege.
Encrypted communication of company proprietary information would be
another such example.
Finally, depending on the particular application I have in mind, I might
accept only those users who are certified under one PCA for certain
purposes, e.g., financial transactions, whereas at other times, e.g., when
I am reading the Internet news feed, I might accept almost anyone.
On the other hand, I may decide that one particular individual is a
pathological liar and not to be trusted, regardless of his certification
status. I might also decide that a particular CA is untrustworthy, perhaps
because they are my competitors. And one or more entire PCA hierarchies
may not be acceptable for certain purposes.
I would therefore like to see a wildcard type of accept/reject mechanism
applied to PCAs, CAs, and individual users, where the default action
or warning level could be specified along the lines you have discussed.
You have added even more granularity than I had thought of, but I think
you are on the right track.
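To make the idea concrete, here is one possible sketch of such a wildcard mechanism, assuming an ordered rule table matched per purpose with a configurable default action (the rule names, patterns, and purposes are all invented examples, not a proposal for any particular syntax):

```python
from fnmatch import fnmatchcase

# Ordered rules: first match wins. Earlier rules (e.g., rejecting a
# known untrustworthy individual) override later, broader ones.
RULES = [
    # (purpose,      pca_pattern, ca_pattern,     user_pattern,      action)
    ("any",          "*",         "*",            "user=known-liar", "reject"),
    ("door-access",  "*",         "CA=MyCompany", "*",               "accept"),
    ("door-access",  "*",         "*",            "*",               "reject"),
    ("finance",      "PCA=High",  "*",            "*",               "accept"),
    ("news",         "*",         "*",            "*",               "accept"),
]

DEFAULT_ACTION = "warn"  # default warning level when no rule matches

def evaluate(purpose: str, pca: str, ca: str, user: str) -> str:
    """Return the action for a (PCA, CA, user) chain in a given context."""
    for rule_purpose, p_pat, c_pat, u_pat, action in RULES:
        if rule_purpose not in (purpose, "any"):
            continue
        if (fnmatchcase(pca, p_pat) and fnmatchcase(ca, c_pat)
                and fnmatchcase(user, u_pat)):
            return action
    return DEFAULT_ACTION
```

Under these example rules, only users certified by my company's CA may open the doors; financial transactions require a particular PCA; news is wide open except for the pathological liar, who is rejected for every purpose; and anything not covered falls through to a warning rather than a silent accept or reject.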
Some people may be thinking that this level of control is not needed for
what is basically an e-mail type of system, and therefore is outside the scope
of PEM. I would argue the reverse, that PEM is quite capable of being
implemented within the scope of an automatic e-mail responder system,
and that a substantial degree of hands-free automation is possible, with
human intervention only on an exceptional basis. Even if the system is
only used under human control, I agree with you that this type of
fine-tuning is desirable just to ensure that the user is not flooded with
error messages that aren't particularly meaningful for his intended use.
Any comments as to the difficulty of implementing this type of
wildcard level of control over users, CAs, and PCAs within your
software?
Bob