Sorry about the delay in posting this response to a prior discussion.
I think these issues are still relevant.
DATE: Fri, 24 Sep 93 11:51:55 +0100
FROM: Stephen D Crocker <crocker@tis.com>
Peter, et al.,
Well, perhaps it's time to deal with the encrypting vs signing problem
more directly. I've come to believe that it's probably better to have
separate keys and certificates for these two functions. The happy
accident that an RSA key-pair can be used for both functions now feels
like a distraction instead of an important feature.
As a worst-case scenario, what if the "Government" one day ordered the
escrow of all keys capable of encryption? Reading my messages is one
thing, but forging them is another, and no one has much confidence in
the government escrow scheme. Once the key comes out of the smart card
and is subject to handling, all bets are off. Hence, an escrowed key
would be questionable in any non-repudiation system, much less as the
root of a major hierarchy.
I'd like to propose that we shift our abstraction towards separating
these two functions. As a first cut at defining the concept, let me
describe this as meaning that an ordinary user who currently has a
single certificate would instead need two: one conveying the public
key corresponding to his signing key, and one conveying the public
key to use when encrypting mail for him.
Continuing the discussion of the concept, this can be implemented by
extending the certificate hierarchy in a simple way. The currently
defined certificate hierarchy applies to signature keys. We then
postulate that each user issues himself an encryption certificate,
which he signs with his signature key. Whether the two keys are the
same or not is the user's private business.
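As a concrete (if anachronistic) sketch of that construction, here is
how it might look in modern Python with the third-party "cryptography"
package; the key sizes, subject name, and validity period are
illustrative assumptions, not part of any PEM specification:

    # The signing key is the one certified by the existing hierarchy;
    # the encryption key is generated privately and certified by the
    # user himself, with the certificate signed by the signing key.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    encryption_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Ordinary User")])

    # Self-issued encryption certificate: subject and issuer are both
    # the user, but it is signed with the already-certified signing key.
    encryption_cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(subject)
        .public_key(encryption_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(signing_key, hashes.SHA256())
    )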
How far is this from the way things are set up now? First, we should
separate the two forms of certificates somehow. Perhaps it's as
simple as choosing algorithm identifiers for RSA-signature and
RSA-encryption. If this is agreeable, then we can further agree that
the current certificates can be viewed as abbreviations for a pair of
certificates, one of each kind. It's relatively simple to adjust the
existing PEM implementations (and other certificate-handling systems)
to conform with this view.
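To illustrate the "abbreviation" view, here is a toy Python sketch of
expanding one existing certificate record into the signature/encryption
pair; the identifier strings are placeholders, not registered
algorithm OIDs:

    # The two derived certificates differ only in their algorithm
    # identifier; everything else is copied from the legacy record.
    def expand(cert):
        sig_cert = dict(cert, algorithm="rsa-signature")
        enc_cert = dict(cert, algorithm="rsa-encryption")
        return sig_cert, enc_cert

    legacy_cert = {"subject": "CN=Ordinary User",
                   "public_key": "<RSA public key>",
                   "algorithm": "rsa"}
    signature_cert, encryption_cert = expand(legacy_cert)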
Believe it or not, a much sounder approach would be to use NIST's DSS for
signatures and a version of certified Diffie-Hellman for privacy. While
DSS' development process left something to be desired, it does appear to
support a functional separation between non-repudiation and privacy (if
one ignores reports that subliminal information can be transferred after
all). Also, if PKP handles DSS licensing, they'll get paid either way,
so it should be a matter of indifference to them.
With respect to the secretary-vs-boss issue of access to decryption
versus access to signing, this can be handled as a local matter. If
access to the private key is distinguished by purpose, then
separate access controls could be implemented. For example, if the
user is required to give a password before gaining access to the
decryption or signing functions, perhaps there could be two passwords,
one for each function.
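A minimal sketch of such a local policy, in Python; the helper names
and stored passphrases are hypothetical, not taken from any PEM
implementation:

    # Each private-key function is unlocked by its own passphrase, so
    # the decryption passphrase can be shared without exposing the
    # signing one.
    import getpass

    PASSPHRASES = {"decrypt": "office-secret", "sign": "personal-secret"}  # would be stored hashed

    def unlock(function):
        supplied = getpass.getpass("Passphrase for %s: " % function)
        return supplied == PASSPHRASES[function]

    if unlock("decrypt"):
        print("decryption key released; signing key stays locked")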
I agree with later commenters (on this thread) that using local host
security to control access to each private key is unworkable, pushing us
back to older trusted computing paradigms, when we would be much better
off handling everything through the certificate management architecture.
Under a DSS + D-H system, when the boss goes on vacation, he gives his
secretary the private key to his D-H certificate, but retains control of
the private key to his DSS certificate. If it's compromised he revokes
and gets a new one, without having it turn into a near-death experience.
For those who haven't given it extensive thought, all that is required
for certified D-H is to place { p, a, DH1 } in the subject-public-key-
info field of an otherwise normal certificate, bound to the recipient's
identity. The sender then selects his own secret r, generates DH2,
generates the session key (Kdh), encrypts the message, and pastes DH2
on the front of it. If the values are long enough it's highly secure.
(Since DH1 is certified, an active interloper cannot spoof both sides,
as can happen with free-form D-H.)
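For concreteness, a bare-bones sketch of that exchange in Python, with
toy-sized numbers (a real system needs a large prime p and, as noted,
an authenticated certificate carrying { p, a, DH1 }); the hash is just
a stand-in key-derivation step I've assumed:

    import hashlib
    from secrets import randbelow

    p, a = 2087, 5            # toy modulus and generator; far too small for real use

    # Recipient: long-term secret x; { p, a, DH1 } goes into his certificate.
    x = randbelow(p - 2) + 1
    DH1 = pow(a, x, p)

    # Sender: fresh secret r per message; DH2 is pasted on the front of the mail.
    r = randbelow(p - 2) + 1
    DH2 = pow(a, r, p)
    Kdh_sender = hashlib.sha256(str(pow(DH1, r, p)).encode()).digest()

    # Recipient derives the same session key from DH2 and his static secret;
    # Kdh itself is never transmitted.
    Kdh_recipient = hashlib.sha256(str(pow(DH2, x, p)).encode()).digest()
    assert Kdh_sender == Kdh_recipient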
Work is required to develop a D-H certificate standard, plus some new
algorithm IDs, but ideology aside, this does solve the _communications
engineering_ problem of separating these two critical functions. (We
will no doubt bat this around for X9.30 Part 4 "Key Management" at our
meeting next week.)
Labeling one RSA certificate "privacy only" and another "signing only"
relies too much on everyone's goodwill, and cannot be enforced.
The requirements for privacy and non-repudiation are simply different.
Signatures must be verifiable and "provable" for years, while privacy
does not have the same flavor. Multiple people (or processes) may
be involved in handling private mail, and corporate security will want
internal escrow. The rigorous procedures needed for an auditable non-
repudiation system will make it nearly impossible to handle privacy in
the more flexible manner it requires, a tension that can only increase.
It feels strange to suggest this, since I have publicly criticized DSS
for the compatibility problems it will cause in a world dominated by
RSA, and I'm sure my friends at RSA will be ambivalent (at best) about
it as well. Still, when you think seriously about trying to "control"
these processes (as _you will_ when auditors start asking you a lot of
questions), the need for separation becomes evident, and the "truth =
beauty" symmetry of RSA does indeed start to look like a problem.
Frank Sudia
Bankers Trust Company
New York, NY