On 01/03/2013 07:01 PM, Andrey Jivsov wrote:
> Let's say I have a server that manages a domain of users, each with
> their own key, one at a time. Users can update their keys. They
> cannot remove keys (other than by updating them). The server logs
> protocol actions, and it uses key fingerprints to log changes to
> keys. The server decides to log the whole key on a key-material
> change event, which it identifies by a change in the key
> fingerprint. Seems like a reasonable and secure system at first
> sight.
This sounds like a system that explicitly ignores the warning in RFC 4880:
   Note that there is a much smaller, but still non-zero, probability
   that two different keys have the same fingerprint.
The purpose of a fingerprint is to let one human securely verify the
key of another peer. This means fingerprints must be short enough for
humans to work with, and they must resist preimage attacks (no one
should be able to come up with a new key that matches an existing
fingerprint).
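For concreteness, RFC 4880 (section 12.2) defines a v4 fingerprint as
the SHA-1 of an octet 0x99, a two-octet big-endian length, and the
public-key packet body. A minimal sketch in Python (the packet body
below is a toy placeholder, not a real key):

```python
import hashlib

def v4_fingerprint(pubkey_packet_body: bytes) -> str:
    """v4 fingerprint per RFC 4880 section 12.2: SHA-1 over 0x99,
    a two-octet big-endian length, and the public-key packet body."""
    prefix = b"\x99" + len(pubkey_packet_body).to_bytes(2, "big")
    return hashlib.sha1(prefix + pubkey_packet_body).hexdigest().upper()

# Toy packet body for illustration only -- not a real OpenPGP key.
body = bytes(range(64))
print(v4_fingerprint(body))  # 40 hex digits = 160 bits
```

The 160-bit output is what the discussion below is about: long enough
to resist preimage search, but not sized for collision resistance.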
If your argument is "existing tools misuse fingerprints (and even
keyids) and treat them as unique identifiers when they should not",
then i'd have to agree with you. We need to fix those tools. If the
argument is "fingerprints need to be resistant to collision attacks,
not just preimage attacks, because we want to be able to use them as
unique identifiers in cases like this, which would otherwise allow
repudiation of previously-signed messages where the earlier key was
not properly stored", then you're effectively doubling the length of
the required fingerprint for the sake of a problem better solved
another way. i think that would defeat (or at least severely damage)
the ability of human beings to actually cognitively process the
fingerprint.
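The "doubling" here is just the birthday bound: a generic preimage
attack on an n-bit hash costs about 2^n trials, while a generic
collision attack costs only about 2^(n/2). A back-of-envelope sketch:

```python
# Generic attack costs for an ideal n-bit hash, expressed as the
# exponent of the work factor (number of hash evaluations ~ 2**bits).
def preimage_bits(n: int) -> int:
    return n          # brute-force preimage search: ~2**n trials

def collision_bits(n: int) -> int:
    return n // 2     # birthday bound: ~2**(n/2) trials

n = 160
print(preimage_bits(n))    # 160-bit work factor against preimage
print(collision_bits(n))   # only an 80-bit work factor against collision
# Matching 160-bit security against collisions needs a 320-bit hash:
print(collision_bits(320))  # 160
```

So demanding collision resistance at the same security level roughly
doubles the fingerprint a human would have to compare.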
It's arguably too difficult already for humans to accurately compare or
transcribe 160 bits of high-entropy data. Asking them to compare or
transcribe larger fingerprints seems likely to result in operator error.
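To make the burden concrete, here is what 160 bits look like once
hex-encoded and grouped the way tools such as GnuPG display them (the
fingerprint value below is made up for illustration):

```python
def group_fingerprint(hex_fpr: str) -> str:
    """Split a hex fingerprint into groups of four characters,
    the common display convention, to ease human comparison."""
    return " ".join(hex_fpr[i:i + 4] for i in range(0, len(hex_fpr), 4))

fpr = "0123456789ABCDEF0123456789ABCDEF01234567"  # made-up example
print(group_fingerprint(fpr))
# 0123 4567 89AB CDEF 0123 4567 89AB CDEF 0123 4567
```

That is ten four-character groups; a collision-resistant fingerprint
of twice the length would be twenty.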
openpgp mailing list