
Re: [openpgp] Fingerprints and their collisions resistance

2013-01-03 23:36:07

On 01/03/2013 08:27 PM, Daniel Kahn Gillmor wrote:
On 01/03/2013 07:01 PM, Andrey Jivsov wrote:

Let's say I have a server that manages a domain of users, each with their
own key, one at a time. Users can update their keys. They cannot remove
keys (other than by updating them). The server logs protocol actions and
uses key fingerprints to log changes to keys. The server decides to log
the whole key on a key-material change event, which it identifies by
a change in the key fingerprint. Seems like a reasonable and secure
system at first sight.

This sounds like a system that explicitly ignores the warning in RFC 4880:

    Note that there is a much
    smaller, but still non-zero, probability that two different keys have
    the same fingerprint.

Sure, shorter fingerprints are good for humans...

On the other hand, my reading of the above quote is that it is a general warning about the 1/2^80 probability of a birthday collision among 160-bit fingerprints.

A SHA-1 collision attack at 2^51 work is a different story.
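To make the two numbers concrete, here is a minimal sketch of the birthday-bound estimate behind the 2^80 figure: the probability of at least one accidental collision among n random b-bit fingerprints is roughly n^2 / 2^(b+1). The function name is mine, for illustration only.

```python
# Rough birthday-bound estimate: P(at least one collision) among n
# uniformly random `bits`-bit values is approximately n^2 / 2^(bits + 1).
def collision_probability(n: int, bits: int) -> float:
    return n * n / 2 ** (bits + 1)

# For 160-bit fingerprints, ~2^80 keys are needed before an accidental
# collision becomes likely; with 2^40 keys it is still negligible.
print(collision_probability(2**80, 160))  # 0.5
print(collision_probability(2**40, 160))  # ~4e-25
```

The 2^51 figure, by contrast, is the estimated work for a deliberate collision attack against SHA-1 itself, which no amount of honest-key volume protects against.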

I wouldn't assume that real-world OpenPGP systems today are written to handle collisions in 20 byte OpenPGP fingerprints. That would "never" happen in practice.

Furthermore, I think there are probably no systems that deal with
KeyID collisions either. KeyID collisions can happen at a rate of 1/2^32 -- readily observable. I am guessing that the problem is currently avoided by application design, which eliminates scenarios with large numbers of messages signed/encrypted by different users, all in one pile. Usually metadata such as UserIDs, SMTP headers, etc., is used first to filter messages and locate keys.

It seems to me that instead of fixing the software to handle fingerprint collisions, this effort would be better diverted to introducing a new fingerprint instead.

The purpose of a fingerprint is for one human to be able to securely
verify the key of another peer.  This means that they must be short
enough for humans to understand them, and they must resist preimage
attacks (no one should be able to come up with a new key that matches an
arbitrary fingerprint).

RFC 4880 also specifies how fingerprints are stored to identify revokers, and they are truncated to produce KeyIDs. The point is that it's nice to have a single general-purpose method without flaws.
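For reference, RFC 4880 derives both values from the same SHA-1 hash: the V4 fingerprint is SHA-1 over 0x99, a two-octet length, and the public-key packet body, and the 64-bit KeyID is just the low-order 64 bits of that fingerprint. A minimal sketch (the packet body below is a made-up placeholder, not a real key):

```python
import hashlib

# RFC 4880 section 12.2: V4 fingerprint = SHA-1(0x99 || 2-octet length
# || public-key packet body); KeyID = low-order 64 bits of fingerprint.
def v4_fingerprint(pubkey_packet_body: bytes) -> bytes:
    prefix = b"\x99" + len(pubkey_packet_body).to_bytes(2, "big")
    return hashlib.sha1(prefix + pubkey_packet_body).digest()

def keyid(fingerprint: bytes) -> bytes:
    return fingerprint[-8:]  # pure truncation

body = bytes.fromhex("0450a1b2c30100ff")  # placeholder bytes, not a real key
fp = v4_fingerprint(body)
print(fp.hex().upper(), keyid(fp).hex().upper())
```

Because the KeyID is a pure truncation, any collision-resistance shortfall in the fingerprint hash is inherited (and worsened) by the KeyID.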

If your argument is "existing tools misuse fingerprints (and even
keyids) and treat them as unique identifiers when they should not" then
i'd have to agree with you.  We need to fix those tools.  If the
argument is "fingerprints need to be resistant to collision attacks, not
just preimage attacks because we want to be able to use them as unique
identifiers in cases like this that would allow for repudiation of
previously-signed messages where the earlier key was not properly
stored", then you're effectively doubling the length of the required
fingerprint for the sake of some problem better solved another way.  i
think that would defeat (or at least severely damage) the ability for
human beings to actually cognitively process the fingerprint.

I am saying: let's pack as many security bits into fingerprints as the current state of the art allows. A 256-bit fingerprint should mean at least 128-bit security under any scenario, regardless of higher-level protocols. This adds clarity to the security properties of the system and gets rid of the (almost) non-compliant use of SHA-1.
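As a hypothetical sketch of what that could look like (this is not specified in RFC 4880; the 0x9A framing byte and four-octet length are my assumption for a next-version fingerprint): swapping SHA-1 for SHA-256 yields a 256-bit fingerprint with ~128-bit collision resistance.

```python
import hashlib

# Hypothetical next-version fingerprint, NOT RFC 4880: SHA-256 over an
# assumed framing of 0x9A || 4-octet length || public-key packet body.
def new_fingerprint(pubkey_packet_body: bytes) -> bytes:
    prefix = b"\x9a" + len(pubkey_packet_body).to_bytes(4, "big")
    return hashlib.sha256(prefix + pubkey_packet_body).digest()

print(len(new_fingerprint(b"\x04placeholder")) * 8)  # 256 bits
```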

It's arguably too difficult already for humans to accurately compare or
transcribe 160 bits of high-entropy data.  Asking them to compare or
transcribe larger fingerprints seems likely to result in operator error.

I would guess that humans are unlikely to compare full fingerprints anyway. They start from (hopefully :-)) the same end of the hex digits and stop somewhere along the way. If fingerprints become 256 bits long, the change may turn out to be unnoticeable.


openpgp mailing list