
Re: [openpgp] Fingerprints and their collision resistance

2013-01-07 02:30:50
Part of the problem here is that claiming uniqueness to a cryptographic level is easier said than done. Typically, if we are dealing with larger keys and much more aggressive attackers, we'd want to up the size of the hash to the larger SHA-2/SHA-3 variants. Which means that ordinary users, with their human comparison function, are going to be left behind. Time factors also push us toward being too conservative.

For about 10-15 years we had relative peace with this question, because we had a single strong SHA-1 that survived well. That is unusual in the history of cryptographic computing, and we can see the reflection of this lowered expectation in later algorithms: AES, SHA-2 and SHA-3 all offer a range of strengths. So perhaps we have been lulled into a sense that we can do two or more not-so-complementary things with one algorithm.

I cannot see how we can combine the unique programmatic identifier and the human fingerprint function in a single fixed hash. Basically, humans can be relied upon never to go beyond the 160-bit length of SHA-1, and even the dedicated struggle to get that onto a business card; 80 bits is far more sensible for them.
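
To put numbers on that (my arithmetic, nothing from any spec): 160 bits is 40 hex digits, ten groups of four, while 80 bits halves it. A quick Python sketch of what the human actually has to compare:

    import hashlib

    # 160-bit SHA-1 fingerprint: 40 hex digits, usually shown in groups of 4.
    fpr = hashlib.sha1(b"example key material").hexdigest().upper()
    groups = " ".join(fpr[i:i+4] for i in range(0, len(fpr), 4))
    print(groups)       # 10 groups of 4 - the business-card struggle
    print(groups[:24])  # first 80 bits: 5 groups, far easier on humans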

So we seem to be facing two possible directions. First: stick with SHA-1, and have the fingerprint for people only. If programs want more, it is up to them to extract the key material and generate their own unique identifier. I do this all the time in my code, and it isn't a bother; there are benefits in doing the right thing up front.
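
Something like this minimal Python sketch (illustrative only, not my actual code; `key_bytes` is assumed to hold the serialized public key material from whatever OpenPGP library you use):

    import hashlib

    def program_key_id(key_bytes: bytes) -> str:
        # A 256-bit digest: programs get all the collision resistance
        # they want, independent of the human-facing fingerprint.
        return hashlib.sha256(key_bytes).hexdigest()

    key_bytes = b"...serialized public key material..."
    print(program_key_id(key_bytes))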

Alternatively, if we combine these roles, what we might need is an agile *length* but one fixed algorithm. Right now we have the KeyID, the fingerprint, and something else I forget - two or three lengths extracted from the same algorithm. To pursue that, we could simply stick in the longest, biggest, baddest Keccak, write up a process for truncating it to different lengths, and then match those up to the key sizes.
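
As a sketch of what that truncation process could look like (SHA3-512 standing in for the biggest Keccak; the roles and lengths below are illustrative, not a proposal):

    import hashlib

    def truncated_id(key_bytes: bytes, bits: int) -> str:
        # One fixed algorithm, agile length: compute the full digest once,
        # then keep the leftmost `bits` bits for the role at hand.
        full = hashlib.sha3_512(key_bytes).digest()   # 512 bits
        return full[: bits // 8].hex()

    key_bytes = b"...serialized public key material..."
    print(truncated_id(key_bytes, 64))    # KeyID-sized
    print(truncated_id(key_bytes, 160))   # fingerprint-sized, human scale
    print(truncated_id(key_bytes, 512))   # full strength, for programs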

(Or, option 3 - as Jon points out, ECC is looking good, and just use the key as is. That has many advantages.)
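
To make option 3 concrete - a sketch using the third-party Python `cryptography` package, which is my assumption here, not anything in OpenPGP - an Ed25519 public key is 32 raw bytes, already digest-sized, so it can stand as its own identifier:

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    pub = Ed25519PrivateKey.generate().public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    print(len(pub), pub.hex())  # 32 bytes - same size as a SHA-256 digest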



iang



On 7/01/13 10:20 AM, Nicholas Cole wrote:



On Thu, Jan 3, 2013 at 10:54 PM, Werner Koch <wk(_at_)gnupg(_dot_)org> wrote:

    On Thu, 3 Jan 2013 20:06, openpgp(_at_)brainhub(_dot_)org said:


     > export/import control of encryption). Fingerprints are special data
     > structures because they are sometimes input by humans.

    Well, humans compare fingerprints but don't enter them.  I doubt that I
    ever did this in the last 20 years.


Yes.  And it is also important that there is a way to 'uniquely'
(granted the *very* small chance of a collision - I think there has been
only one possible collision with SHA-1 fingerprints reported on the
gnupg list) identify keys to other programs.  I suspect that a lot of
programs using gnupg and other implementations expect the fingerprint to
be unique.  There does have to be a reliable way to refer to a
particular key.
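
To put "very small" in numbers, the birthday bound for n random 160-bit fingerprints is roughly n^2 / 2^161 (back-of-envelope, assuming honestly generated keys):

    # Birthday bound for accidental collisions among 160-bit fingerprints.
    n = 10.0**10             # a generous guess at keys ever generated
    p = n * n / 2.0**161     # P(collision) ~= n^2 / 2^161
    print(f"{p:.1e}")        # ~3.4e-29 - negligible by accident (attacks aside)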

So fingerprints are compared by humans, but they are also important for
computers - and probably used more by computers than by humans.  I
don't see the sense in adopting a truncated standard.  Any new
fingerprint is going to be more tedious to compare than SHA-1, but that's
the price to be paid for security.

I suppose that humans will start relying more on the key-id.  I assume
that any new standard would adopt a more collision-resistant key-id.
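
For reference, the current scheme per RFC 4880 ties the two together: a v4 key-id is simply the low-order 64 bits of the SHA-1 fingerprint (the fingerprint below is a made-up example):

    # RFC 4880: the v4 key ID is the low-order 64 bits of the fingerprint.
    fpr = bytes.fromhex("90A92EAF045F93FDE23800F1B9E6C443F3876BB4")  # made-up
    print(fpr[-8:].hex().upper())  # the "long" key ID; the short form is 32 bits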

N.






_______________________________________________
openpgp mailing list
openpgp(_at_)ietf(_dot_)org
https://www.ietf.org/mailman/listinfo/openpgp