In the OpenPGP meeting, Christian brought up an idea that I thought was
interesting and perhaps deserved further consideration.
Background summary: In the meeting a hum-poll was taken on the “single vs
multiple fingerprints” question, whose result I interpreted to mean that we
should *not* create a system that subjects users to juggling multiple,
inconsistent fingerprints for the same key (e.g., both a SHA-1 and a
newer-hash-function-based fingerprint for the same user’s public key). A
“strong interpretation” of that position is that we should pick a single new
hash function for “new fingerprints”, and mandate that all keys created with
“new signature schemes” (e.g., Ed448) have fingerprints computed using that new
scheme, while leaving keys of old schemes (e.g., RSA/DSA keypairs)
fingerprinted using the old approach to preserve consistency. A “weaker
interpretation” would be that each new signature scheme defined for use with
OpenPGP should also define a suitable fingerprinting scheme to go along with
it, but the fingerprinting scheme may (in principle) vary from one “new”
keypair-type to another, provided it remains consistent for any given keypair.
I see that the results of this hum-poll weren’t precisely captured in Rich’s
meeting minutes - understandable, since the results (or their proper
interpretation) were a bit fuzzy to me as well, and I don’t feel confident
suggesting exactly how that part of the minutes should be filled in.
However, later in the meeting Christian brought up an interesting possible
approach to strengthening key-fingerprints against prefix-attacks. For
example, Eve wants to impersonate Alice, sees that Alice’s fingerprint is
0x0413ecf2fd5e…, assumes (likely correctly) that most users are never going to
eyeball-check more than the first 8-10 digits or so, and simply invests some
CPU time into hunting for a public key whose fingerprint starts with (say)
0x0413ecf2fd. This is of course the essential weakness that the availability
of “vanity fingerprints” leads to: if you can feasibly obtain a key with that
vanity prefix, then likely so can someone else who wants to impersonate you.
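To make the attack concrete, here is a toy sketch of Eve’s search loop (my own
illustration, not anything from the meeting: SHA-256 over random bytes stands
in for hashing freshly generated public keys, and the expected cost is
~16^len(prefix) hashes):

```python
import hashlib
import os

def forge_prefix(prefix: str):
    """Brute-force key material whose (plain, non-PoW) SHA-256 fingerprint
    starts with `prefix`; expected cost is ~16^len(prefix) hashes."""
    tries = 0
    while True:
        candidate = os.urandom(32)   # stand-in for generating a fresh keypair
        tries += 1
        if hashlib.sha256(candidate).hexdigest().startswith(prefix):
            return candidate, tries

# Eve's real target would be Alice's 10-digit prefix "0413ecf2fd"
# (~16^10, about 10^12 hashes); a 2-digit prefix finishes instantly:
candidate, tries = forge_prefix("ab")
```

With a short prefix this returns after a couple hundred tries; scaling the
prefix to 10 hex digits is exactly the CPU-time investment described above.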
My understanding of Christian’s idea (please correct me if anything is
inaccurate) is that we include some form of adjustable proof-of-work (PoW) in
the fingerprint creation process. For example, the first digit of any
fingerprint might indicate the general scheme (as suggested elsewhere already),
and the second digit of the fingerprint might indicate the “hardness parameter”
for the scheme’s proof-of-work. If I’m generating a new key-pair for myself,
my OpenPGP implementation might either allow me to choose the hardness
parameter, or simply make some “reasonable” choice on my behalf, such as the
strongest PoW that completes in whatever amount of time the user is willing to
wait for key generation on his/her device. I envision a “nice” UI embodiment
of this might be for the key-generator to display, “Strengthening key
fingerprint - currently at strength level # - click OK at any time to stop and
accept this strength level” - where # gradually increases as the key-generator
solves progressively tougher PoWs.
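That progressive-strengthening loop might look roughly like the following
sketch (every detail here is my own assumption, not a proposal: SHA-256 over
the key plus a random 16-byte salt, hardness K meaning K leading zero bits,
and a wall-clock budget standing in for the user clicking OK):

```python
import hashlib
import os
import time

def solve_pow(pubkey: bytes, k: int, deadline: float):
    """Search for a salt such that SHA-256(pubkey || salt) has k leading
    zero bits; give up (return None) once the deadline passes."""
    while time.monotonic() < deadline:
        salt = os.urandom(16)
        digest = hashlib.sha256(pubkey + salt).digest()
        if int.from_bytes(digest, "big") >> (256 - k) == 0:
            return salt
    return None

def strongest_pow(pubkey: bytes, budget_seconds: float):
    """Keep raising the hardness level until the time budget runs out,
    mimicking the 'currently at strength level #' UI described above."""
    deadline = time.monotonic() + budget_seconds
    k, best_salt = 0, b""
    while True:
        salt = solve_pow(pubkey, k + 1, deadline)
        if salt is None:
            return k, best_salt   # strongest level completed in time
        k, best_salt = k + 1, salt
```

Each additional level roughly doubles the expected work, so the displayed
strength number climbs quickly at first and then slows, which matches the
“click OK at any time” interaction nicely.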
Once this fingerprint-generation process completes, the OpenPGP key-generator
then includes the PoW-protected fingerprint in the keypair’s self-signed public
key record. Anyone can quickly and easily verify such a PoW-protected
fingerprint against both the public-key it is associated with and the
hardness-factor it is configured with (the second digit of the fingerprint).
But now if Eve wants to prefix-attack Alice’s public key by finding another key
whose PoW-protected fingerprint matches in the first (say) 10 digits, she must
compute on the order of B^8 proofs-of-work of the same strength as Alice’s (so
the 2nd digit agrees) and using the same scheme (so the 1st digit agrees),
where B is whatever base the fingerprint scheme uses (e.g., 16, 32, 64).
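To put rough numbers on Eve’s cost (illustrative arithmetic only, assuming a
hash-based PoW where a hardness of K costs about 2^K hash evaluations per
solve; B = 16 and K = 20 are arbitrary example values):

```python
# Eve must match 10 digits: digit 1 (scheme) and digit 2 (hardness) are
# forced, leaving 8 free digits, so she expects ~B^8 PoW solves.
B = 16          # fingerprint digit base (hex); 32 or 64 raises the cost further
K = 20          # assumed hardness: 20 leading zero bits per solve
solves = B ** 8                 # expected number of PoW solves (~4.3e9)
hashes = solves * 2 ** K        # total hash evaluations (~4.5e15)
print(f"~{solves:.2e} solves, ~{hashes:.2e} hashes")
```

Compare that to the ~B^8 plain hashes the attack costs today with no PoW at
all: the hardness parameter multiplies Eve’s work by 2^K while costing
verifiers only a single hash.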
This approach to fingerprint generation has the slightly-odd (certainly
unconventional) property that a single public/private keypair does not have
only one possible, deterministically-computed fingerprint (i.e., a hash of the
public key), but rather may in principle have many different possible
fingerprints (parameterized by the PoW and the salt that will inevitably be
required in that PoW). This might seem to violate the “fingerprint
consistency” property that was discussed at the meeting. However, as
summarized above, my perception is that the main fingerprint consistency
concern is that we do not want to subject users to multiple different
fingerprints *for a single key*. In Christian’s scheme, while any key could
“in principle” have many different fingerprints, as long as the user generating
the key (or the user’s OpenPGP implementation) picks one particular fingerprint
and binds that fingerprint to the key in a fully verifiable fashion as part of
the self-signed public-key record, the fact that fingerprint-generation is
parameterized creates no user-perceivable fingerprint-consistency issue that I
can discern.
Because of this, I have to say that I like the idea quite a lot in general,
and don’t see many downsides beyond some (significant, but I think manageable)
implementation complexity in key-generation and key-verification for keypairs
produced with new signing schemes.
There is of course the second-order question of “what type of PoW, exactly?”
Here are some (slightly random and divergent) thoughts on this design space:
1. Just use a simple Bitcoin-like, hash-based PoW: i.e., if hardness is K, you
must hash the public key with different salts until you find one whose hash
starts with K zero-bits; then you use the remaining bits of the hash to form
the “actual” key fingerprint. I think this is more-or-less the PoW that
Christian was suggesting, and seems like a reasonable baseline design point
(and may well be the only one we should consider initially).
2. A “memory-hard” salted-hash scheme, such as the Argon2 scheme to be used for
passphrase hashing. Memory-hardness would be nice to achieve, but schemes like
Argon2 may not be directly realistic in this context, because password-hashing
schemes such as this by design take a lot of work both at creation *and*
verification time, and we probably don’t want to impose seconds-long delays on
(say) importing someone’s key into my keyring and verifying its consistency.
It might not be a complete non-starter provided those delays *only* occur
during key-import and not every time I touch or use the key for any purpose, but
it would still be a downer. Are there “memory-hard-to-create, but
quick-to-verify” PoW schemes that might be worth considering?
3. Improving on Bitcoin-like PoWs in another direction, the recent “Random Zoo”
paper by Arjen Lenstra’s group at EPFL suggested a hard-to-generate,
easy-to-check PoW scheme that also tries to be *hard to parallelize*. In
principle it would be very nice if we could achieve such a hard-to-parallelize
property in protection against fingerprint-prefix attacks. However, it’s
unfortunately not as simple as just, say, adopting the modular-root-computation
primitive that the Random Zoo scheme uses for choosing bias-resistant random
numbers, because in the PoW-protected fingerprint application Eve would still
have available to her the “embarrassingly parallel” opportunity of running in
parallel all her (say) B^8 attempts to create a key whose PoW-protected
fingerprint looks like Alice’s. This suggests the question: is it possible to
limit an attacker’s (Eve’s) number of concurrent, parallel “attack threads”
using different keys? I don’t know, and it’s probably not easy, but perhaps
worth thinking about.
4. Leading in perhaps an even more far-fetched but interesting direction, is it
possible to ensure that Eve must incur some large *financial* cost in order to
prefix-attack Alice’s key successfully? As a straw-man approach, suppose the
work-factor encoded in the 2nd digit of such a fingerprint scheme encodes the
log2 of a number of Bitcoins, which the key-creator must “verifiably discard”
in order to validate the fingerprint. (It’s easy to create a Bitcoin
transaction that verifiably discards money: just sign the coins over to a
public key whose value is the output of a hash, to which w.h.p. no one is ever
going to be able to find a corresponding private key with which to spend the
coins.) Thus, Alice must obtain and verifiably discard a small amount of
Bitcoin - the amount of her choosing - when generating the key; then if Eve
wants to prefix-attack Alice’s key she must spend around B^8 times the amount
of Bitcoin that Alice spent in order to match the first 10 digits. The obvious
practical issue here is that both the key-generator and anyone importing and
checking Alice’s key needs to talk to the Bitcoin network to verify the
presence of the relevant coin-discarding transaction in the blockchain. Again
not necessarily a complete show-stopper in principle as long as this typically
only has to be done during key-generation and key-import, but still I’m not
sure how many OpenPGP implementors necessarily want to incorporate a Bitcoin
client into their OpenPGP implementations. :)
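For concreteness, approach 1 above could be sketched end-to-end like this (the
encoding is hypothetical, not anything agreed at the meeting: first hex digit =
scheme id, second = hardness K, remaining digits taken from SHA-256 of the key
plus a random salt; note that a single hex hardness digit would cap K at 15 in
this toy):

```python
import hashlib
import os

def pow_fingerprint(pubkey: bytes, scheme: int, k: int):
    """Solve the PoW, then build the fingerprint: scheme digit, hardness
    digit, then 38 hex digits taken from SHA-256(pubkey || salt)."""
    while True:
        salt = os.urandom(16)
        digest = hashlib.sha256(pubkey + salt).digest()
        if int.from_bytes(digest, "big") >> (256 - k) == 0:
            return f"{scheme:x}{k:x}" + digest.hex()[:38], salt

def verify_fingerprint(pubkey: bytes, salt: bytes, fpr: str) -> bool:
    """Cheap check: one hash confirms both the PoW (k leading zero bits,
    read from digit 2) and that the digits match this key and salt."""
    k = int(fpr[1], 16)
    digest = hashlib.sha256(pubkey + salt).digest()
    return (int.from_bytes(digest, "big") >> (256 - k) == 0
            and fpr[2:] == digest.hex()[:38])

key = b"alice's public key bytes"
fpr, salt = pow_fingerprint(key, scheme=4, k=8)   # k=8: ~256 tries, demo only
assert verify_fingerprint(key, salt, fpr)
```

Generation costs ~2^k hashes but verification costs exactly one, which is the
asymmetry that makes this attractive for key-import checks; the salt would
travel with the self-signed public-key record.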
At any rate, independent of these varying possible approaches to fingerprint
PoWs, I feel like at least the first approach above that Christian suggested
(simple PoW) seems practical, offers a nice parametrizable strengthening
against prefix attacks, and doesn’t violate the essential consistency issue
that users should need to deal with only one fingerprint *per keypair*. And if
we were careful in specifying how the fingerprint-generation and
fingerprint-validation mechanism works, we could easily leave the door open to
different, further strengthened (and perhaps user-selectable) fingerprint
protection mechanisms later. Thoughts?
Thanks
Bryan
_______________________________________________
openpgp mailing list
openpgp(_at_)ietf(_dot_)org
https://www.ietf.org/mailman/listinfo/openpgp