Sorry for the mailing list bomb, but I wanted to get a few more thoughts on the
list while they’re still fresh. Moving back to the question of what the
“fingerprint scheme” itself might look like (the function that takes an
octet-string and produces an ASCII-string), which may not really be
OpenPGP-specific…
One of the potential goals that’s been discussed is to give users some form of
“fingerprint-mining protection”, or defense against attackers mining for keys
with similar-looking fingerprints. This could potentially be provided either
by playing with “what gets fingerprinted” (the topic of the E-mail I just
posted), or by playing with the fingerprint scheme itself (the topic of this
E-mail).
Here’s one specific idea for a fingerprint scheme with configurable mining
protection, which builds on ideas suggested earlier by Christian Huitema and
Phillip Hallam-Baker and others:
* Key creation: OpenPGP implementation first picks a hardness parameter H
determining level of mining protection. Implementations could provide a
reasonable default for this, while allowing power-users to tweak it if they
want. Iterate the following loop until successful:
1. Generate a public/private key-pair. Call this public-key K.
2. Take a hash of K and some domain-separation context string, yielding
a “nonce” N.
3. Use nonce N as the nonce input to an Argon2 proof-of-work, using
otherwise fixed, “reasonable” Argon2 configuration parameters.
4. Compute a SHA-512 hash over K and this Argon2 proof-of-work output;
this is the “pre-fingerprint”.
5. Unless the resulting pre-fingerprint has exactly H leading zero bits,
go back to step 1 and retry with a fresh keypair.
6. Form the fingerprint to present to the user as a one-digit encoding
of H followed by an encoding of the last 256 bits of the pre-fingerprint.
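To make the loop concrete, here’s a rough Python sketch of the key-creation
side. It’s only a sketch under stated assumptions: a random 32-byte string
stands in for real key generation, hashlib.scrypt stands in for Argon2 (the
Python standard library has no Argon2 binding), and the context string and
cost parameters are illustrative placeholders, not proposed values.

```python
import hashlib
import os

CONTEXT = b"OpenPGP-fpr-v1"  # hypothetical domain-separation string


def prefingerprint(pubkey: bytes) -> bytes:
    # Step 2: domain-separated hash of the public key yields nonce N.
    n = hashlib.sha512(CONTEXT + pubkey).digest()
    # Step 3: memory-hard PoW keyed by N; scrypt stands in for Argon2
    # here, with small illustrative cost parameters.
    pow_out = hashlib.scrypt(pubkey, salt=n[:16], n=2**12, r=8, p=1, dklen=32)
    # Step 4: SHA-512 over K and the PoW output is the pre-fingerprint.
    return hashlib.sha512(pubkey + pow_out).digest()


def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte:
            return bits + 8 - byte.bit_length()
        bits += 8
    return bits


def mine_key(hardness: int) -> tuple[bytes, str]:
    while True:
        k = os.urandom(32)  # Step 1: stand-in for real key generation
        pre = prefingerprint(k)
        # Step 5: accept only a pre-fingerprint with exactly H leading zeros.
        if leading_zero_bits(pre) == hardness:
            # Step 6: one-digit hardness, then the last 256 bits.
            return k, f"{hardness}:{pre[-32:].hex()}"
```

Note that once the loop exits, the fingerprint is a pure function of K, so
an importer can reproduce it by re-running prefingerprint once.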
* Fingerprint verification/recomputation on OpenPGP key import: take
public-key K, hash it as above to compute the Argon2 nonce N, run a single
Argon2 PoW round, and verify the resulting SHA-512 hash against the proposed
fingerprint. The correct fingerprint is then recomputed from the number of
leading zero bits found in the hash and the last 256 bits of the SHA-512
output.
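The import-time check is just one pass of the same pipeline. The sketch
below makes the same assumptions as before (hashlib.scrypt standing in for
Argon2, a hypothetical context string, and placeholder cost parameters):

```python
import hashlib

CONTEXT = b"OpenPGP-fpr-v1"  # hypothetical domain-separation string


def verify_fingerprint(pubkey: bytes, claimed: str) -> bool:
    # Recompute nonce N and run a single PoW round (scrypt as stand-in).
    n = hashlib.sha512(CONTEXT + pubkey).digest()
    pow_out = hashlib.scrypt(pubkey, salt=n[:16], n=2**12, r=8, p=1, dklen=32)
    pre = hashlib.sha512(pubkey + pow_out).digest()
    # Count leading zero bits of the pre-fingerprint to recover H.
    bits = 0
    for byte in pre:
        if byte:
            bits += 8 - byte.bit_length()
            break
        bits += 8
    # Rebuild the fingerprint (one-digit H, then last 256 bits) and compare.
    return claimed == f"{bits}:{pre[-32:].hex()}"
```

The verifier pays for exactly one PoW evaluation per import, regardless of
how hard the creator mined.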
The potentially beneficial properties of this approach are:
- Once the public/private key pair is created, the fingerprint depends on the
public key and nothing else: i.e., it is “key-canonical”, as per the preference
DKG expressed, keeping things simple.
- Users (and OpenPGP implementations) can configure the hardness parameter, in
particular gradually increasing it over time, so that the difficulty of
fingerprint-mining attacks for newly-created keys can keep pace with the
increasing computational abilities of attackers. Average users shouldn’t
necessarily be expected to see or be aware of this at all; they just get keys
with stronger fingerprints when they use newer software/machines to generate
them.
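For a rough feel of what raising H costs the honest key creator: each loop
iteration succeeds with probability roughly 2^-H, so the expected work is on
the order of 2^H iterations. A back-of-the-envelope helper (the 50 ms per
iteration figure is purely illustrative, not a measured Argon2 cost):

```python
def expected_mining_seconds(hardness_bits: int,
                            secs_per_iter: float = 0.05) -> float:
    # Each iteration (keygen + small Argon2 PoW + SHA-512) succeeds with
    # probability about 2**-H, so expect about 2**H iterations.
    return (2 ** hardness_bits) * secs_per_iter
```

At those assumed numbers, H=16 is roughly an hour of one-time key-creation
work, and each extra bit of H doubles both the creator’s and a miner’s
expected cost.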
- The use of an Argon2 PoW step in the “inner loop” makes it more difficult for
attackers to do fingerprint-mining just by being very good at computing SHAs -
e.g., as PHB mentioned, by using a big bank of GPUs or a bunch of repurposed
Bitcoin mining hardware. We don’t want to make all fingerprint-verifiers solve
a “big” Argon2 PoW (because the receiver might be a smartphone with limited
memory for example), but even just including a small/moderate Argon2 PoW in the
key-creation mining loop might reduce a typical attacker’s advantage
considerably.
Just as an aside, this hybrid between Argon2 and Bitcoin-style mining is really
just kind of a “poor-man’s” solution to what we really want, which is an
easily-verifiable “slow hash” as Arjen Lenstra’s group proposed in this paper:
https://eprint.iacr.org/2015/366
Cheers
Bryan
_______________________________________________
openpgp mailing list
openpgp@ietf.org
https://www.ietf.org/mailman/listinfo/openpgp