Daniel A. Nagy wrote:
I think that SHA1 for key identification and fingerprinting is still a
comfortable overkill.
It would seem that most efforts in this direction
fall into two classes - those involving human
interaction and those without.
For the latter, it seems reasonable to always
use full strength - all the bits available - for
every computation. Overkill is not a problem for
computers.
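
To make the machine-only case concrete, a minimal Python sketch
(the function name and the choice of SHA512 are purely illustrative,
per the PS below - the point is just "full strength, no truncation"):

import hashlib
import hmac

def verify_fingerprint(key_bytes: bytes, stored_fp: bytes) -> bool:
    # Machine-to-machine check: always use the full-strength digest.
    # Overkill costs the computer nothing, so nothing is truncated.
    fp = hashlib.sha512(key_bytes).digest()
    return hmac.compare_digest(fp, stored_fp)
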
For human interaction, it is clearly awkward
in some cases - ones that are hard to predict -
to use a full-length identifier. So the notion
could be that each usage of a hash presented to
humans defines its own minimum number of bits,
but also can and will accept a longer or even
full-length hash without complaint.
(In effect that is what is done now, but the
minimums tend to be hard-coded and longer
portions are not accepted at all.)
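
A minimal sketch of that notion, assuming a hypothetical matches()
helper, hex input, and a 32-bit per-usage minimum chosen purely for
illustration:

import hashlib

def matches(presented_hex: str, key_bytes: bytes, min_bits: int = 32) -> bool:
    # Each usage defines its own minimum (min_bits), but any longer
    # hex prefix, up to the full digest, is accepted without complaint.
    presented = presented_hex.replace(".", "").replace(" ", "").lower()
    if len(presented) * 4 < min_bits:
        return False                       # shorter than this usage allows
    full = hashlib.sha512(key_bytes).hexdigest()
    return full.startswith(presented)      # longer prefixes always fine
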
From this pov, we should
* use the best hash available (e.g. SHA512, excessively)
* convert to the display form as late as possible in the code
* make sure that the "short form" convention is understood
by everything that accepts hashes as input
* and is recognisable/parsable in its text form.
(Something like a dotted notation: DEAD.BEEF would
give us 32 bits, DEAD.BEEF.C001.D00D would be
64 bits. These are relatively easy to parse, easier
than the whitespace-separated form; a rough sketch
follows below.)
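
Something like this, where the 16-bit group size and the helper
names are my own assumptions, not part of any proposal:

import hashlib

def to_display(digest: bytes, bits: int = 32) -> str:
    # Render the leading `bits` of a digest as dotted 16-bit groups,
    # e.g. bits=32 -> 'DEAD.BEEF', bits=64 -> four such groups.
    hexstr = digest[: bits // 8].hex().upper()
    return ".".join(hexstr[i:i + 4] for i in range(0, len(hexstr), 4))

def from_display(text: str) -> bytes:
    # Parse the dotted short form back into bytes; any number of
    # groups is accepted, up to the full-length hash.
    return bytes.fromhex(text.replace(".", ""))

# Convert to the display form as late as possible, i.e. only here:
digest = hashlib.sha512(b"example key material").digest()
print(to_display(digest, 64))
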
iang
PS: I'm not recommending SHA512 here, just conceptualising
the software engineering principles involved.