
Re: [openpgp] Fingerprints

2015-04-15 15:46:41
On Wed, Apr 15, 2015 at 4:04 PM, Christoph Anton Mitterer
<calestyo@scientia.net> wrote:
On Wed, 2015-04-15 at 12:11 -0700, Jon Callas wrote:
There was a proposal that floated around that defined an extended
fingerprint to be an algorithm number followed by the actual bits.
For example, ASCII-fied 23:ABCDEF0123...FF. There's an obvious binary
representation. There's an obvious way to truncate that as well --
just decide if you truncate little-endian or big. (Personally, despite
being a little-endian bigot, this is a place where network byte order
is even to me the obvious win.)

The ni scheme I linked to does essentially that. What we are
discussing here is the same thing, only with a slightly different
syntax. It is not necessary to separate the algorithm ID from the
fingerprint.
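
To make the idea concrete, a rough Python sketch is below. It is only
an illustration: the choice of SHA-256 for algorithm number 23 and the
helper names are made up for the example, not part of any proposal.

    import hashlib

    def extended_fingerprint(alg_id, key_material):
        # Hypothetical mapping: pretend code point 23 means SHA-256.
        digest = hashlib.sha256(key_material).digest()
        return "%d:%s" % (alg_id, digest.hex().upper())

    def truncate(fingerprint, n_bytes):
        # Keep the leading bytes of the hash, i.e. truncate in network
        # byte order (big-endian), as suggested above.
        alg_id, hex_digest = fingerprint.split(":", 1)
        return "%s:%s" % (alg_id, hex_digest[:2 * n_bytes])

    fp = extended_fingerprint(23, b"example key packet")
    print(fp)                # 23:<64 hex digits>
    print(truncate(fp, 12))  # 23:<24 hex digits>, same leading bytes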

The major advantage of this is that you can define it and then you
never have to change it again. We don't have to have any arguments
over what hash function is proper to use, etc. An implementation can
decide to support or not support whatever.
+1

But shouldn't one rather define the number to be a string?
Sure, a one-byte number with 255 possible future algorithms seems
plenty, but people also once thought that about 32-bit IPv4 addresses,
two-digit year numbers and so on.

It isn't necessary. We just use the same trick Ken Thompson used to
create UTF-8.

Let us say we choose the first 5 bits to be the algorithm identifier,
giving us a maximum of 32 code points.

Now let us say that we have allocated 16 code points. This means that
any fingerprint which begins with a byte in the range 00000xxx to
01111xxx has to use one of those algorithms.

So when we get to code point number 17, we need to expand the
registry. Instead of using just the first Base32 character we might
use the whole of the first byte, which would give us 128 extra code
points. Or we might go to the first 10 bits, which gives us 512 extra
code points.
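
As a sketch of how a parser might read such a prefix, assuming (purely
for illustration) that the first 16 code points occupy the 5-bit
prefixes and that the expansion uses the whole first byte; fingerprint
here is the binary form, a Python bytes object:

    def decode_algorithm(fingerprint):
        first = fingerprint[0]
        if first < 0x80:
            # First byte is 00000xxx - 01111xxx: one of the original
            # 16 five-bit code points.
            return first >> 3
        # Otherwise the unallocated upper half of the byte space is
        # read as a full-byte code point: 128 more values, 16..143.
        return 16 + (first & 0x7F)

Going to the first 10 bits instead would work the same way, only with
the boundary moved further out.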


I do not think it is at all likely we will exhaust the registry in our
lifetime. Since Rivest proposed MD4 in 1990 we have had four hash
algorithms that have been widely used: MD4, MD5, SHA-1 and SHA-2. If
we continue at the same pace it will take a century before we get up
to 16. And that is actually a pretty conservative estimate. Since 1995
we have only had two algorithms.

If we start a registry now with SHA-2 and SHA-3 defined, I see no
reason to expect any need to assign a new code point for another 20
years. It is now 14 years since AES was published and nobody expects
it to be replaced any time soon. Some people, including myself, think
we should have a competition for a second, stronger algorithm for use
as a backup but that is an entirely different matter.

If we consume one code point every 20 years it will be 1600 years
before we need to worry about going beyond the first byte. And even if
we were trying to burn code points as quickly as possible, I can't see
the gaps between the RFCs being less than 2 years. So that's still 160
years.


As with every other key size debate, people can always say we might
need more but there is a tradeoff. At present fingerprints need to be
short to fit in with our legacy paper-based systems. While it is
unlikely that issue will disappear in the next ten years, I don't
expect it to be relevant in 30.

And yes, Bill Gates did suggest 640KB as the limit for RAM in the IBM
PC. But that was not the mistake some folk imagine it to be. The chip
had a hard limit of 1024KB of addressable memory (it did not have
paging) and that had to include the RAM, the ROM and the video memory.
Allowing more memory for programs would have prevented the use of
bitmapped graphics cards. It was an engineering tradeoff.

So is this.

