ietf-openpgp

Re: PGP Keyserver Synchronization Protocol

1999-06-30 08:56:06
In <Pine.GSO.4.02A.9906240749020.12077-100000@hardees.rutgers.edu>, on
06/24/99 at 08:03 AM, Tony Mione <mione@hardees.Rutgers.EDU> said:


On Wed, 23 Jun 1999, William H. Geiger III wrote:

In <Pine.GSO.4.02A.9906222239500.23439-100000@hardees.rutgers.edu>, on
06/22/99 at 10:46 PM, Tony Mione <mione@hardees.Rutgers.EDU> said:

Overview:
...

I'm a bit confused here. List #2 contains all keys on the client that are
NOT on the server. How can the server send such a list? 

The server sends the entire hash list of all its keys. The client, by
comparing it against the hash list it has generated locally, builds lists
#1, #2, and #3.
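
A rough sketch in Python of how the client could build these lists (the
{key_id: hash} table layout and the function name here are just
illustrative, not part of the proposal; list #1 is keys present only on
the server, #2 keys present only on the client, #3 keys present on both
but with differing hashes):

    def partition_keys(client_hashes, server_hashes):
        # Keys the server has that the client lacks (list #1).
        list1 = [k for k in server_hashes if k not in client_hashes]
        # Keys the client has that the server lacks (list #2).
        list2 = [k for k in client_hashes if k not in server_hashes]
        # Keys on both sides whose hashes differ (list #3).
        list3 = [k for k in client_hashes
                 if k in server_hashes
                 and client_hashes[k] != server_hashes[k]]
        return list1, list2, list3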


IF you meant list #3, how can you tell which keys from the server are
actually newer than the client's keys? Just because they are different
does not mean that the server (rather than the requester) has the most
up-to-date copy of the key.

Yes, that text should have read: "all the keys in list #1 & #3"

There is no efficient way to determine who has the most up-to-date version
of a key. Hopefully, with periodic server synchronization, #3 should be a
small list.


This concerns me. It sounds non-deterministic. If that means that an
older copy of one of my keys may back-propagate to a server that has the
correct up-to-date copy, I would have a problem with this. I don't think
it really matters whether list 3 is large or small. You need to be
correct about which is the most recent copy of the key. This probably
requires additional thought (personally, I have always HATED dealing with
database update issues :)

This is not a problem because of the way PGP handles keys: nothing is ever
removed from a PGP key, it is only added to. If a key in #3 is not the
"newer" key, it means that it has *fewer* packets than the client's "newer"
copy of the key. When this "older" key is merged into the database, nothing
will change.

Example:

server #1 (server) has the following key:

KeyID  1111111111111111
UserID John Doe
Sig    John Doe

Hash   EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE


server #2 (client) has the following key:

KeyID 1111111111111111
UserID John Doe
Sig    John Doe
Sig    Jim Brown

Hash   FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF


Now when the client compares the 2 hashes he knows that the key he has and
the key the server has are different, but he has no way of knowing *why*
they are different until he downloads the key. So the client does the
following:

Download key from server #1
Merge key into client database

After merging the key in the client database we have:

KeyID 1111111111111111
UserID John Doe
Sig    John Doe
Sig    Jim Brown

Hash   FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF


Nothing has changed on the client machine, as no new packets for the key
were downloaded. The client can now assume that it has packets for the key
that the server does not, and uploads his copy of the key back to the
server.
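
In rough Python terms (a sketch only, modelling a key block as a set of
packets and assuming a canonical key_hash() function like the one
described further down), the client's decision after downloading a
differing key amounts to:

    def sync_differing_key(client_key, server_key, key_hash):
        # Merging is a pure union: packets are only ever added, never removed.
        merged = client_key | server_key
        if key_hash(merged) != key_hash(server_key):
            # The merged copy still differs from the server's, so the client
            # holds packets the server lacks and sends its copy back.
            return merged, "upload to server"
        return merged, "in sync"

This covers both the case above (nothing changed on the client, so it
simply uploads its copy) and the case below where each side holds packets
the other lacks.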
 

It is also important to note that it is possible for *both* server &
client to have a "newer" key, i.e. each has packets for the key that the
other does not have. Take the following example:

John Doe signs key #1 and uploads it to server #1
Jim Brown signs key #1 and uploads it to server #2

Now servers #1 & #2 both have packets for key #1 that the other does not
have.
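
Treating each key block as a set of packets (the packet names below are
made up for illustration), a two-way exchange still converges, since both
sides end up with the union:

    server1 = {"key1", "uid_john_doe", "sig_john_doe"}
    server2 = {"key1", "uid_john_doe", "sig_jim_brown"}

    assert (server1 | server2) == (server2 | server1) == {
        "key1", "uid_john_doe", "sig_john_doe", "sig_jim_brown"}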



generate a new hash table for his database and compare it to the hash
table he has from the server. He should now only have 2 lists:

  #1 All Keys on the client but not on the server
  #2 All Keys on the client & server but have different hashes

The client should then send the server all keys that are in the 2 lists.


Same comment about who really has the most up-to-date key.

Once the client has downloaded a key from the server and updated its
database, if the hashes are still different then the client must have the
"newer" data.


I'm being dense again: if the client has downloaded and updated its
database, should not the hash now match (providing the hashing is done in
a defined order as you illustrate below)?

See my example above.

Calculating Hashes:

  It is important that when calculating the hash of a key it be done in a
specific order, so as to guarantee that an identical key in two different
databases has the same hash. The order of operations for calculating the
hash is:

  Primary Key
  SubKeys Sorted by KeyID
  Signatures Sorted by Signing KeyID
  UserIDs Sorted Alphabetically

PhotoIDs & X.509 signatures are 2 issues that need to be looked into.
Unfortunately, the specs for neither of these packets have been released
by NAI. For right now I recommend that any proprietary packets *not* be
used in the key hash calculations.
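
A sketch of this ordering in Python (the packet attribute names and the
choice of hash algorithm are assumptions for illustration; the proposal
itself does not fix them):

    import hashlib

    def key_hash(primary, subkeys, signatures, user_ids):
        h = hashlib.md5()   # actual algorithm to be decided by the proposal
        h.update(primary.raw_bytes)
        for sub in sorted(subkeys, key=lambda p: p.key_id):
            h.update(sub.raw_bytes)
        for sig in sorted(signatures, key=lambda p: p.signing_key_id):
            h.update(sig.raw_bytes)
        for uid in sorted(user_ids, key=lambda p: p.user_id_string):
            h.update(uid.raw_bytes)
        return h.hexdigest()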


Does it really matter if you do not know the internal packet format as
long as you know where the packet ends? Hashing is simply mixing together
a stream of octets and so I do not believe the 'format' makes much of a
difference.

Well, the problem with unknown and/or undocumented packet formats is that
of sorting. For two different systems to come up with the same hash for an
identical key, they must calculate the hash in the same manner. I have
outlined above how to do this with known OpenPGP packets. It is difficult
to set up similar procedures for packets whose formats are unknown.


Sigh...
      Yeah, I see that. However, I agree it should be looked into. I would
like to know that adding a proprietary extension to a key block will
trigger an update at the next synchronization. If these are not included
in the hash, the keyblock will never propagate unless another (hashed)
change is made.

Well, this is the problem with proprietary extensions. Not that I have a
problem with different vendors adding things that are particular to their
needs, but they need to document them if they expect others to be able to
process them. I think that both the X.509 sigs & the PhotoIDs can be
accommodated without too much fuss; I have not had the time to dig into
their formats yet. In any case the system needs to have a mechanism for
what to do if it encounters packets it does not understand. IMHO the best
thing to do is to ignore those packets in the hash calculations until they
can be analyzed and the code modified to accommodate them.
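
In the same Python sketch as above, that rule would amount to filtering on
the OpenPGP packet tag before hashing (the tag attribute and the exact set
of "known" tags here are illustrative only):

    # RFC 2440 tags: 2 = signature, 6 = public key, 13 = user ID,
    # 14 = public subkey.  Anything else is skipped for now.
    KNOWN_TAGS = {2, 6, 13, 14}

    def hashable_packets(packets):
        return [p for p in packets if p.tag in KNOWN_TAGS]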

-- 
---------------------------------------------------------------
William H. Geiger III  http://www.openpgp.net
Geiger Consulting    Cooking With Warp 4.0

Author of E-Secure - PGP Front End for MR/2 Ice
PGP & MR/2 the only way for secure e-mail.
OS/2 PGP 5.0 at: http://www.openpgp.net/pgp.html
Talk About PGP on IRC EFNet Channel: #pgp Nick: whgiii

Hi Jeff!! :)
---------------------------------------------------------------