ietf-openpgp

Re: Message Integrity

1999-04-21 17:40:20
On Wed, 21 Apr 1999, Jon Callas wrote:

My apologies for not jumping in here sooner.

The consensus that I've seen is against overloading message integrity on
signature packets. We also discussed it in Orlando, and there was great
consensus against it there. I confess that personally, I also question the
wisdom of separating them. Especially if it requires a shared key.

Or a well defined "anonymous" sign-only key, which is what I think Adam
Back was proposing.

A scheme that has been discussed, and I thought we had a lot of agreement
on even back in Orlando was to solve (1) and (2) with a new encrypted data
packet that used a standard encryption mode and appended a hash at the end.
I've been in discussion with several people who've offered several opinions
on what that hash can or should be, and I believe that the best thing to do
is just make it a SHA-1 hash, as a minimal OpenPGP implementation must
already have SHA-1 in it. No one who's done any work on it has come up with
a different solution that is better.
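
If I read that right, the proposed layout amounts to something like the
following (a sketch only; encrypt/decrypt stand in for whatever cipher and
mode get picked, and the helper names are made up):

    import hashlib

    def build_body(plaintext, encrypt):
        # Proposed packet body as literally described: encrypted data
        # followed by a 20-octet SHA-1 of the plaintext, appended at the end.
        return encrypt(plaintext) + hashlib.sha1(plaintext).digest()

    def check_body(body, decrypt):
        ciphertext, mic = body[:-20], body[-20:]
        plaintext = decrypt(ciphertext)
        return plaintext, hashlib.sha1(plaintext).digest() == mic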

When was the Orlando meeting and who attended?  New packet types should be
avoided if only because you cannot tell what old implementations will do
with them.  With a new algorithm ID in an existing packet type, they would
at least know it was an encryption packet that they cannot handle.

I don't think anyone objects to using SHA-1 for the purpose of a MIC.

I really don't like overloading MICs on signature packets. That's a kluge,
and it offends my sensitive architect's aesthetics. OpenPGP already has
quite enough kluges in it. I'd prefer to stomp on a few rather than add
another. Adding in shared keys makes it a worse kluge. I can't see what
goodness comes from a null-signature packet.

Throwing 20 bytes at the end of a stream instead of making it a packet is
a worse kluge (imho).  A standard anonymous key assigned to "MIC
validation only" wouldn't make it worse and would allow this functionality
in a way that is actually compatible with existing implementations.

The goodness is that the code is there (and hopefully tested), so the
information, namely the hash result (which is already computed for
signatures when they are present), is simply inserted without the
signature computation step.  This takes only a few lines of code in most
implementations: the addition of a non-signature algorithm ID.  (I would
suggest ID 0, since it implied a null algorithm at one point.)
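
Concretely, something along these lines (field names made up; the real
structure would be the existing signature packet minus the pieces it no
longer needs):

    import hashlib

    MIC_ONLY_ALG = 0   # suggested "null" public-key algorithm ID

    def make_mic(data):
        # Same skeleton as a signature, but the hash result is stored
        # directly; the public-key signing step is simply skipped.
        return {"pubkey_alg": MIC_ONLY_ALG,
                "hash_alg": "SHA-1",
                "mic": hashlib.sha1(data).digest()}

    def check_mic(packet, data):
        return packet["mic"] == hashlib.sha1(data).digest()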

We could be doing interoperability tests on MICs using existing algorithms
next week.

I've been waiting for someone else to create this packet, and I'm tired of
waiting, so here's *my* definition of it. This new packet is like Packet 9
in semantics but consists of:

- Encrypted data, the output of the selected symmetric-key cipher
       operating in (XXX) mode.

Use the existing packet.  Specify a new algorithm number.

Or will no one ever want to do 3DES, CAST, or IDEA encryption with MIC?

Will the REQUIRED algorithms include this new algorithm?  Unless you do,
this is the only way you are going to get a MIC.

If you don't, but want the MIC to be SHOULD, the new packet ID is an old
packet (maybe with other fixes) but with appended MIC.

And I don't see the new algorithm(s) being adopted that quickly.  When is
SSL going to have Twofish?

- A 20-octet SHA-1 of the plaintext contents of the packet. Note that if
the contents of the encrypted packet are another packet (or packets), the
hash runs over the whole of them.

If you must do this outside of the existing signature packet, then make
just this the new packet, potentially with a corresponding 1-pass header
packet.

I confess that I am concerned about the possible implementation
difficulties here, but I'm also confused, because I don't see the problem.
Unless the packet is encoded with indefinite length, you know how long the
thing is. So you just subtract 20 from the length, and you know how much to
hash. I am willing to write in there that if the packet is coded with
indefinite length, the last chunk MUST include at least one byte of data
and the 20-byte hash. Does this help any implementation problems? Tom? Hal?
Werner?

Every encryption packet since 5.0 in many implementations uses the
indefinite-length format.  So start by assuming every packet in this new
format will too, for the same reason encrypters don't want to put the 20
bytes up front: it requires a rewind operation, so it won't work on a
stream (the old CTB/length had to be rewound to).
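
It can be done on a stream, to be fair, by holding back a 20-octet tail
while hashing; something like this (sketch only, not anyone's actual
decryptor):

    import hashlib

    def hash_all_but_mic(chunks):
        # Hash everything except the final 20 octets without knowing the
        # total length in advance: keep the last 20 bytes in reserve.
        h = hashlib.sha1()
        tail = b""
        for chunk in chunks:
            tail += chunk
            if len(tail) > 20:
                h.update(tail[:-20])
                tail = tail[-20:]
        return h.digest(), tail   # (computed hash, claimed 20-octet MIC)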

Your proposal also adds a problem for the encryptor, since the minimum
indefinite-length chunk is 512 bytes: what if there are only 500 bytes
left?  You could simply make the buffer 512+20 bytes and emit an end
packet with the final length, so it isn't that bad, but it must be
watched for.
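
That bookkeeping looks roughly like this (sketch only; the chunk-size
rules are simplified and the write_* callables are placeholders):

    def emit_with_mic(chunks, mic, write_partial, write_final, min_chunk=512):
        # Keep a reserve buffered so the final, definite-length chunk can
        # always carry some data plus the 20-octet MIC.
        buf = b""
        for chunk in chunks:
            buf += chunk
            while len(buf) >= 2 * min_chunk:
                write_partial(buf[:min_chunk])   # indefinite-length chunk
                buf = buf[min_chunk:]
        write_final(buf + mic)                   # end packet with its real length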

Further, there is no "validation" routine at the end of the existing
decryptors.  A good/bad message handler will have to be added, and it will
have to identify the encryption layer as the one detecting the fault.  If
you complain that signatures are a different class of validation, I would
counter that encryption is a different phylum.

But no one has said why the MIC can't simply be in its own packet, so I
get the virtual EOF for the encryption stream, then see the MIC packet
(i.e. a current signature packet with lots of stuff removed and a
different number) and pull 20 bytes from it.  So you have a separate
packet and packet handler.  It would be a MIC layer between the signature
and encryption layers, doing exactly one step fewer than the signature
layer.
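
In reader terms that flow is roughly (sketch; the packet tag and parsing
helpers are made up):

    import hashlib

    MIC_PACKET = 99   # placeholder tag for the stripped-down signature packet

    def decrypt_and_check(read_packet, decrypt):
        # The MIC layer sits between encryption and signatures: hash the
        # decrypted stream, then compare against the next packet's 20 octets.
        tag, body = read_packet()          # the encrypted-data packet
        plaintext = decrypt(body)
        digest = hashlib.sha1(plaintext).digest()
        tag, body = read_packet()          # packet following the virtual EOF
        if tag == MIC_PACKET:
            return plaintext, body[:20] == digest
        return plaintext, None             # no MIC present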

Let me throw in one more complication.  The above formats generally have
a fixed length, so you know where all the bytes are.  Extending to
validation packets would make the boundary marking the end of the message
hard to find.

I don't understand how adding a new packet type with a new method
(encryption-cum-MIC), with new algorithm IDs and formats that are all
still undefined, is less of a kludge than simply using a new algorithm ID
for the new encryption algorithm, specifying Algorithm 0, and storing the
SHA-1 result as a plain MPI in the existing structure.

Everyone really wants new code, new layers, new specifiers?  I don't like
writing code that much, and I prefer simplicity.  And having existing
working, tested code as a base is better than code written ex nihilo.

Does anyone have any functional problems (e.g. security holes) with
modifying the existing signature packet into a validation packet?

