
Re: [TLS] Last Call: <draft-kanno-tls-camellia-00.txt> (Additionx

2011-03-14 15:31:29
On 03/14/2011 06:28 PM, Martin Rex wrote:
Nikos Mavrogiannopoulos wrote:

This sounds like a pretty awkward decision, because the HMAC per record is sent in full (e.g. 160 bits for SHA-1), but the MAC on the handshake message "signature" is truncated to 96 bits. Why wasn't the record MAC truncated as well? In any case, saving a few bytes per handshake is of much less value than saving a few bytes per record. Was there any other rationale for the truncation?

Are you wondering why the HMAC on the TLS data records is sent
in its full beauty, while the TLS Finished.verify_data is a
truncated output of the PRF (which in the abstract definition
uses HMAC as the outermost function, but in the case of TLSv1.0
is actually the XOR of two different HMACs, each over half the secret)?
The reason might be the "secret" input to the HMAC, which in the
case of the TLS data records is a derived traffic key, while in the
case of the Finished.verify_data it is the "master secret" of the session.

That, I assume, was the fear, based on the second part of this
message from Dan Simon
   http://lists.w3.org/Archives/Public/ietf-tls/1996OctDec/0224.html
and the second part of this message from Hugo Krawczyk
   http://lists.w3.org/Archives/Public/ietf-tls/1996OctDec/0231.html 
Since the TLSv1.0 Finished message was defined based on the output
of the TLS PRF (a function with indefinite output length),
defining a truncation was inevitable.  :)
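The TLSv1.0 construction described above can be sketched as follows. This is a minimal illustration of the P_hash expansion and the MD5/SHA-1 XOR PRF from RFC 2246, not an interoperability-tested implementation; the function names are my own.

```python
import hmac
import hashlib

def p_hash(hash_name, secret, seed, length):
    """TLS P_hash expansion (RFC 2246, section 5): HMAC is iterated
    over A(i) values until enough output bytes are produced."""
    out = b""
    a = seed  # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, hash_name).digest()            # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hash_name).digest()  # HMAC(secret, A(i) + seed)
    return out[:length]

def tls10_prf(secret, label, seed, length):
    """TLSv1.0 PRF: XOR of P_MD5 and P_SHA1, each keyed with one
    half of the secret (the halves overlap if the length is odd)."""
    half = (len(secret) + 1) // 2
    s1, s2 = secret[:half], secret[-half:]
    md5_part = p_hash("md5", s1, label + seed, length)
    sha1_part = p_hash("sha1", s2, label + seed, length)
    return bytes(a ^ b for a, b in zip(md5_part, sha1_part))

# Since P_hash can emit output indefinitely, Finished.verify_data
# truncates the PRF stream to 12 bytes (96 bits):
#   verify_data = PRF(master_secret, "client finished",
#                     MD5(handshake_messages) + SHA1(handshake_messages))[:12]
```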

Indeed. The messages you list summarize that design decision
nicely. Concerns about the one-wayness of the MAC led to the
truncation: one-wayness is ensured by discarding output, at the
cost of a weaker MAC. I don't know whether the current
construction can be extended to a longer size without implications.

regards,
Nikos
_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf
