Eric Rescorla wrote:
If we move to a new, stronger crypto algorithm, then we should not
unreasonably spoil its properties.
Truncating a SHA-384 based PRF to 12 octets is like using
a sha384WithRsaEncryption signature with a 1024-bit RSA key:
it is an imbalanced pairing of algorithms and keys.
Again, I don't understand this: TLS already (as of TLS 1.0) truncated
the PRF down to 12 bytes, so we are already producing an output that is
substantially shorter than the digest that is the basis of the function.
If the security arguments for why that is good are valid (and FWIW I
think they are) then as far as I can tell they are
equally applicable to SHA-384.
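For reference, the truncation being discussed can be sketched as a toy implementation of the TLS 1.2 P_hash expansion from RFC 5246; the secret and seed values below are placeholders, not real handshake data:

```python
import hmac, hashlib

def p_hash(secret: bytes, seed: bytes, length: int, hash_name: str = "sha256") -> bytes:
    """TLS 1.2 P_hash expansion (RFC 5246, section 5), truncated to `length` octets."""
    out = b""
    a = seed  # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, hash_name).digest()        # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hash_name).digest()
    return out[:length]

# Finished.verify_data is the PRF output truncated to 12 octets,
# no matter how wide the underlying hash is.
secret = b"\x0b" * 48  # placeholder master secret
seed = b"client finished" + hashlib.sha256(b"handshake messages").digest()
verify_data = p_hash(secret, seed, 12)
```

Because P_hash output is generated in digest-sized chunks and then cut, the 12-octet value is simply a prefix of the full-width output.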
Truncating hashes/PRFs/HMACs is a trade-off between
collision resistance and how much of a clue the output
provides about the input.
A (20/0) trade-off provides the smallest possible clue,
but completely spoils the collision resistance (i.e. it becomes
useless for verification purposes).
A (20/20) trade-off retains the full collision resistance,
but provides the largest amount of clue for verification
of input parameters.
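The two extremes can be illustrated with a small sketch (SHA-1 is used here purely as an example):

```python
import hashlib

def truncated_digest(data: bytes, keep: int) -> bytes:
    """SHA-1 digest truncated to `keep` of its 20 octets."""
    return hashlib.sha1(data).digest()[:keep]

# (20/0): every input "verifies" against every other input --
# no collision resistance left, but the tag leaks nothing about the input.
assert truncated_digest(b"alpha", 0) == truncated_digest(b"beta", 0)

# (20/20): full collision resistance, but the tag hands a verifier
# the maximum amount of material to test candidate inputs against.
assert truncated_digest(b"alpha", 20) != truncated_digest(b"beta", 20)
```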
The trade-off that was chosen for SHA-1 was (20/12), or (160/96)
in bits, a ratio of about 1.67.
TLS uses a different trade-off, but I'm not aware of any rationale
for why the original trade-off in TLSv1.0/TLSv1.1 would have been
inappropriate. The logical choice in TLSv1.2 would therefore
have been to use (32/20) = 1.6 for a PRF based on SHA-256,
and any _other_ trade-off (such as the one in RFC 5246) should
require a _new_ justification.
IETF mailing list