
RE: [jose] Secdir review of draft-ietf-jose-json-web-signature-31

2014-09-22 10:32:13
 

 

From: jose [mailto:jose-bounces@ietf.org] On Behalf Of Richard Barnes
Sent: Sunday, September 21, 2014 5:32 PM
To: John Bradley
Cc: ietf@ietf.org; secdir; Jim Schaad; Tero Kivinen; Michael Jones; IESG;
jose@ietf.org; draft-ietf-jose-json-web-signature.all@tools.ietf.org
Subject: Re: [jose] Secdir review of draft-ietf-jose-json-web-signature-31

 

I think I may have erred by trying to write a treatise on which algorithms are 
vulnerable :)  Here's some updated text, trying to be more concise.

Jim: Your points about SHA-256 vs. SHA-512/256 and SHA-256 vs. SHA-3 don't 
really apply, since JOSE hasn't defined algorithm identifiers for SHA-512/256 
or SHA-3.

 

[JLS] Richard – are you planning to update this text when (not if) they are 
defined?  If not, then this remains part of the problem even if the currently 
defined identifiers do not expose it.  The same could also be said to be a 
non-problem for all of the ECDSA algorithms, since only one hash of any given 
length is defined.  (I will ignore the really fun problem for DSA and ECDSA, 
where a modulus operation is applied to the hash value, creating collisions 
within the same hash function and making the matching of hash lengths and key 
lengths of primary importance.)  However, as these algorithms will almost 
certainly be defined in the future, they merit inclusion among the potential 
problems.  I believe this should be included in the discussion, as it is much 
easier to do than to break the mask generation function of RSA-PSS.  (Breaking 
the same hash function twice is very non-trivial; having two hash functions 
that produce the same-length hash is much easier.)
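
To make the parenthetical point concrete, the following is a rough, 
non-normative Python sketch of the hash-truncation/reduction step that DSA and 
ECDSA apply (following SEC1/FIPS 186-4); the function name and the pairing of 
P-256 with SHA-512 are choices made purely for illustration.  When the digest 
is longer than the curve order, only the leftmost bits survive, so two 
distinct SHA-512 digests that agree in their leading 256 bits feed the 
identical value into the signature.

```python
import hashlib

# Order of the P-256 curve (secp256r1), a well-known public constant.
P256_ORDER = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551

def ecdsa_hash_input(message: bytes, hash_name: str, curve_order: int) -> int:
    """Return the integer that ECDSA actually signs/verifies: the digest
    truncated to the bit length of the curve order, then reduced modulo
    that order (the modulus step referred to above)."""
    digest = hashlib.new(hash_name, message).digest()
    e = int.from_bytes(digest, "big")
    excess_bits = max(0, len(digest) * 8 - curve_order.bit_length())
    if excess_bits:
        e >>= excess_bits          # keep only the leftmost bits of the digest
    return e % curve_order

# Two messages with different SHA-512 digests collide as ECDSA inputs whenever
# the truncated/reduced values coincide -- the effective strength is bounded by
# the curve order, not by the hash length.
e1 = ecdsa_hash_input(b"message one", "sha512", P256_ORDER)
e2 = ecdsa_hash_input(b"message two", "sha512", P256_ORDER)
print(e1 == e2)  # almost certainly False here; a collision only requires
                 # agreement in the surviving 256 bits, not the full 512.
```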


"""
# Signature Algorithm Protection

In some usages of JWS, there is a risk of algorithm substitution attacks, in 
which an attacker can use an existing signature value with a different 
signature algorithm to make it appear that a signer has signed something that 
they actually have not.  These attacks have been discussed in detail in the 
context of CMS {{RFC6211}}.  The risk arises when all of the following are 
true:


* Verifiers of a signature support multiple algorithms of different strengths

* Given an existing signature, an attacker can find another payload that 
produces the same signature value with a weaker algorithm

* In particular, the payload crafted by the attacker is valid in a given 
application-layer context

For example, suppose a verifier is willing to accept both "PS256" and "PS384" 
as "alg" values, and a signer creates a signature using "PS384".  If the 
attacker can craft a bogus payload whose SHA-256-based signature value is the 
same as the SHA-384-based signature value of the legitimate payload, then the 
"PS256" signature over the bogus payload will be identical to the "PS384" 
signature over the legitimate payload.
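
As an illustration of the first precondition (a verifier willing to accept 
several "alg" values), a minimal verification front-end might look like the 
Python sketch below.  The helper names, the verify_fn callback, and the 
accepted-algorithm set are assumptions made for illustration and are not 
defined by any JOSE specification.  The point is only that the algorithm 
actually used for verification is the one named in the protected header of the 
presented message, so a bogus header/payload pair whose "PS256" signature 
equals the legitimate "PS384" signature would verify.

```python
import base64
import json

def b64url_decode(data: str) -> bytes:
    # JWS uses base64url without padding; restore padding before decoding.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

ACCEPTED_ALGS = {"PS256", "PS384"}   # a verifier policy like the one in the example

def verify_jws(compact_jws: str, key, verify_fn):
    """Hypothetical front-end; verify_fn(alg, key, signing_input, signature)
    stands in for the actual cryptographic verification."""
    protected_b64, payload_b64, signature_b64 = compact_jws.split(".")
    header = json.loads(b64url_decode(protected_b64))
    alg = header["alg"]
    if alg not in ACCEPTED_ALGS:
        raise ValueError("algorithm not accepted: " + alg)
    signing_input = (protected_b64 + "." + payload_b64).encode("ascii")
    signature = b64url_decode(signature_b64)
    # The algorithm applied is the one named in the presented header; an
    # attacker who finds a bogus header/payload pair whose "PS256" signature
    # equals the legitimate "PS384" signature value wins.
    if not verify_fn(alg, key, signing_input, signature):
        raise ValueError("signature verification failed")
    return json.loads(b64url_decode(payload_b64))
```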

 

There are several ways for an application using JOSE to mitigate algorithm 
substitution attacks.

The simplest mitigation is to not accept signatures using vulnerable 
algorithms.  Algorithm substitution attacks do not arise for all signature 
algorithms: the only algorithms defined in JWA 
{{I-D.ietf-jose-json-web-algorithms}} that may be vulnerable to algorithm 
substitution attacks are the RSA-PSS algorithms ("PS256", etc.).  An 
implementation that does not support RSA-PSS is not vulnerable to algorithm 
substitution attacks.  (Obviously, if other algorithms are added, then they may 
introduce new risks.)

In addition, substitution attacks are only feasible if an attacker can compute 
pre-images for the weakest hash function accepted by the recipient.  All JOSE 
algorithms use SHA-2 hashes, for which there are no known pre-image attacks as 
of this writing.  Unless and until practical attacks against SHA-2 hashes 
appear, even a JOSE implementation that supports RSA-PSS is safe from 
substitution attacks.
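
A minimal sketch of this first mitigation, assuming a hypothetical policy 
check in Python (the function and constant names are illustrative only): the 
verifier pins a single expected algorithm per key or context and excludes the 
RSA-PSS identifiers from its allowlist, so the precondition for substitution 
never arises.

```python
SAFE_ALGS = {"RS256", "ES256"}   # example allowlist with no RSA-PSS ("PS*") entries

def is_acceptable(header: dict, expected_alg: str) -> bool:
    """Pin the algorithm per key/context: the header's "alg" may only
    confirm what the verifier already expects, never choose it."""
    return expected_alg in SAFE_ALGS and header.get("alg") == expected_alg
```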

 

Without restricting algorithms, there are also mitigations at the JOSE and 
application layers.  At the JOSE layer, an application could require that the 
"alg" parameter be carried in the protected header.  (This is the approach 
taken by {{RFC6211}}.)  The application could also include a field reflecting 
the algorithm in the application payload and require that it match the "alg" 
parameter during verification.  (This is the approach taken by PKIX 
{{RFC5280}}.)
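
A rough sketch of the application-layer variant follows, in Python; the 
payload member name "expected_alg" is invented here for illustration and is 
not defined by JWS or JWA.  The check requires the algorithm named in the 
(signed) protected header to be reflected inside the payload itself, in the 
spirit of RFC 6211 and PKIX.

```python
import base64
import json

def b64url_decode(data: str) -> bytes:
    # JWS uses base64url without padding; restore padding before decoding.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def check_alg_binding(protected_b64: str, payload_b64: str) -> None:
    """Require "alg" in the protected header and require the payload to
    reflect the same algorithm (via a hypothetical "expected_alg" member)."""
    header = json.loads(b64url_decode(protected_b64))
    payload = json.loads(b64url_decode(payload_b64))
    if "alg" not in header:
        raise ValueError('"alg" must appear in the protected (signed) header')
    if payload.get("expected_alg") != header["alg"]:
        raise ValueError("payload does not reflect the signing algorithm")
```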

 

Of these mitigations, the only sure solution is the first: do not accept 
vulnerable algorithms.  Signing over the "alg" parameter (directly or 
indirectly) only makes the attacker's work more difficult, by requiring that 
the bogus payload also contain bogus information about the signing algorithm; 
it does not prevent attack by a sufficiently powerful attacker.
"""
