William Whyte <wwhyte(_at_)baltimore(_dot_)ie> writes:
>>Currently both RFC 2459 and CMS refer to RFC 2313/2437 for the encoding of
>>RSA signatures/encrypted data (RFC 2459, 7.2.1; CMS, 12.2.1 and 12.2.2 - what
>>I'm about to describe applies to other algorithms as well, but I'll stick
>>with RSA to keep it simple). These RFCs assume that the encoded value will be
>>the same length as the modulus, zero-padding the value if required (RFC 2437,
>>7.2.1 and 8.1.1); however, when this padding is used the encoded value no
>>longer follows the DER.
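The fixed-length conversion RFC 2437 specifies (I2OSP) can be sketched as follows. `minimal_os` is a hypothetical helper, not anything from the RFCs, included only to show the length mismatch when the value's top octet happens to be zero:

```python
def i2osp(x: int, k: int) -> bytes:
    """RFC 2437 I2OSP: encode integer x as exactly k octets,
    left-padding with zero bytes if necessary."""
    if x >= 256 ** k:
        raise ValueError("integer too large for k octets")
    return x.to_bytes(k, "big")

def minimal_os(x: int) -> bytes:
    """Minimal-length octet encoding for comparison (what the content
    of a DER INTEGER would look like for a non-negative value whose
    high bit is clear)."""
    return x.to_bytes(max(1, (x.bit_length() + 7) // 8), "big")

# With a 2048-bit modulus, k = 256.  A signature value below 2^2040
# has a zero top octet: I2OSP emits 256 octets, the minimal form 255.
sig = 1 << 2039
fixed = i2osp(sig, 256)      # 256 octets, leading 0x00
minimal = minimal_os(sig)    # 255 octets, leading zero dropped
```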
>I'm not sure this is right. The signature is an octet string or a bit string,
>not an integer, and it's perfectly legal to have an OCTET STRING or BIT STRING
>with leading null bytes.
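To illustrate the point in a hypothetical sketch: DER's minimality rules apply to INTEGER contents, not to string contents, so a zero-padded value wrapped in an OCTET STRING is perfectly valid DER (short-form lengths only here, so this assumes fewer than 128 content octets):

```python
def der_octet_string(data: bytes) -> bytes:
    # Tag 0x04, short-form length; leading zero bytes in `data` are
    # just data - nothing in DER forbids them inside a string type.
    assert len(data) < 128, "sketch handles short-form lengths only"
    return b"\x04" + bytes([len(data)]) + data

# A value padded out with leading zeros, as RFC 2437 requires:
padded = b"\x00\x00\x2a"
encoded = der_octet_string(padded)
```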
Ah, of course! This only leaves the signatures which have internal structure
(e.g. DSA, a SEQUENCE containing two INTEGERs), and those have their own
encoding rules which don't clash with RFC 2459/CMS.
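A minimal sketch of that internal structure - a DER SEQUENCE of two INTEGERs, as a DSA signature uses - assuming values small enough for short-form lengths. Here the INTEGER rule does apply: content octets are minimal two's complement, so redundant leading zeros are stripped and a single 0x00 is kept only when the high bit is set:

```python
def der_integer(x: int) -> bytes:
    """DER INTEGER (non-negative): tag 0x02, short-form length,
    minimal two's-complement content octets."""
    body = x.to_bytes(max(1, (x.bit_length() + 7) // 8), "big")
    if body[0] & 0x80:
        body = b"\x00" + body  # keep value positive
    return b"\x02" + bytes([len(body)]) + body

def der_sequence(*elements: bytes) -> bytes:
    """DER SEQUENCE: tag 0x30, short-form length (sketch only)."""
    body = b"".join(elements)
    assert len(body) < 128, "sketch handles short-form lengths only"
    return b"\x30" + bytes([len(body)]) + body

# DSA signature: SEQUENCE { r INTEGER, s INTEGER }
# 0x80 gets a leading 0x00 so it isn't read as negative; 0x7F doesn't.
sig = der_sequence(der_integer(0x7F), der_integer(0x80))
```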
(Didn't PKIX at one point include a requirement for DH values, encoded as bit
strings, to be shifted up so the MSB was the first nonzero bit in the string?
Thankfully this vanished soon after it appeared, because it would've been a
right pain to implement.)
Is anyone aware of any implementations which break if the signature/encrypted
data value isn't padded out as required? You'd probably have to go out of your
way to write code which does this; I'm wondering whether any code actually
complains if the data isn't exactly the right size. The reason I'm asking is
that I've always encoded things in the minimum number of bytes (as if it were
a DER INTEGER) rather than padding with zeroes, which so far hasn't caused any
problems.