Trevor Perrin <Tperrin(_at_)sigaba(_dot_)com> writes:
>I still don't understand why you can't do two layers in a single pass. I'd
>assume that at a certain point the inner layer is finished with the previous X
>bytes, so it could pass control to the outer layer who could chew on them, and
>when it's consumed that chunk, pass control back to the inner layer, etc..
>But I don't really know anything about ASN.1, so never mind..
I've been going through this in private mail with someone else, so I'll be lazy
and cut&paste:
-- Snip --
Changing a single byte of the inner content can change several bytes of the
outer content due to the variable-length length-of-length encoding, and there
are some data lengths which can never be achieved, e.g. when you move from a
short-encoded data length to a long-encoded one and adding a single byte to the
data also increases the length-of-length value. When you combine this with
data-blocking requirements and requirements for PKCS #5 padding, it becomes
unworkably complex to implement and test unless you buffer the entire message,
in which case you're just doing a standard two-pass encoding.
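[A minimal sketch of the length-of-length problem, not from the original mail; the function names are hypothetical, but the encoding rules are the standard DER definite-length forms:]

```python
# Hypothetical sketch (not cryptlib code): DER definite-length encoding,
# showing how the size of the length field itself varies with content size.

def der_length(n: int) -> bytes:
    """Encode a DER definite-form length for n content octets."""
    if n < 0x80:                                # short form: single octet
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body     # long form: length-of-length octet first

def wrap(tag: int, content: bytes) -> bytes:
    """Wrap content in a tag-length-value triple."""
    return bytes([tag]) + der_length(len(content)) + content

# Adding one byte of inner content can grow the outer header as well:
inner_a = wrap(0x04, b"\x00" * 127)   # 127-byte content, short-form length: 2-byte header
inner_b = wrap(0x04, b"\x00" * 128)   # 128-byte content, long-form length:  3-byte header
assert len(inner_a) == 129
assert len(inner_b) == 131            # jumps by 2, so no input produces a
                                      # 130-octet encoding -- that total length
                                      # is simply unreachable
```

The two-octet jump at the short-form/long-form boundary is exactly the case described above: a one-byte change in the inner layer moves more than one byte in the outer layer, which is what defeats the single-pass hand-off between layers.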
>indefinite length constructed form.
That's the magic phrase. If you use definite-length encoding, it becomes an
unsolvable problem in the general case. You have to do this (use the
definite-length form) because there are implementations which can't process
indefinite-length data ("Gee, we never thought anyone would use that!") or
process it really badly (bad buffer management, "Why is our code two orders of
magnitude slower than yours?").
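[For contrast, a sketch of the indefinite-length constructed form mentioned above, not from the original mail; the function name is hypothetical, and for brevity it assumes each chunk is under 128 octets:]

```python
# Hypothetical sketch: BER indefinite-length constructed encoding. No
# lengths need to be known up front, which is what makes single-pass
# streaming possible -- at the cost of requiring the receiver to parse
# until the end-of-contents octets.

def indefinite_wrap(tag: int, chunks) -> bytes:
    """Emit the tag with the constructed bit set and indefinite length
    (0x80), then each chunk as a short definite-length primitive, then
    the end-of-contents octets."""
    out = bytes([tag | 0x20, 0x80])                 # constructed, indefinite length
    for chunk in chunks:                            # assumes len(chunk) < 128
        out += bytes([tag, len(chunk)]) + chunk
    return out + b"\x00\x00"                        # end-of-contents marker

enc = indefinite_wrap(0x04, [b"abc", b"defg"])
# Layout: 24 80 | 04 03 'abc' | 04 04 'defg' | 00 00
```

Because each chunk carries its own small header and the whole thing is closed by the 00 00 marker, the writer never has to revisit earlier output when more data arrives, which is why the definite-length requirement above is what forces the two-pass encoding.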
-- Snip --
Peter.