"Bonatti, Chris" <BonattiC(_at_)ieca(_dot_)com> writes:
>I have a bit of a soft spot for "useful options". If they aren't implemented,
>they'll die on their own. We never innovate if we don't strive, and all that.
>A lot of proprietary compression structures DO store this value, so it seems
>like there is some weight on the side of it being practical. I don't think
>there are actually that many streaming scenarios in S/MIME. The dominant
>scenario is still that of e-mail.
A number of users of my code are doing streaming S/MIME (email gateways and the
like) where they really have no idea how large a message is and can't afford to
buffer more than a few hundred kB. (I'm not trying to say here that everyone
should do X just because I see a need for it, just pointing out that there is
real use of streaming going on out there; in fact I've made several changes to
my code in the last year or so specifically to handle one-pass streaming
operations.) Perhaps the dominant scenario for end-user S/MIME is all-at-once
send/receive, but for things like S/MIME gateways there's a fair amount of
streaming with one-pass processing constraints, because the average gateway
can't hold in main memory the 1000 20MB Powerpoint files the salesdroids just
sent out.
Having said that, I don't want to discount the option, but it would need some
thought as to how you're going to manage interoperability - the receiver would
need to publish some sort of S/MIME capability info to say "I need to be sent
uncompressed size info".
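A sketch of how that negotiation might look on the sender side, assuming the
recipient's advertised SMIMECapabilities are available as a set of OID strings.
The zlib OID is the real id-alg-zlibCompress value; the with-size OID and the
function name are invented here purely for illustration.

```python
# Real OID from the compression spec (id-alg-zlibCompress):
ZLIB_OID = "1.2.840.113549.1.9.16.3.8"
# Hypothetical OID for a "zlib with uncompressed-size info" variant:
ZLIB_WITH_SIZE_OID = "1.3.6.1.4.1.99999.1"

def pick_compression_oid(recipient_caps):
    """Use the with-size variant only if the recipient has advertised
    support for it; otherwise fall back to plain zlib, which every
    conforming implementation must handle."""
    if ZLIB_WITH_SIZE_OID in recipient_caps:
        return ZLIB_WITH_SIZE_OID
    return ZLIB_OID
```

Without some signal like this, a sender has no way of knowing whether the
extra size field will be understood, which is the interop problem above.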
>The real problem with this is the timing. I thought this completed WG Last
>Call back in November. Can we still influence this?
Not at the current stage, AFAIK, and it doesn't seem like a terribly critical
change. However, it shouldn't be too hard to write a short RFC defining an
additional OID for foo-with-size-info if there's real demand for it and the
potential interop issues can be resolved. (Note that the current draft is just
a framework for doing compression, with one standardised algorithm defined to
provide interoperability; there's nothing to stop anyone else from defining
their own algorithms and dropping them in alongside the existing one, just as
you can for the other content types.)
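For what a foo-with-size-info encoding might amount to, here's one possible
shape (entirely hypothetical, not a registered format): prefix the compressed
stream with the uncompressed length so a receiver can pre-allocate its output
buffer.

```python
import struct
import zlib

def compress_with_size(data):
    """Hypothetical 'zlib with size info' encoding: an 8-byte
    big-endian uncompressed-length header, then the zlib stream."""
    return struct.pack(">Q", len(data)) + zlib.compress(data)

def decompress_with_size(blob):
    """Recover the data; the header lets the receiver size its output
    buffer before decompressing and sanity-check the result after."""
    size = struct.unpack(">Q", blob[:8])[0]
    data = zlib.decompress(blob[8:])
    if len(data) != size:
        raise ValueError("size header does not match decompressed length")
    return data
```

The catch, as noted above, is that a one-pass sender can't emit that header
until it has seen the whole message, so this only works for sender-side
buffered (or two-pass) generation.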