I'm probably a bit biased since I wrote a bunch of the text, but I think we
did
a reasonable job of documenting the risks of active content back when MIME was
first specified. And yet active content in the form of email viruses is one of
the major operational problems, if not THE operational problem, we face in
email today. We did our bit and nobody listened.
I have a slightly different take - we did our bit and the major operating
system vendor listened and irresponsibly chose to ignore us, putting their
desire to have their applications' data formats (including executable content)
transferred over email ahead of their customers' need for security.
I was explicitly told this by a former program manager for that major
operating system vendor, though he didn't seem to think it was irresponsible.
He called it a marketing decision.
Ah, but either the client has a direct connection to the MX or it has to
relay the message to some MTA which may or may not be under the control of
the client. So the relaying case introduces the same problem that OPES had
when it was first proposed - that of having an intermediary make
transformations to the content that aren't authorized by either party to
the conversation.
The case I was thinking of was the one where caching of conneg information by
intermediaries eventually brings the information back to the server adjacent
to
the originating client. So the transformation is still done on the originating
client or on a server very close to it.
I think it's a stretch to believe that caches are going to solve this problem.
It's true that if A keeps sending messages to B then eventually A's MTA will
be able to cache B's recipient information. But this assumes that the
intervening MTAs support CONNEG (and cache CONNEG information) - which itself
seems like a stretch because, as you say, there's no business case for them
doing conversion. It's also a stretch to believe that this is going to work
well
for arbitrary pairs of senders and recipients - it might work for those pairs
who
exchange lots of messages, not so well for those pairs that occasionally
exchange
messages. And of course there's the problem that CONNEG information can be
expected to change over time, but there's no mechanism in the current proposal
to timeout caches. Frankly the method of returning capabilities in a bounce
message seems more effective and more reliable because it doesn't rely on
intermediaries.
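To make the cache-expiry gap concrete, here is a minimal sketch of the kind of time-to-live cache an MTA would need for recipient capabilities - the mechanism the current proposal doesn't define. All names here (CapabilityCache, ttl_seconds, etc.) are illustrative, not from any specification.

```python
import time

class CapabilityCache:
    """Hypothetical TTL cache for recipient CONNEG capabilities."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock            # injectable clock, handy for testing
        self._entries = {}            # recipient -> (capabilities, stored_at)

    def store(self, recipient, capabilities):
        self._entries[recipient] = (capabilities, self.clock())

    def lookup(self, recipient):
        entry = self._entries.get(recipient)
        if entry is None:
            return None
        capabilities, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            # Entry is stale: the recipient's capabilities may have
            # changed, so drop it and force a fresh query.
            del self._entries[recipient]
            return None
        return capabilities
```

Without something like the expiry check above, a cached entry silently outlives changes to the recipient's actual capabilities - which is exactly the staleness problem noted here.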
The only way I see to make the CONNEG SMTP proposal work end-to-end is for
all of the intermediate MTAs (those that don't have direct knowledge of the
recipient capabilities) to open up SMTP connections to the next hop *in real
time* -
so when the sender asserts CONNEG in a RCPT command then the MTA has to stop
acting like a relayer and start acting like a proxy - finding an MTA for the
next hop, opening up a connection, sending EHLO and seeing if it supports
CONNEG, sending MAIL and all previous RCPTs, and finally sending the RCPT
that has the CONNEG option, just so that it can propagate the CONNEG response
back to the sender's UA. Even then it will be very slow, it introduces
race conditions due to timeouts, and it still requires all of the intermediate
MTAs to support CONNEG in proxy mode.
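The command replay described above can be sketched as follows - a hypothetical helper that, given the EHLO keywords advertised by the next hop, builds the synchronous command sequence the relay would have to issue before it could propagate the CONNEG response. The hostname, function name, and keyword spelling are assumptions for illustration, not taken from the proposal.

```python
def proxy_commands(next_hop_ehlo_keywords, sender, prior_rcpts, conneg_rcpt):
    """Return the SMTP commands an intermediate MTA must replay in real
    time to probe the next hop, or None if the next hop does not
    advertise CONNEG (in which case propagation is impossible)."""
    if "CONNEG" not in next_hop_ehlo_keywords:
        return None                              # fall back to plain relaying
    cmds = ["EHLO relay.example"]                # relay hostname is illustrative
    cmds.append(f"MAIL FROM:<{sender}>")
    for rcpt in prior_rcpts:                     # replay every earlier recipient
        cmds.append(f"RCPT TO:<{rcpt}>")
    cmds.append(f"RCPT TO:<{conneg_rcpt}> CONNEG")   # the actual probe
    return cmds
```

Even this toy version makes the cost visible: the relay blocks on a live connection for the whole sequence, and a single non-supporting hop (the None case) breaks the end-to-end chain.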
I haven't followed the fax discussions, but weren't the fax people looking for
a mechanism that would work predictably and reliably - e.g. if sender and
recipient both have (say) high-res capability then the document is always
transferred in high-res - and a mechanism that would give the sender the option
of requiring that the recipient have certain minimum capabilities before
sending
the message? (e.g. don't bother sending if recipient doesn't have color?)
I just don't see a way to make that work through the existing SMTP
infrastructure.
Keith