
Re: [ietf-smtp] Fwd: New Version Notification for draft-fenton-smtp-require-tls-00.txt

2016-01-11 10:10:45

--On Monday, January 11, 2016 10:14 +0100 "Rolf E. Sonneveld"
<R(_dot_)E(_dot_)Sonneveld(_at_)sonnection(_dot_)nl> wrote:

Now, suppose D wants to send a message to
user(_at_)example(_dot_)com and wants to send it with transport
encryption. RFC 5321 is fairly clear that it is free to pick
arbitrarily between A and B.  If it happens to select A, then
all is well.   But, if it selects B and successfully opens
the SMTP connection, B will not advertise REQUIRETLS and your
spec, as I read it, then requires that D stop, close the
session, and tell the originator that delivery was not
possible.
Isn't this a generic problem when MX hosts for a domain provide
different 'levels of service'? I mean: if MX host A advertises
a SIZE of 10 Mbyte and MX host B advertises a SIZE of 15
Mbyte, a message of 12 Mbyte may or may not reach the
destination, depending on which MX host is used to transfer
the message.
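The SIZE variant of the same problem, using the numbers above. This is a minimal sketch of the RFC 1870 check (per that spec, an advertised limit of 0 means no fixed maximum); the host labels are illustrative:

```python
def fits(size_limit_bytes, message_bytes):
    """Would a message of this size be accepted under an
    EHLO SIZE limit?  0 means no fixed limit (RFC 1870)."""
    return size_limit_bytes == 0 or message_bytes <= size_limit_bytes

MB = 1024 * 1024
# Illustrative limits matching the example in the text.
limits = {"mx-a": 10 * MB, "mx-b": 15 * MB}
message = 12 * MB
outcome = {mx: fits(limit, message) for mx, limit in limits.items()}
# The 12 Mbyte message fits at mx-b but not at mx-a, so delivery
# depends entirely on which MX host happens to be chosen.
```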

And, to a lesser extent, the difference in 'service' applies
to support for DSN and other extensions as well. Isn't it
primarily the responsibility of the domain owner to use MX
hosts with equal characteristics? Granted, the Sender may
experience inconsistent behaviour when the Receiver chooses to
use MX hosts with different characteristics, but the question
is: do we have to take this into account when designing (new)
protocols? I think the answer is 'yes, but to what extent?'

Right.   Of course it is.  But, at least from my point of view,
we've historically optimized Internet email in favor of messages
getting through.  When we've had to choose between robust
delivery and absolutely consistent service offerings, we've
generally chosen the robust delivery.  If we've had a choice
between a cruddy gateway and not being able to move messages
between some other system and Internet mail, we've usually
embraced the cruddy gateway.  Our historical response to a
message with fouled-up headers has been to deliver it based on
envelope information and let the recipient system sort things
out.  Until MIME and the SMTP extension model came along, we
didn't even have a clear requirement that messages contain
822-ish bodies to be delivered (although there were wide
differences in interpretation about what to do when they
didn't).

Postel's robustness principle has been really important to the
design and our standards have been written on a model of "if you
deviate, the behavior is undefined and we don't make any
promises about what will happen" rather than "follow our rules
or abandon hope of message delivery".

We've taken several steps away from those principles in recent
years, including the notion that it is ok for relays to open
messages, review them for signs of spam, malware, and other bad
behavior, and treat the messages harshly if those signs appear.
But, for legitimate messages, we've mostly still been on track
with "try to deliver it no matter how messed up things are".
Even SIZE was thought of as more of a "these are my limitations,
try to adjust to them" statement than as a prohibition (you may
recall that it was specified partially in the context of
assumptions that message/partial would be adopted and deployed,
allowing a sender who encountered size limitations to break the
message into chunks rather than bounding/rejecting it).
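A sketch of the message/partial idea mentioned above (RFC 2046, section 5.2.2): split a body into numbered fragments so that each fits under an advertised limit. Real fragments would also carry the id/number/total parameters in MIME headers; this sketch keeps only the fragment bookkeeping:

```python
def split_for_partial(body: bytes, max_fragment: int):
    """Split a message body into message/partial-style fragments.
    Each fragment records its number and the total count so the
    receiving system can reassemble them in order."""
    total = (len(body) + max_fragment - 1) // max_fragment  # ceiling division
    return [
        {"number": i + 1, "total": total,
         "payload": body[i * max_fragment:(i + 1) * max_fragment]}
        for i in range(total)
    ]
```

A sender hitting a SIZE limit could, under that assumption, fragment rather than bounce: concatenating the payloads in fragment order reconstructs the original body.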

I see moves in the direction of "if you don't implement this,
the mail isn't going to go through" as bringing two problems
with them:

(i) They will tend to divide the Internet into islands,
separating those who implement from those who haven't, refuse
to, or are prohibited from doing so by some regulatory process.
We can probably stand an island or two, especially when their
boundaries are transitional and inclusive, but the problem with
islands of this type is that they are determined by the
intersection of requirements rather than each function being
independent.   Certainly several alternatives to the Internet
(not just for email) had experience with islands determined by
option and parameter profiles.  They
became, in Marshall Rose's words, the road kill on the
information superhighway.

(ii)  They reinforce any inclinations powerful providers might
have to say "I've got 5 gadzillion mailboxes; if you want to
communicate with any of them, you are going to use my protocol
profile and options".   If those providers adopt
different profiles, we fall back to the days where mail sending
systems have to know which destinations maintain which profiles
and adjust what they send accordingly (the WKS experience
notwithstanding, we probably know how to do that, but it is a
pain and maybe not as robust as we would like, especially where
distribution lists or mailing list exploders are concerned).
More important, it risks creating an environment in which users
have to maintain mailboxes on separate systems to communicate
with the users of those systems.  Those of us who have
been there (and I think that likely includes you and Jim)
understand why it is not a good way to have to live.

Now that doesn't mean we shouldn't do it if it is really
important.  The email i18n community concluded that, for them,
the SMTPUTF8 extensions actually were.   Frankly, I still wonder
whether we would have been better off sticking with basic Latin
addresses and focusing more on 5322 display names and maybe a
non-mandatory SMTP extension to include those names in the
envelope and, even though I usually think they/we made the right
decision, I will probably continue to wonder until SMTPUTF8 is
so widely deployed that we don't need to worry about it as an
island any more.

However, for each new mandatory, "this may create another set of
islands and it narrows my configuration options and choices and
management of fallback MX servers" feature, I think we have to
think at least as carefully about whether the value-added is
real and whether it justifies the costs and risks to Internet
mail as a whole as about whether we can make the feature work
among cooperating parties.  I know this puts me in the minority
in the IETF right now, but I don't think that "improves privacy"
is sufficient to justify those costs, no matter how high or
risky they are.  In that context, analyses like John Levine's
that address the question of how much incremental privacy such a
feature buys and my continuing concern that, for most types of
attackers, it has historically been much easier to compromise
hosts and servers than the transit network are really important,
even if they just contribute to a better understanding of the
attacks we are protecting against or the cost-benefit tradeoffs.


ietf-smtp mailing list
