Paul Smith <paul(_at_)pscs(_dot_)co(_dot_)uk> wrote:
> On Sat, 17 Sep 2005 10:55:07 +0100, John Leslie <john(_at_)jlc(_dot_)net> wrote:
>> after DATA -- instead, IMHO, folks are saying that receiving SMTP
>> servers that _average_ more than a few seconds per email after DATA
>> risk running out of resources: thus you shouldn't do this unless you
>> know what you're doing.
> Actually, they are saying that SMTP servers that average more than a
> few seconds per email after DATA risk *making the client* run out of
> resources... The servers can manage resources easily enough by
> restricting incoming connections etc. if they need to.
> This is, potentially, a good DoS attack on the client.
(I shall try to be gentle...)
We cannot rewrite rfc2821 to protect sending SMTP clients from slow
receiving SMTP servers, and (IMHO) we should not try.
2821bis is defining a _protocol_ for clients and servers to talk to
each other. It recognizes a need for timeouts, and sets some bounds on
them. If this were a WG charged with "improving" SMTP, it would be
appropriate to argue whether those bounds should change -- though I
personally doubt changing them would improve much with legacy clients
and servers remaining part of the system for at least five years.
We are not a WG. We are not charged with "improving" SMTP. Any
changes we might make will take five years or more to percolate into
general use.
Furthermore, _servers_ which restrict incoming connections risk
losing emails; while _clients_ which restrict outgoing connections
still have other ways to get the email out. (In particular, though
it's quite orthogonal to the SMTP protocol, clients could sort into
different queues based on experience with the slowness of the servers
they send to.)
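The queue-sorting idea is easy to sketch. The following is purely illustrative (the class, threshold, and queue names are mine, not any MTA's design): keep a running average of each destination's reply latency and route known-slow domains to a separate queue, which can then be given longer timeouts and fewer concurrent connections.

```python
# Hypothetical sketch of sorting outbound mail into "fast" and "slow"
# queues by observed per-destination latency. Names are illustrative.
from collections import defaultdict

QUEUE_SLOW_THRESHOLD = 5.0  # seconds of average reply latency


class QueueSorter:
    def __init__(self):
        # exponentially weighted average of reply latency per domain
        self.avg_latency = defaultdict(float)

    def record(self, domain, latency):
        prev = self.avg_latency[domain]
        self.avg_latency[domain] = 0.8 * prev + 0.2 * latency

    def queue_for(self, domain):
        # known-slow destinations go to a queue that can be run with
        # longer timeouts and fewer concurrent connections
        if self.avg_latency[domain] > QUEUE_SLOW_THRESHOLD:
            return "slow"
        return "fast"


sorter = QueueSorter()
sorter.record("fast.example", 0.3)
sorter.record("sluggish.example", 30.0)
print(sorter.queue_for("fast.example"))       # fast
print(sorter.queue_for("sluggish.example"))   # slow
```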
I believe we already recognize there may be situations in which
clients need to enforce timeouts shorter than the recommended values.
Also,
we recognize that clients may choose to limit retransmission attempts.
These are not protocol issues: the protocol specifies how client and
server will interact when they _do_ use the SMTP protocol.
> If the 'keep alive' *trick* discussed here does work, you could
> potentially make a client at a large provider like Yahoo open up
> 100 connections to a dummy mail server set up to receive mail. That
> mail server then 'keeps alive' those connections indefinitely,
> stopping the mail client from sending mail to anyone else.
Thus, clients _now_ must protect themselves against this threat.
It's perfectly OK for you to argue that this is a nasty trick; and
were anyone to propose a change to 2821 to _require_ a longer timeout
to final (non-continuation) reply, it would be appropriate to argue
against that. But in the meantime, clients should use available means
to protect themselves.
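One such available means can be sketched as follows (a minimal illustration, not any MTA's code): read a reply under a single wall-clock deadline that runs to the *final* reply line, so that continuation lines ("250-...", per RFC 2821's multiline reply syntax) cannot reset the timer.

```python
# Sketch only: a reply reader whose deadline covers the whole reply,
# so a server drip-feeding continuation lines cannot hold the client
# indefinitely. Names and limits are illustrative.
import socket
import time


def read_reply(sock, overall=120.0):
    """Read one (possibly multiline) SMTP reply. The deadline covers
    the whole reply: "250-..." continuation lines do NOT reset it."""
    deadline = time.monotonic() + overall
    buf = b""
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError("no final reply line before deadline")
        sock.settimeout(remaining)   # per-read timeout shrinks each pass
        chunk = sock.recv(4096)      # socket.timeout also ends the wait
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
        # A final line is "NNN text" (or bare "NNN"); continuation
        # lines are "NNN-text" (RFC 2821 multiline reply syntax).
        for line in buf.split(b"\r\n")[:-1]:
            if len(line) >= 3 and line[:3].isdigit() \
                    and line[3:4] in (b" ", b""):
                return buf.decode("ascii", "replace")


# Demo: a well-behaved "server" sends a continuation, then a final line.
client, server = socket.socketpair()
server.sendall(b"250-progress, please hold\r\n250 OK\r\n")
print(read_reply(client, overall=5.0))
client.close()
server.close()
```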
But nothing we could write into 2821bis would be sufficient to
protect you from this possible threat, and IMHO we shouldn't even
be trying to. I'd argue that currently-available means are sufficient.
> That's why I think the RFC 2821 timeout should be specified as being
> to the *final* reply code. It doesn't look as if that's the way it's
> implemented at the moment in many cases, which could lead to a DoS
> attack as described above.
Then it's a problem of implementation, not of specification. An
implementor is free to impose a timeout on the entire process, at the
server end and/or the client end.
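At the client end, "a timeout on the entire process" could look something like the following (an assumed design, with illustrative names and numbers): one budget shared by every step of a transaction, clipping each per-command timeout so the total can never exceed an overall limit.

```python
# Hedged sketch of an overall transaction budget at the client end.
# Nothing here is from any particular implementation.
import time


class OverallBudget:
    """One budget shared by every step, so EHLO + MAIL + RCPT + DATA
    together can never exceed `limit_secs`, whatever the individual
    per-command timeouts are."""

    def __init__(self, limit_secs):
        self.deadline = time.monotonic() + limit_secs

    def step_timeout(self, per_command_cap):
        """Timeout for the next command: the normal per-command cap,
        clipped so the transaction as a whole stays inside the budget."""
        left = self.deadline - time.monotonic()
        if left <= 0:
            raise TimeoutError("transaction exceeded its overall limit")
        return min(per_command_cap, left)


budget = OverallBudget(10.0)
print(budget.step_timeout(300.0))  # clipped to at most 10 seconds
```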
--
John Leslie <john(_at_)jlc(_dot_)net>