
Re: Abort data transfer?

2009-10-21 16:42:05

Ned Freed wrote:

It just seems like bad design to assume some input will never happen.

Of course it is, which is why it's a good idea to have facilities in servers to address this. OTOH, implementation of such capabilities is a long way from rocket science, and obsessing over and overdesigning facilities to deal with this is also bad design.

In this case, a DoS attack would not even require a botnet.  Even if the
receiver starts discarding data, an attacker with one machine could jam
the receiver with hundreds of SMTP processes, all waiting for a very
slow stream of data, just enough that they can't be swapped out to
consume less memory.

The slow data attack is really a different beast, and is somewhat harder
to deal with because timers and heuristics are involved.

It might, and I emphasize might, be worth adding some discussion of this case,
assuming we can come up with something that's both general and useful.

My guess is that well-established programs already know how to deal with
this.  In my own receiver, which is standard issue Sendmail plus my own
milter routines, I get a call after every header, and after every 60KB
body chunk.  If the number of chunks exceeds 20, I return a REJECT.
What Sendmail does with that REJECT, I have not verified, but I get no
further milter calls on that message.
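The chunk-counting logic described above can be sketched roughly as follows. This is a hypothetical stand-in for a milter body callback, not Sendmail's actual milter API (which is a C interface with `smfi_*` registration); the 60KB chunk size and the limit of 20 chunks come from the figures in the text.

```python
# Hypothetical sketch of the chunk-counting milter logic described above.
# Callback name and verdict strings are illustrative, not Sendmail's API.
CHUNK_SIZE = 60 * 1024   # Sendmail hands the filter body data in ~60KB chunks
CHUNK_LIMIT = 20         # reject once the message exceeds ~1.2MB of body


class ChunkCountingFilter:
    """Counts body chunks for one message and rejects past a limit."""

    def __init__(self):
        self.chunks = 0

    def body_chunk(self, data: bytes) -> str:
        """Called once per body chunk; returns a milter-style verdict."""
        self.chunks += 1
        if self.chunks > CHUNK_LIMIT:
            return "REJECT"      # ask the MTA to refuse the message
        return "CONTINUE"        # keep feeding chunks
```

In the real milter API the equivalent verdict would be `SMFIS_REJECT` returned from the `xxfi_body` callback, after which (as the text observes) the MTA stops delivering further callbacks for that message.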

I can't speak for anyone else's server, but we have both time and data-based disconnect limits. I've considered adding a minimum data/time, but concern over customer lack of understanding and abuse of such a limit has stopped me so far.
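The time- and data-based disconnect limits mentioned here can be sketched as a simple guarded read loop. The limit values are assumptions for illustration, and `recv_chunk` is a stand-in for a socket read with its own per-read timeout:

```python
import time

MAX_BYTES = 50 * 1024 * 1024    # data-based disconnect limit (assumed value)
MAX_SECONDS = 600               # time-based disconnect limit (assumed value)


def read_data_phase(recv_chunk, max_bytes=MAX_BYTES, max_seconds=MAX_SECONDS):
    """Collect the DATA phase, enforcing both limits.

    recv_chunk() is a stand-in for a socket read; it returns b"" at
    end of data. Raises RuntimeError when either limit is exceeded,
    at which point a real server would reply and disconnect.
    """
    start = time.monotonic()
    total = 0
    parts = []
    while True:
        chunk = recv_chunk()
        if not chunk:
            break
        total += len(chunk)
        if total > max_bytes:
            raise RuntimeError("data limit exceeded (552-style reject)")
        if time.monotonic() - start > max_seconds:
            raise RuntimeError("time limit exceeded (421-style disconnect)")
        parts.append(chunk)
    return b"".join(parts)
```

A minimum data/time rate check, as considered above, would be one more comparison inside the loop (`total / elapsed` against a floor), with the abuse concerns the author notes.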

This looks to me like a security flaw in SMTP.

I disagree, or rather, if it is, it's a flaw in pretty much every protocol in existence. And since, even if you lock down every possible case at the server level, there are still very effective attacks at the TCP level, this is arguably a flaw in the design of the entire stack.

Sure, there are DoS problems at other layers of the stack (SYN floods being a good example), but those are outside our current scope. What I'm interested in is the application layer, and specifically SMTP. It seems to me that there should be a simple way to avoid any vulnerabilities added by SMTP. I can set limits on time and total data in my receiver, but if the transmitter ignores my REJECT, it becomes a transport layer problem, and I hand it off to the firewall: block this IP address for 24 hours!

Seems to me RFC-5321 could have said: The client MUST pause every 1000KB (or 100 seconds, whichever comes first) and look for a REJECT. Clients who fail to do this risk being blocked by a firewall.

100KB seems like a sensible chunk size. No need for a separate process monitoring the reply channel, no need to wait to confirm there is no REJECT after each chunk, and no full handshake is required. Just don't ignore any REJECTs that are sent.
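The client-side behavior proposed here, stream the data but peek at the reply channel between chunks, needs no extra round trips: a zero-timeout `select()` just asks whether the server has already said something. A minimal sketch, assuming a raw socket mid-DATA (the 100KB interval is the figure from the text, and the helper name is hypothetical):

```python
import select
import socket

CHECK_EVERY = 100 * 1024   # peek for an early reply after each 100KB


def send_data(sock: socket.socket, body: bytes) -> None:
    """Stream the DATA payload, aborting if the server rejects early.

    Between chunks, a zero-timeout select() checks whether a reply is
    already waiting; no handshake and no waiting is added when the
    server stays silent.
    """
    sent = 0
    while sent < len(body):
        chunk = body[sent:sent + CHECK_EVERY]
        sock.sendall(chunk)
        sent += len(chunk)
        readable, _, _ = select.select([sock], [], [], 0)
        if readable:
            reply = sock.recv(512).decode("ascii", "replace")
            if reply[:1] in ("4", "5"):   # early rejection from the server
                raise RuntimeError("server aborted transfer: " + reply.strip())
    sock.sendall(b"\r\n.\r\n")            # end-of-data marker
```

A client that never does this select is exactly the one the text says should risk a firewall block.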

-- Dave
