
Re: Abort data transfer?

2009-10-21 15:38:48

Paul Smith wrote:
> John R Levine wrote:
>> No, I meant where the data in a single DATA connection never stops,
>> since that is the alleged problem that this argument is about.
>> If, as I suspect, it has never happened in the entire history of the
>> Internet, why in the world is anyone wasting time worrying about it?
>> The scenario you describe is rare enough and has little enough impact
>> that you could deal with it by hand.
> I guess the thing is that it COULD be used as a DoS attack - quite
> easily. However, it doesn't seem to be, because there are lots of other
> ways of doing a DoS attack - and those are possibly better. A 'lasting
> forever' connection is more likely to be spotted by someone than lots of
> smaller connections, so I would suspect that botnet operators would
> prefer the latter to the former.
> A rotating set of 10,000 bots at a time flooding your server with small
> messages is less likely to be spotted or stoppable than a fixed set of
> 10,000 bots trying to send infinitely long messages. If nothing else,
> the latter will almost definitely NOT fill up the server's disk (most
> servers will start discarding data after it gets over a certain amount),
> whereas the former could well do so.

It just seems like bad design to assume some input will never happen.

Of course it is, which is why it's a good idea to have facilities in servers to
address this. OTOH, implementing such capabilities is a long way from
rocket science, and obsessing over and overdesigning facilities to deal with
this is also bad design.

In this case, a DoS attack would not even require a botnet.  Even if the
receiver starts discarding data, an attacker with one machine could jam
the receiver with hundreds of SMTP processes, all waiting for a very
slow stream of data, just enough that they can't be swapped out to
consume less memory.

The slow data attack is really a different beast, and is somewhat harder
to deal with because timers and heuristics are involved.
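One way to frame the timer/heuristic approach is a minimum-throughput floor: let a connection start slowly, but drop it if its average data rate stays below some floor after a grace period. The sketch below is purely illustrative (the class and parameter names are mine, not from any actual server), showing one plausible shape for such a heuristic:

```python
import time

class ThroughputGuard:
    """Hypothetical per-connection guard (names are my own invention):
    tracks a connection's cumulative data rate and flags it for
    disconnect when the rate stays below a floor after a grace period."""

    def __init__(self, min_bytes_per_sec=100, grace_secs=60, clock=time.monotonic):
        self.min_rate = min_bytes_per_sec
        self.grace = grace_secs
        self.clock = clock          # injectable clock, handy for testing
        self.start = clock()
        self.total = 0

    def on_data(self, nbytes):
        """Record a received chunk; return True if the sender is too slow."""
        self.total += nbytes
        elapsed = self.clock() - self.start
        if elapsed < self.grace:
            return False            # still inside the grace period
        return self.total / elapsed < self.min_rate
```

The grace period matters: without it, every connection looks "too slow" in its first instant, which is exactly the customer-confusion risk mentioned below.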

It might, and I emphasize might, be worth adding some discussion of this case,
assuming we can come up with something that's both general and useful.

My guess is that well-established programs already know how to deal with
this.  In my own receiver, which is standard issue Sendmail plus my own
milter routines, I get a call after every header, and after every 60KB
body chunk.  If the number of chunks exceeds 20, I return a REJECT.
What Sendmail does with that REJECT, I have not verified, but I get no
further milter calls on that message.
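For the curious, the chunk-counting logic described above can be sketched roughly as follows. This is a simplified stand-in for a milter body callback, not the poster's actual Sendmail/milter code; the class name and string verdicts are mine:

```python
CHUNK_LIMIT = 20  # the posting's threshold: reject once chunks exceed 20

class BodyChunkCounter:
    """Illustrative stand-in for a milter body callback: the MTA hands
    the filter the message body in fixed-size chunks (~60KB each in the
    posting), and we return a REJECT verdict once the count passes the
    limit, i.e. somewhere past roughly 20 x 60KB = 1.2MB of body data."""

    def __init__(self, limit=CHUNK_LIMIT):
        self.limit = limit
        self.chunks = 0

    def on_body_chunk(self, chunk):
        """Called once per body chunk; returns a milter-style verdict."""
        self.chunks += 1
        return "REJECT" if self.chunks > self.limit else "CONTINUE"
```

A real milter would return `SMFIS_REJECT`/`SMFIS_CONTINUE` from its body callback; the point here is only the counting scheme.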

I can't speak for anyone else's server, but we have both time and data-based
disconnect limits. I've considered adding a minimum data/time, but concern over
customer lack of understanding and abuse of such a limit has stopped me so far.

This looks to me like a security flaw in SMTP.

I disagree, or rather, if it is, it's a flaw in pretty much every protocol in
existence. And since even if you lock down every possible case at the server
level there are still very effective attacks at the TCP level, this is arguably
a flaw in the design of the entire stack.

