ietf-mxcomp

RE: Will SPF/Unified SPF/SenderID bring down the 'net?

2004-06-29 03:53:49

80,000 spambots? Possible, yes. Easy, no way.

At 50 attacks a second, this attack would have revealed the IP addresses
of the entire cluster in half an hour.
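A quick sanity check of that arithmetic, using only the figures above:

```python
# Figures from the post: 50 attacks per second, observed for half an hour.
attacks_per_second = 50
seconds = 30 * 60  # half an hour

connections_seen = attacks_per_second * seconds
print(connections_seen)  # 90000 -- enough to expose all 80,000 bots
```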

It would be much easier to simply DDoS the recipient's public DNS and make
them unreachable. That would require far fewer bots and would not require
the bots to use TCP and thus reveal their location. I doubt that many DNS
servers outside core DNS can survive a DDoS attack from a hundred or so
broadband bots.

Even under these assumptions, the attacker can only DDoS 800 sites at once
with this cluster.
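For reference, the 800-sites figure follows from dividing the cluster by the "hundred or so broadband bots" needed per DNS target:

```python
# Figures from the post: an 80,000-bot cluster, ~100 bots per DDoS target.
botnet_size = 80_000
bots_per_target = 100

print(botnet_size // bots_per_target)  # 800 concurrent targets
```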

More machines will be offline for non-attack reasons.




 -----Original Message-----
From:   Douglas Otis [mailto:dotis(_at_)mail-abuse(_dot_)org]
Sent:   Mon Jun 28 21:22:21 2004
To:     Hallam-Baker, Phillip
Cc:     'Matthew Elvey'; 'IETF MARID WG'
Subject:        RE: Will SPF/Unified SPF/SenderID bring down the 'net?

On Mon, 2004-06-28 at 19:53, Hallam-Baker, Phillip wrote:
I fail to see the connection between an argument that a 20% 

That's not what I'm talking about.  Consider the whole post, 
particularly paragraphs 1-3.  Arrgh.
The URL is above.

How about you or Doug try to post a clear attack scenario
that shows exactly how such an attack would be performed?

I thought it was clear.

I have great difficulty parsing the paragraphs, let alone making
sense of statements such as 

"If used in the normal fashion, no parser is required to utilize DNS
answers, so the introduction of a parser increases vulnerabilities. "

I could have said it takes no "additional" parser to utilize A records
or SRV records or many other binary responses offered by DNS.

It is a matter of indisputable fact that all interpretation of
DNS records requires the use of some form of parser. So the 
above appears to be nonsense.

But the task of parsing SPF requires an additional parser, and with it
comes either an increase in the resources needed or errors created while
handling malformed input, where there would have been none if such a
parser were not added.

"The
less rigid and more extensible the syntax, the greater the
vulnerabilities. "

It is also a matter of indisputable fact that XML can be parsed
using a finite state machine of fewer than 20 states accompanied by
a simple stack. I have written such a parser several times.
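As an illustration of that claim (a minimal sketch, not Hallam-Baker's actual parser): a well-formedness check for a toy XML subset, with no attributes, comments, entities, or CDATA, driven by a three-state machine plus a stack.

```python
def well_formed(xml: str) -> bool:
    """Toy well-formedness check: a small state machine plus a stack.

    Handles only element tags and text -- no attributes, comments,
    entities, or CDATA. Purely illustrative.
    """
    TEXT, OPEN, NAME = range(3)
    state, name, closing, stack = TEXT, "", False, []
    for ch in xml:
        if state == TEXT:
            if ch == "<":
                state, name, closing = OPEN, "", False
        elif state == OPEN:
            if ch == "/":
                closing, state = True, NAME
            elif ch.isalnum():
                name, state = ch, NAME
            else:
                return False
        elif state == NAME:
            if ch == ">":
                if closing:
                    # Closing tag must match the innermost open element.
                    if not stack or stack.pop() != name:
                        return False
                else:
                    stack.append(name)
                state = TEXT
            elif ch.isalnum():
                name += ch
            else:
                return False
    return state == TEXT and not stack
```

A full parser needs more states for attributes and the like, but the control structure stays about this small.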

The larger these records are allowed to grow, or the greater their
numbers, the greater the risk. I thought the XML issue was resolved.
I have yet to see any evidence of that, however, and this question seems
odd in that respect.

<snip>
"An attacker "jamming" the checking mechanism might set up DNS servers
for domains they control that respond erratically and offer complex
record sets with small TTLs."

So a DDoS attack on your own ability to send email. This can
be addressed by a security consideration: if you have to resolve
more than X records, then consider the data spurious and reject
the mail.
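One way to express that "more than X records" guard, sketched in Python. The limit of 10 and the mechanism names are assumptions for illustration; the post does not fix a value for X:

```python
# Hypothetical lookup-budget check for an SPF-style record.
# MAX_LOOKUPS and the mechanism set are assumptions, not a fixed standard here.
MAX_LOOKUPS = 10

# Terms that trigger further DNS resolution when evaluated.
DNS_TERMS = {"a", "mx", "ptr", "exists", "include", "redirect"}

def within_budget(terms, budget=MAX_LOOKUPS):
    """Return False for records that would trigger too many DNS lookups."""
    lookups = 0
    for term in terms:
        # Strip a leading qualifier (+ - ~ ?) and any ':' or '=' argument.
        name = term.lstrip("+-~?").split(":")[0].split("=")[0]
        if name in DNS_TERMS:
            lookups += 1
    return lookups <= budget
```

A record such as `["include:a.example"] * 11` would then be treated as spurious and the mail rejected.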

Exactly.  The goal would be to slow reception and thereby allow greater
distribution to a larger array of servers.  What is this limit?  What is
the average number of references to other domains?

"As example, a mail server is receiving 50 messages per second that
average 4 K bytes in size."

Assume that three contain PowerPoint presentations of 1 MB, four
contain Word documents of 500 KB, and five contain pictures of little
Timmy. That would be a more realistic load, but it would prevent the
dire conclusion.

This was not assuming a bandwidth limitation.  I only attempted to
illustrate that at half the number of messages, the traffic was not
changed.

"These 10 queries will also add to the traffic at 350 bytes per record a
total of 4K bytes of additional traffic for a doubling of the network
load."
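The arithmetic behind that quote, using its own figures (the post rounds 3,500 bytes up to 4 KB):

```python
# Figures from the quoted text.
message_bytes = 4_000   # average message size
queries = 10            # DNS queries per checked message
record_bytes = 350      # bytes per DNS record

extra = queries * record_bytes
print(extra)                  # 3500 -- roughly 4 KB per message
print(extra / message_bytes)  # 0.875 -- close to a doubling of the load
```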

Assuming the attack is present on every email, and the mail server
does not get clever and stop accepting email from malicious IP addresses.

This could easily be happening from 80,000 addresses. That is a small
number out of the millions possible. I would not expect too much effort
to be made using normal tactics, however.

Estimates of the number of compromised hosts on the Internet vary, but
the highest number in a botnet tends to be in the tens of thousands.
If the attacker is using a different bot for each connection, that
is 3,000 bots per minute, or 180,000 per hour.

Such an estimate is low, but using your 10,000 bots with just 56 kbaud
links and 0.4 KB per message, that would be 175,000 messages per second.
If able to reduce mail performance to 25 messages per second, the attack
could hinder 7,000 mail servers. Bad news for someone, I would suspect.
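The figures above work out as follows (all numbers taken from the paragraph):

```python
bots = 10_000
link_bps = 56_000        # 56 kbaud link per bot
message_bits = 400 * 8   # 0.4 KB per message = 3,200 bits

per_bot = link_bps / message_bits  # 17.5 messages per second per bot
total = bots * per_bot
print(int(total))                  # 175000 messages per second

slowed_rate = 25                   # messages/second a jammed server handles
print(int(total / slowed_rate))    # 7000 mail servers hindered
```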

This is an attack I really really would like to see, we could map 
out the zombies on the Internet pretty quick.

How?  You may have captured a series of dynamic IP addresses, and others
that were simply transferring real mail.  Of these messages, none were
rejected, as they contain nothing to indicate they are spam.  None of
the domains in the return path were from closed lists.  You will have
made little progress in knowing anything for certain.

[Why not just DDoS the email server directly and have done with it????]

The attack is to convince providers SPF is not worth it.

<snip>
If malicious.com or compromised.com have a malicious record in
their DNS it should become apparent quite quickly.
 

I don't see how.  Say nine-tenths of a domain's records resolve.
How is this domain identifiable as malicious?
Maybe it's just under attack.

If it is under attack then it probably should be ignored until
it recovers. All the mail 'from' that source is most likely spam
anyway.

If there is a slow response to a DNS query, then this identifies the
source as a spammer?  What type of behavior are you recommending?

Also remember that the sender has to be holding an open TCP
session during this process with a known source IP port. This
is not exactly an anonymous attack.

Again, how is a malicious actor identified automatically?

By logging the IP address of the source of the attack.

How do you identify the attacker?  What is different?  
  
The argument makes no sense unless you can state which part of the
DNS is going to be attacked and how such an attack would make it
easier to send spam.
 
Same way taking down BLs works.  It makes 'em problematic (e.g.,
unreliable and/or very resource intensive), so folks stop using 'em.

BLs have a single point of failure that is similar to the problem
of running core DNS, you take down one part of the network and in
time the rest of the net grinds to a halt.

You have failed to show that there is a dependency that looks anything
like the dependency that a mail server has on a BL or on core DNS.

I would say there is no analogy between a BL and an MTA checking SPF
records.  A BL can be scaled to handle DoS attacks, as there would be an
interest in ensuring such.  Those running mail will now find themselves
under attack as a result of a flood of DNS queries resulting from
spoofed mail to MTAs employing the SPF mechanism.
 
Seems to me that any such attack has the problem that the attack
and likely motive become immediately obvious to sender.com. All
we need to do is to work out a way to close the loop.  

And how do we do that?

Reporting mechanism to allow sender.com to tell receiver.com that it
is observing large numbers of packets that appear to be emitted 
from that domain.

A reporting mechanism to increase the MTA load during a possible attack
that cannot identify the source of the attack, but knows it is being
attacked because it is taking too long to query DNS?  What is the source
of a well-disguised distributed attack anyway?

-Doug