ietf-mxcomp

RE: Will SPF/Unified SPF/SenderID bring down the 'net?

2004-06-29 12:51:11

I am not going to play hunt-the-attack here. Each time I answer one set of
questions you redefine your position. Then you claim that your changed
position is unchanged and that it is my fault for misinterpreting the
original palimpsest.

If you have 80k bots you can cause real pain for pretty much any internet
site you choose. But the proposed attack is not the way to do it.

Clearly branching is not good if it is unbounded. It is not the branch ratio
that is the issue, though; it is the maximal permitted stack depth.
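To illustrate the point, here is a minimal sketch (not from the thread; the depth cap of 10 and the `count_lookups` helper are assumptions for illustration) showing how capping recursion depth, rather than the per-record branch ratio, bounds the work a resolver will do when expanding a chain of record indirections:

```python
# Hypothetical sketch: a depth cap bounds record expansion regardless of
# how many references any single record contains. `table` stands in for
# DNS TXT lookups: it maps a record name to the records it references.

MAX_DEPTH = 10  # assumed cap, for illustration only

def count_lookups(record, table, depth=0):
    """Count the lookups needed to fully expand `record`,
    refusing to follow chains deeper than MAX_DEPTH."""
    if depth > MAX_DEPTH:
        raise RuntimeError("record chain too deep; treat data as spurious")
    total = 1  # fetching `record` itself is one lookup
    for ref in table.get(record, []):  # each indirection costs a lookup
        total += count_lookups(ref, table, depth + 1)
    return total
```

With a depth cap in place, even a maliciously branchy record set can only force a bounded number of queries before resolution is abandoned.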

 -----Original Message-----
From:   Douglas Otis [mailto:dotis(_at_)mail-abuse(_dot_)org]
Sent:   Tue Jun 29 10:08:16 2004
To:     Hallam-Baker, Phillip
Cc:     'Matthew Elvey'; 'IETF MARID WG'
Subject:        RE: Will SPF/Unified SPF/SenderID bring down the 'net?

On Tue, 2004-06-29 at 03:53, Hallam-Baker, Phillip wrote:
80,000 spambots? Possible, yes. Easy, no way.

At 50 attacks a second this attack has revealed the IP addresses of the
entire cluster in half an hour.

How? You make this assertion but provide no method by which these
machines are to be identified.  The attack would not work without
legitimate machines also making requests.  How do you go about
separating the wheat from the chaff?

It would be much easier to simply DDoS the recipient's public DNS and make
them unreachable. That would require far fewer bots and would not require
the bots to use TCP and thus reveal their location. I doubt that many DNS
servers outside core DNS can survive a DDoS attack from a hundred or so
broadband bots.

That is not the purpose of the attack however.

Even under these assumptions the attacker can only DDoS 800 sites at once
with this cluster.

More machines will be offline for non attack reasons.  

Is this your way of saying it does not matter?

<snip>
So a DDoS attack on your own ability to send email. This can
be addressed by a security consideration: if you have to resolve
more than X records, then consider the data spurious and reject
the mail.
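The suggested security consideration could be sketched as a running lookup budget (my sketch, not anything specified in the thread; the limit of 10 and the `LookupBudget` / `PermError` names are assumptions for illustration):

```python
# Hypothetical sketch of the "more than X records => reject" rule:
# charge every DNS fetch against a fixed budget and abort resolution,
# rejecting the mail, once the budget is exceeded.

MAX_LOOKUPS = 10  # assumed value of X, for illustration only

class PermError(Exception):
    """Raised when resolution exceeds the lookup budget."""

class LookupBudget:
    def __init__(self, limit=MAX_LOOKUPS):
        self.limit = limit
        self.used = 0

    def charge(self):
        """Call once per DNS lookup performed during evaluation."""
        self.used += 1
        if self.used > self.limit:
            raise PermError("too many DNS lookups; treat data as spurious")
```

A resolver would create one budget per policy evaluation and call `charge()` before each query, so a record set demanding unbounded resolution fails fast instead of amplifying traffic.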

Let me ask this again regarding the number of record indirections.  Do
you see a problem if there are on average 1.1 record indirections?  How
about 1.6, or 2.1?  With these average indirections, what are the recursion
limits needed to resolve a permitted traversal path?  What algorithm defines
loop detection, tree pruning, etc.?  What is the result if the tree is
pruned?

Exactly.  The goal would be to slow reception and thereby allow greater
distribution to a larger array of servers.  What is this limit?  What is
the average number of references to other domains?
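A back-of-the-envelope calculation shows why the average indirection count matters: if each record references on average r others and chains are followed to depth d, the queries per evaluation form a geometric series. (The depth of 10 here is my assumption for illustration, not a figure from the thread.)

```python
# Expected queries for average branch ratio r followed to depth d:
# 1 + r + r^2 + ... + r^d (a geometric series).

def total_queries(r, d):
    return sum(r**i for i in range(d + 1))

# For the averages asked about above, with an assumed depth of 10:
#   r=1.1 -> roughly 18.5 queries
#   r=1.6 -> roughly 291.5 queries
#   r=2.1 -> roughly 3183 queries
for r in (1.1, 1.6, 2.1):
    print(r, round(total_queries(r, 10), 1))
```

The jump from ~18 to ~3000 queries between averages of 1.1 and 2.1 is the amplification being debated: small changes in the average indirection count change the load by orders of magnitude.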
<snip>

-Doug