At 6:51 pm -0500 15/11/2007, John C Klensin wrote:
> Unfortunately, graylisting is one of those techniques that works
> well as long as sufficiently few people use it that the spammers
> and bot architects don't feel motivated to go to the extra work
> to overcome it.
I don't believe it to be a simple problem for the spammers and bot
architects to solve. Not only do they need to start keeping track of
state, they need to keep track of a lot of state (the volume of
messages they are trying to send is huge), which requires a lot more
resources, and they need to manage CPU and network resources such
that they are available for them to retry later. To some extent they
also need to make sure the receiving end has enough resources to
accept their connections; some other spammer overloading a server
will prevent them from sending spam to it. I consider resource
management to be the fundamental problem for spammers, as the bulk of
the spam is coming from bots using hijacked resources. They don't
have total control over those resources (e.g. the user can switch off
the machine, or a dynamic IP can change), and most importantly, if they
were able to hijack those resources, someone else can too. The latter
turns this into a Tragedy of the Commons type problem for the spammers.
> My guess is that we have passed at least the
> first version of that point: I'm seeing a rapidly increasing
> number of spam messages arriving in a one-two sequence from the
> same putative source. First one message is sent, then a second
> is sent a few minutes later. That doesn't even require that the
> bot maintain state, although graylisting that actually keeps
> track of message headers or signatures will.
I'm using a graylisting delay of 25 minutes. Tens of thousands of
bots keeping sufficient state for more than 25 minutes, and then
coordinating things such that they can get through to my server on
the 24 connections allocated to them, would require quite a radical
change in behavior.
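A minimal sketch of the graylisting logic I'm describing (the function
and variable names are my own, and a real implementation would persist
the table and expire old entries; the 25-minute delay matches my setup):

```python
import time

GRAYLIST_DELAY = 25 * 60  # seconds; the delay discussed above

# first connection time, keyed on the (IP, sender, recipient) triplet
first_seen = {}

def check(ip, sender, recipient, now=None):
    """Return 'tempfail' (451: try again later) for a first attempt or
    a too-early retry, 'accept' once the delay has elapsed."""
    now = time.time() if now is None else now
    key = (ip, sender, recipient)
    if key not in first_seen:
        first_seen[key] = now
        return "tempfail"  # never seen: defer and record the attempt
    if now - first_seen[key] < GRAYLIST_DELAY:
        return "tempfail"  # retried too soon: defer again
    return "accept"        # legitimate retry after the delay
```

The point of the triplet key is that the bot must retry the same
message from the same IP after the delay, which is exactly the state
it would have to keep.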
Of the ~150,000 unique IPs that have connected to my server in the
last two weeks, just over 70% of them only connected once.
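That statistic is easy to reproduce from a connection log; a rough
sketch, assuming you have already reduced the log to one IP address
per connection:

```python
from collections import Counter

def single_connection_fraction(ips):
    """Given one entry per connection, return the fraction of unique
    IPs that connected exactly once."""
    counts = Counter(ips)
    singles = sum(1 for c in counts.values() if c == 1)
    return singles / len(counts)

# Three unique IPs, two of which connected exactly once:
# single_connection_fraction(["a", "b", "a", "c"]) -> 2/3
```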
> This brings us back to the point I tried to make to Hector:
> making these folks smarter may be unwise, especially when doing
> so consumes more resources on our end and, with botnets, they
> have essentially unlimited resources for which the costs to them
> are negligible.
I consider spammers to have the kind of selfish mentality that could
never avoid being trapped by a Tragedy of the Commons. I can't see
spammers not fighting over bots, not hogging all the resources on
hijacked machines, and not hogging resources on receiving servers.
> So, since you are graylisting already, by all means enjoy the
> advantages as long as they last. But, given what I think we are
> seeing already, don't expect them to last for a long time. And
> don't ask that we change the standards to make them more
> friendly to anti-spam techniques that can reasonably be expected
> to have a relatively short lifespan.
I definitely don't think the standards should change; I'm quite happy
with the current recommendation of 30 minutes to retry.