Brad Knowles wrote:
At 10:21 AM -0400 2003/09/08, Chris Lewis wrote:
We're considering greylisting as an adjunct to our filters. However,
since we have 8 inbound gateways, it could get rather messy. A
simple-minded implementation with a half-hour delay would have a four-hour
worst-case delay... Not acceptable.
Whereas a place like AOL (which had over 45 inbound gateways at the
time I left) would be much, much worse. This is why places with
multiple MXes should implement a shared central database for the
greylisting, so that all machines can benefit from the traffic seen by
the others.
"Simple-minded" was intended to mean "without a shared central database".
H'm, t'would be amusing to try this out with a much-hacked DNS server
doing the "sharing". Create a query like
"sender.ip.recipient.greylistzone", and let the DNS server do the counting.
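A minimal sketch of that idea (all names here are hypothetical, not a real DNS implementation): each MX encodes the tuple as a DNS-style name under a dedicated zone, and the shared "server" simply counts how many times each key has been queried, so every gateway benefits from hits recorded by the others.

```python
# Sketch of DNS-style shared greylist counting (hypothetical names).
# Each MX builds a key like "sender.ip.recipient.greylistzone" and a
# central counter answers: how many times has this tuple been seen?

from collections import defaultdict

class GreylistCounter:
    """Stands in for the much-hacked DNS server doing the counting."""
    def __init__(self):
        self.seen = defaultdict(int)

    def query(self, key: str) -> int:
        """Return the prior hit count for this key, then record the hit."""
        count = self.seen[key]
        self.seen[key] += 1
        return count

def greylist_key(sender: str, ip: str, recipient: str) -> str:
    # Encode the tuple as a DNS-style name under a dedicated zone;
    # a real implementation would have to escape "@" and "." in addresses.
    return f"{sender}.{ip}.{recipient}.greylistzone".lower()

counter = GreylistCounter()  # shared by all inbound gateways
key = greylist_key("alice=example.com", "192.0.2.1", "bob=example.net")
first = counter.query(key)   # 0 -> never seen: tempfail this attempt
second = counter.query(key)  # 1 -> a retry was seen somewhere: accept
```

The point of routing this through DNS is that every gateway already speaks the protocol and gets caching for free; the counting logic itself is trivial.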
The simple fact of the matter is that open proxy/socks code will
_not_ queue - so it won't try a second time[2]. I would strongly
suspect that if you made your greylisting timeout _zero_, and simply
400'd the first appearance of a given sender/IP/recipient tuple and
accepted the next appearance, no matter how quickly it came, you'd still
be getting 90% of what greylisting with a very long timeout would give you.
Fair enough. But then you get people like Vernon Schryver who
complain that you're not compliant with RFC 2821 if the sender doesn't
implement a minimum thirty minute retry period, despite being unable to
provide a reference for this requirement.
I'm not the sender, only the recipient. So, I couldn't be in violation
of such a rule.
As if spammers care whether Vernon yells at them or not ;-)
At the very least, I'd like to see some testing showing how much
greylisting's minimum required latency helps, or fails to help.
This is something I mean to try on our huge spamtrap. If only as a
pure-research project...
_______________________________________________
Asrg mailing list
Asrg@ietf.org
https://www1.ietf.org/mailman/listinfo/asrg