Re: RE: rr.com and SPF records
Alex van den Bogaerdt wrote:
>> So yes, you could save a few bytes in the SPF record, but it would cost
>> 4 billion queries, plus the storage required to cache each one of those
>> queries. I don't know about you, but I get forged rr.com email all the time.
>
> From how many _different_ addresses? And how many _different_ rr.com
> customers? Not that 500 million, not by far.
If there were only 500K zombies and spammers willing to send as rr.com,
and if the local DNS server used 100 bytes per cache entry, that would be
50MB of DNS cache consumed just by rr.com's exists mechanism. If 1000
domains used an exists mechanism, I'd need 50GB of space just for the
cache. Searching through it might be slow. Or, as any decent cache would,
it will expire older (useful) entries to make room for these useless
188.8.131.52._spf.rr.com entries. When cache entries are expired, traffic
goes up as some of those records are re-fetched later.
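The arithmetic above can be sketched in a few lines of Python. The record shown in the comment is a hypothetical example of an exists mechanism (not necessarily rr.com's actual record), and the 100-byte figure is, as I said, my own estimate:

```python
# Hypothetical "exists" record of the kind discussed above, e.g.:
#   v=spf1 exists:%{ir}._spf.rr.com -all
# The %{ir} macro expands to the client IP in reversed dotted form, so
# every distinct connecting IP produces a distinct DNS name to look up,
# and therefore a distinct cache entry at the recipient's resolver.

def exists_query(ip: str, domain: str = "_spf.rr.com") -> str:
    """Expand %{ir}.<domain> for an IPv4 client address."""
    reversed_ip = ".".join(reversed(ip.split(".")))
    return f"{reversed_ip}.{domain}"

print(exists_query("10.20.30.40"))  # -> 40.30.20.10._spf.rr.com

# Back-of-the-envelope cache cost, using the figures from the text:
BYTES_PER_ENTRY = 100          # assumed size of one cached answer
FORGING_HOSTS = 500_000        # "500K zombies and spammers"
DOMAINS_USING_EXISTS = 1_000   # hypothetical count of exists publishers

per_domain = BYTES_PER_ENTRY * FORGING_HOSTS   # cache used for rr.com alone
total = per_domain * DOMAINS_USING_EXISTS      # cache across 1000 such domains
print(per_domain // 10**6, "MB;", total // 10**9, "GB")  # -> 50 MB; 50 GB
```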
Now maybe there is a TTL of 2 days on these responses. Then every cached
entry can expire and be re-fetched roughly 15 times a month, so the
monthly traffic is well over 50MB/month for each such domain. Multiply by
the number of domains that want to notify China that it has a virus, and
an innocent "exists" mechanism will cost the *recipient MTA* a bundle.
If the cost were solely the publisher's cost, it wouldn't be so bad. But
everyone else has to pay it too.
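To make the traffic estimate explicit, here is a sketch under the same assumed numbers (50MB per-domain cache footprint, 2-day TTL, and the worst case where every expired entry is re-queried):

```python
# Rough monthly-traffic estimate under the assumptions in the text.
TTL_DAYS = 2
DAYS_PER_MONTH = 30
CACHE_MB = 50.0   # per-domain cache footprint estimated above

expiry_cycles = DAYS_PER_MONTH / TTL_DAYS   # ~15 expirations per month
worst_case_mb = CACHE_MB * expiry_cycles    # if every entry is re-fetched each cycle

# "More than 50MB/month" is the floor (each entry fetched at least once);
# full turnover would be roughly 750MB/month per publishing domain.
print(worst_case_mb)  # -> 750.0
```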
I made up the 100-byte figure, allowing for the timestamp, TTL, SOA, and
other data that is cached along with the actual query string, and so on.
How much disk space and bandwidth should I allocate to deal with that
virus in China? How much would you allocate?
Note how a similar problem occurs with RBLs, except that there each
zombie IP address occupies only 100 bytes of cache, once. With SPF, that
cost is multiplied for each domain that uses exists.
I almost think that the exists mechanism is a mistake, and should be
deprecated.

Also, I think we shouldn't recommend anything to one domain that would
be too expensive if everyone implemented the same solution. We need
recommendations that scale.