I hate to spoil a good party, but it seems to me there are several minor flaws
and one major flaw with all the proposed methods of reducing spam through
authenticating senders, which will either prevent their adoption or render
them useless once widely adopted.
SPF, for instance, breaks forwarding. That alone may stop it catching on
widely, because many domain owners rely on forwarding for their incoming
mail. Forwarded mail arrives from the forwarder's server, whose address will
not appear in the original sender's SPF record, so if the domain owner's ISP
checks SPF they could lose all incoming mail from domains which publish one.
To prevent this, the forwarding company, over which the domain owner may have
no control, must take action. So we have a proposed system which requires
unrelated entities to take co-ordinated action if it is not to disrupt mail
delivery to the very people responsible for enabling it. This
could be fixed by adding a Recipient Protocol Framework (my name - apologies
if something called this already exists) to the standard: a record specifying
servers which are to PASS all mail addressed To: (or Cc:, Envelope-To:, etc.)
the domain, without checking the sender's domain. The RPF record could then
be set up by the domain owner, either as a separate string or as an entry in
the same string as the SPF record, and so forwarding could be enabled by the
person who should control it - the domain owner - without worrying about
whether one company or another had implemented the checks.
However, this is only a minor flaw.
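To make the forwarding problem concrete, here is a toy sketch - not a real
SPF implementation, and the RPF semantics are only my guess at how my own
suggestion might work. All the domain names, addresses and record contents
are invented for the example.

```python
# Toy illustration of why forwarding fails an SPF check, and how a
# hypothetical "RPF" record published by the RECIPIENT'S domain could
# bypass the check for trusted forwarders. Everything here is invented.

SPF = {"sender.example": {"192.0.2.10"}}    # IPs allowed to send for a domain
RPF = {"owner.example": {"198.51.100.5"}}   # forwarders the recipient trusts

def check(mail_from_domain, rcpt_domain, connecting_ip):
    # Hypothetical RPF rule: if the recipient's own record lists the
    # connecting server, accept without consulting the sender's SPF.
    if connecting_ip in RPF.get(rcpt_domain, set()):
        return "PASS"
    allowed = SPF.get(mail_from_domain)
    if allowed is None:
        return "NONE"  # sender's domain publishes no SPF record
    return "PASS" if connecting_ip in allowed else "FAIL"

# Direct delivery: the sender's own server connects.
print(check("sender.example", "owner.example", "192.0.2.10"))    # PASS
# Forwarded delivery to a domain with no RPF record: the forwarder's
# IP is not in sender.example's SPF record, so legitimate mail is lost.
print(check("sender.example", "other.example", "198.51.100.5"))  # FAIL
# With an RPF record, owner.example's mail host accepts the forwarder.
print(check("sender.example", "owner.example", "198.51.100.5"))  # PASS
```

The point of the sketch is only that the recipient's domain owner, not the
forwarding company, is the one who sets the record which keeps mail flowing.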
Much more significant is the fact that all these systems depend on publishing
information. What is published for the use of the good guys is equally
available to the baddies, and that is the real problem. Among the hundreds
(or, more likely, thousands) of requests for SPF records which name servers
would receive from MTAs every second could be hundreds of identical requests
from a data-mining program seeking to build a reverse-lookup database mapping
sending addresses to the domains which authorise them. Once the spammers have
such a database it will be relatively simple for them to write software which
spoofs a credible domain for every spam injection point they use. And where
will that leave us? Back where we started, but with increased network traffic
and DNS load doing useless checks.
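The data-mining step really is trivial. The following sketch inverts a set of
harvested SPF records into exactly such a reverse-lookup database; the
domains, addresses and records are invented, and for brevity it parses only
plain ip4: terms, ignoring CIDR ranges and other mechanisms.

```python
# Build a reverse index from sending IP address to the domains whose
# published SPF records authorise it. A spammer controlling a given
# injection point can then pick a credible, spoofable domain for it.
# Harvested records below are invented; only bare ip4: terms are parsed.

def reverse_index(spf_records):
    index = {}
    for domain, policy in spf_records.items():
        for term in policy.split():
            if term.startswith("ip4:") and "/" not in term:
                index.setdefault(term[4:], []).append(domain)
    return index

harvested = {
    "shop.example": "v=spf1 ip4:192.0.2.10 -all",
    "bank.example": "v=spf1 ip4:192.0.2.10 ip4:203.0.113.7 -all",
}
idx = reverse_index(harvested)
# A compromised machine at 192.0.2.10 can now send mail which SPF
# itself certifies as legitimately from either of these domains:
print(idx["192.0.2.10"])  # ['shop.example', 'bank.example']
```

A few hours of patient querying, spread across many resolvers, would populate
such a table for every domain which publishes a record.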
Certainly, my experience with spam is that, wherever it appears to come from,
there are only five or six basic variants being sent repeatedly at any one
time. Therefore, it is likely all this mail is coming from only five or six
real sources which have managed to infiltrate the Internet at a large number
of points. Do we really imagine people who do that will be put off by the
need to run a little more automated research?
Identifying spammers (or, more likely, trojanised mailers) and their reply
websites (or, again more likely, trojanised proxies) is only as good as the
abuse desks which take action, and overstretched desks take time to close
down these gateways. The spammers know this: presumably they make their
profit in the intervening interval, and are always ready to move on. They are
the
Internet's suitcase boys, standing on a street corner and ready to run when a
police officer comes within sight. To make spamming and the associated
activities unprofitable requires an automated system which either cannot be
abused, or which punishes abuse so rapidly it cannot be worthwhile.
I don't want to be negative, but it seems evident that a lot of highly
talented effort is being directed at something which is not a solution, and
that effort could be better used devising something more radical and
effective. I realise such a solution might be off-topic for this list. I have
no ideas at present, I'm afraid.
I was directed to this list by the spf-discuss sign-up response.
Best wishes,
K.J. Petrie.