
Re: new dimensions in stopping spam

2004-03-10 19:41:23
On Wed, Mar 10, 2004 at 03:00:58PM -0800, Greg Connor wrote:
| Minor quibbles:
| 
| >The space of IPv4 addresses is 4.2 billion bits wide.
| 
| Suggest:
| The space of IP addresses is wide - 32 bits or 4.2 billion possible 
| addresses (more with IPv6)
| 

fixed, thanks.

| 
| One point you might want to mention:
| 
| Spammers may turn to registering hundreds of domains for their own use.  In 
| this case, designated sender schemes can be used in conjunction with DNSBLs 
| to block all domain names that are served from the same DNS server.
| 

I'm not sure that blocking DNS servers will be useful as a long-term
strategy, because zombie machines can start running DNS servers
themselves, and then we'll have Yet Another MTAMark Proposal but aimed
at port 53 instead of port 25.
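
For concreteness, the nameserver-DNSBL check Greg describes might look
something like the sketch below (using the dnspython library; the
blocklist zone "nsbl.example.org" is made up):

  # Sketch: refuse mail for a domain if any of its nameservers is
  # listed in a (hypothetical) DNSBL of spammer-run nameserver IPs.
  import dns.resolver

  NSBL_ZONE = "nsbl.example.org"   # hypothetical blocklist zone

  def nameservers_blocklisted(domain):
      for ns in dns.resolver.resolve(domain, "NS"):
          for a in dns.resolver.resolve(str(ns), "A"):
              ip = str(a)
              # DNSBL convention: query the reversed IP under the zone
              query = ".".join(reversed(ip.split("."))) + "." + NSBL_ZONE
              try:
                  dns.resolver.resolve(query, "A")
                  return True    # listed
              except dns.resolver.NXDOMAIN:
                  pass           # this nameserver is clean
      return False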

Blocking based on registrar might make more sense, but that opens the
door again to the mismatch between principal and provider which bedevils
the DNSBL field.  When spammers hide behind forgery, a reputation that
properly belongs to the principal instead attaches to the provider.
Domain-based authentication is a way to pierce that veil.

Greylisting, etc., may be a valid way to impose a cost on spammers
churning through new domains.  Accreditation systems are another way to
express that cost.  If a legitimate new domain doesn't want to put up
with the cost of greylisting, it may prefer to pay an accreditation
service to vouch for it.
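
(A minimal greylisting sketch, assuming an MTA policy hook that sees
each delivery attempt as a client-IP/sender/recipient triplet:)

  # Sketch of greylisting: temp-fail the first attempt from an unknown
  # triplet, accept once it retries after the delay has elapsed.
  import time

  GREYLIST_DELAY = 300       # seconds a triplet must wait
  first_seen = {}            # triplet -> time of first attempt

  def check(client_ip, sender, recipient):
      triplet = (client_ip, sender, recipient)
      now = time.time()
      first = first_seen.setdefault(triplet, now)
      if now - first < GREYLIST_DELAY:
          return "451 4.7.1 Greylisted, try again later"
      return "250 OK"        # real MTAs retry; most spamware gives up

The cost to the spammer is that every new domain and triplet eats a
retry delay before any mail gets through.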

| 
| Another possible point, if you want to use it:
| 
| Some domains may choose not to publish designated sender info, or publish 
| too wide a range.  [Gray graphic with white spots can show a white stripe 
| all the way across].  Recipients can determine for themselves if this 
| policy is "too promiscuous" and either refuse the mail or downgrade/filter 
| it.
| 

This is true, but I see this strategy as mixing heuristics with
standards, which is something we are trying to get away from.  I think
it is better to try to play the standards game according to conformance
vs nonconformance rules, and to shift the questions of heuristics and
judgements into the domain of reputation systems where they belong.

Blocking overly promiscuous domains is a choice that should be made by a
reputation system.  That choice should be based on the observation that
spammers have chosen to exploit that promiscuity, not on the conjecture
that they might.

The difference is partly founded on the concept of "innocent until
proven guilty" and partly on the fact that, as any randy adolescent can
attest, spammers can game any logic that tries to detect promiscuity.  A
spammer domain may return one conservative SPF record to antispam
probes, and a promiscuous record to actual SMTP receivers.  Better to
put energy toward detection of actual spam.  Attempts to measure the
likelihood that a domain will spam strike me as the sort of thing
Schneier criticizes about airport security.

  http://www.schneier.com/essay-identification.html
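
To illustrate the two-faced record trick (every name and address range
below is made up), the spammer's nameserver only has to branch on who
is asking:

  # Sketch: serve a conservative SPF record to suspected antispam
  # probes, and a promiscuous one to everybody else.
  from ipaddress import ip_address, ip_network

  PROBE_NETS = [ip_network("192.0.2.0/24")]    # hypothetical probe nets

  def spf_txt_for(querier_ip):
      if any(ip_address(querier_ip) in net for net in PROBE_NETS):
          return "v=spf1 ip4:203.0.113.5 -all"  # looks respectable
      return "v=spf1 +all"                      # promiscuous in practice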

Not that people won't try to detect promiscuity anyway :) ... it's human
nature.

  http://www.ajc.com/news/content/news/0304/10kiss.html