spf-discuss

RE: SPF-compiling DNS Server

2005-03-24 15:45:53
From: owner-spf-discuss@v2.listbox.com [mailto:owner-spf-discuss@v2.listbox.com] On Behalf Of Radu Hociung
Sent: Thursday, March 24, 2005 2:47 PM
To: spf-discuss@v2.listbox.com
Subject: Re: [spf-discuss] SPF-compiling DNS Server

David MacQuigg wrote:
At 11:32 AM 3/24/2005 -0600, you wrote:

On Thu, 2005-03-24 at 11:03, David MacQuigg wrote:
In addition to patches for the various name servers, it would be nice to
have something that could be deployed rapidly, without even stopping a
running DNS server.

I'm not sure what "stopping a running DNS server" means in this
context.  DNS hosters who provide web interfaces to DNS zone maintenance
already effectively do just that, inasmuch as the person editing the
records doesn't need to do a kill -HUP or anything; they just need to
hit a button on a web form.


Patching a DNS server would be more disruptive than necessary.
Installing a daemon to automatically update SPF records could be done
without any disruption.  The web interface you mention above is a good
example.

If we make this nice enough, admins will update their records long
before SPF-doom arrives.

Update them to what?  This is putting the cart before the horse.


Update from an inefficient, manually-written SPF record to a compiled
record, generated automatically from the original SPF record, or its
subsequent updates.  The update is the horse.  The cart is an efficient
system for doing SPF queries, one that will never tempt virus writers to
try an attack.  If updating were a big burden, then you might say, let's
wait until we see an attack, then push everyone to update.  If we make
it simple and fun, many admins will update, just because it is easier
than what they did before.

I would discourage such a game of wait-and-see. Instead, a focused
and well-thought-out education campaign would be a better way, I feel.
Council, any thoughts/initiative on this?

Any updates have to be compatible with the current spec (or are we
thinking that there is some holy grail of SPF mechanisms that is low on
DNS load and completely describes the domain, which we should add to the
spec?).  Any change to the DNS software to make it SPF-aware, which
seems much more likely to be backward compatible, is not something that
someone who edits their zone files via the web interface their DNS
service provider offers can do.

If there is any change that can reduce the chance of this mythical
"SPF-doom" virus having a significant impact, it is to make a low-load
SPF record that is all ip4.  But I thought we already agreed that that
removes the things that make SPF SPF, and that's not what we want.  Are
we pushing for RMX in that case?  If so, then let's just use RMX and be
done with it.


As I understand it, the benefits of all the inefficient SPF mechanisms
are for convenience in setting up SPF records.  The only thing needed by
the SPF checker is a list of IP addresses (or blocks).  The
SPF-compiling daemon provides a versatile syntax for the user, and an
efficient syntax for the checker.  None of this requires changing the
SPF spec.
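None of the helper names below come from any real tool; this is just a
minimal sketch of what such a compile step could look like, with the
resolvers injected as callbacks so the example runs offline:

```python
def compile_spf(mechanisms, resolve_mx, resolve_a):
    """Flatten convenience mechanisms (a:, mx:) into ip4: terms.

    resolve_mx(domain) -> list of MX hostnames for the domain
    resolve_a(host)    -> list of IPv4 addresses for the host
    Both stand in for real DNS lookups, so the sketch is testable.
    """
    flat = []
    for mech in mechanisms:
        if mech.startswith("ip4:"):
            flat.append(mech)                  # already efficient, keep as-is
        elif mech.startswith("a:"):
            for ip in resolve_a(mech[2:]):
                flat.append("ip4:" + ip)
        elif mech.startswith("mx:"):
            for host in resolve_mx(mech[3:]):
                for ip in resolve_a(host):
                    flat.append("ip4:" + ip)
        else:
            flat.append(mech)                  # pass qualifiers like -all through
    return "v=spf1 " + " ".join(flat)

# Stub resolvers standing in for real DNS data:
mx = {"example.com": ["mail1.example.com", "mail2.example.com"]}
a = {"mail1.example.com": ["192.0.2.10"],
     "mail2.example.com": ["192.0.2.11"]}

record = compile_spf(["mx:example.com", "ip4:198.51.100.0/24", "-all"],
                     mx.get, a.get)
print(record)
# v=spf1 ip4:192.0.2.10 ip4:192.0.2.11 ip4:198.51.100.0/24 -all
```

The admin keeps writing the convenient mx:/a: form; only the published
record is the flat ip4: list the checker sees.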

Where is the middle ground?  If any mechanisms other than ip4 (and ip6,
of course) are used, then the system has a DNS load/query amplification
issue.  How much amplification is acceptable?  I'm reading that none is
acceptable.  In some cases, the amplification is directly related to
accepted DNS configuration and currently deployed network topologies
(mx:aol.com amplifies to more than mx:leave-it-to-grace.com because
there are more queries and records behind aol's MX).  Do none of the
other benefits of the SPF syntax and (comparatively) rich mechanisms
outweigh this possibility of an SPF-doom attack, an attack that may be
on SPF (either technically or socially) but isn't actually SPF's fault?

Acceptable amplification. Great question.

Sendmail currently does at least 3 queries for every incoming
connection. (PTR against the IP, A against the resulting name, and A
against the MAIL FROM domain).

The A query against the MAIL FROM domain is only looking for NXDOMAIN or
a real response (even empty), in order to weed out domains that do not
exist. This query should be replaced with the first TXT query of the SPF
check. If it returns NXDOMAIN, sendmail rejects the mail as before. If
not NXDOMAIN, then we have an SPF result (none or something else).
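That decision can be sketched as follows (the query_txt callback is a
hypothetical stand-in for the actual DNS lookup; the return strings are
illustrative, not sendmail's):

```python
NXDOMAIN = "NXDOMAIN"

def check_mail_from(domain, query_txt):
    """Replace the plain A lookup on the MAIL FROM domain with the
    first TXT query of the SPF check.  query_txt(domain) returns
    NXDOMAIN, an empty list, or a list of TXT record strings."""
    answer = query_txt(domain)
    if answer == NXDOMAIN:
        return "reject"                        # domain does not exist, as before
    spf = [txt for txt in (answer or []) if txt.startswith("v=spf1")]
    if not spf:
        return "accept (SPF result: none)"     # domain exists, no policy
    return "accept (evaluate policy: %s)" % spf[0]

# Stub DNS data standing in for real zones:
records = {"good.example": ["v=spf1 ip4:192.0.2.0/24 -all"],
           "no-spf.example": []}
def query_txt(domain):
    return records.get(domain, NXDOMAIN)

print(check_mail_from("bad.example", query_txt))     # reject
print(check_mail_from("no-spf.example", query_txt))  # accept (SPF result: none)
```

One query thus serves double duty: the existence check sendmail already
does, and the start of the SPF evaluation.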

So far, we have added no more DNS load than we currently have.  The
incremental cost of SPF starts when we expand the first DNS mechanism.
If the only DNS mechanism is ptr, that is nearly free, because the
result is already in the cache from the query done before EHLO.

The amplification caused by SPF only starts at the first non-PTR
mechanism expanded in a policy.

Maybe I'm misunderstanding something, but can we not keep all the mx
lookups, etc. in the user syntax, and still have the compiled record be
nothing but a list of IPs?

That would be the best idea.

If you compile the record, you should re-compile it when any info changes.
The mx info could change, as could any include, redirect, or exists
target.  This is why I think it is best if the DNS server handles the
compile.  It can set the TTL of the compiled SPF record equal to the
shortest remaining TTL of any source used.  Then when it expires,
re-compile.
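The TTL rule is just a minimum over the records consulted during
compilation; a one-line sketch:

```python
def compiled_ttl(source_ttls):
    """TTL for the compiled SPF record: the shortest remaining TTL of
    any DNS record used to build it, so the compiled record can never
    outlive the data it was derived from."""
    return min(source_ttls)

# e.g. the MX record has 3600s left, its A records 600s and 7200s:
print(compiled_ttl([3600, 600, 7200]))  # 600
```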

In my case, if my ISP ever publishes an SPF record, I could start using
include.  But they will not tell me when they add another SMTP server.  If I
were using a static compiled SPF record, my mail would bounce, and then I
would re-compile my SPF record.  Not a very good process.

If you want to manually deal with changes when your mail starts to bounce,
that is a business decision.  As someone else said, a cron job could
re-compile each night.  Should be an easy script.  I would use cron until
DNS has a built-in compiler.

Guy

