In <42447730.3010200@ohmi.org>, Radu Hociung wrote:
I'm proposing we count calls to the resolver library. Anything else is
In <4244694F.3000608@ohmi.org>, Radu Hociung wrote:
Let's call it unchallenged, not correct. It's completely up to the DNS
server implementation whether it sends information it wasn't asked for,
but suspects would be useful. bind9 does send out as much info as
possible, but apparently aol's NS servers do not send the additional
info (do nslookup -debug -type=mx aol.com dns-01.ns.aol.com).
Besides, if an MX lookup returns a list of many
long-host-names-as-MTA-servers.com hosts, there may not be enough room in
one UDP packet for the IP addresses of those hosts, and maybe not even
enough room for all the names.
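The packet-size point can be sketched numerically. This is only a back-of-the-envelope approximation (the 512-byte figure is the classic RFC 1035 UDP limit; the hostnames and per-record sizes are rough estimates, not a real packet encoder):

```python
# Rough sketch of the classic 512-byte DNS-over-UDP response limit.
# All sizes are approximations; hostnames are made up for illustration.

HEADER = 12                                   # fixed DNS header
QUESTION = 4 + 1 + len("example.com") + 1     # qtype/qclass + encoded qname

def mx_rr_size(host):
    # compressed owner name (2) + type/class/TTL/rdlength (10)
    # + preference (2) + encoded exchange name (length bytes + root)
    return 2 + 10 + 2 + len(host) + 2

def a_rr_size(host):
    # encoded owner name + fixed fields (10) + 4-byte IPv4 address
    return len(host) + 2 + 10 + 4

hosts = [f"long-host-name-as-mta-server-{i:02d}.example.com"
         for i in range(20)]

size = HEADER + QUESTION + sum(mx_rr_size(h) for h in hosts)
print("MX answers alone:", size, "bytes")     # already over 512 bytes

size_with_additional = size + sum(a_rr_size(h) for h in hosts)
print("With additional A records:", size_with_additional, "bytes")
```

With twenty long exchange names, even the MX answer section alone overflows the UDP limit, so there is no room at all for additional-section A records.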
The DNS server truncates the name list, and then round-robin rotates
them, to give the truncated-out ones an equal opportunity to serve mail
requests. In this case, there would be no additional records, and each
subsequent A query will generate traffic.
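That round-robin behaviour can be sketched like so (a toy model, not real server code; the host names and the two-records-per-packet limit are invented for illustration):

```python
# Toy model of a server rotating an RR set between queries so the
# truncated-out records get a turn in later responses.

from collections import deque

rrset = deque(["mta-a.example.com", "mta-b.example.com",
               "mta-c.example.com", "mta-d.example.com"])

def answer(room=2):
    reply = list(rrset)[:room]   # only what fits in one UDP packet
    rrset.rotate(-room)          # rotate so the rest lead next time
    return reply

first = answer()    # ['mta-a.example.com', 'mta-b.example.com']
second = answer()   # ['mta-c.example.com', 'mta-d.example.com']
```

A client that wants all four addresses ends up issuing further queries, which is exactly the extra traffic described above.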
This calls into question the usefulness of your calculations that showed
gigabytes transferred because of a high number of queries performed, er,
"calls to the resolver library".
It is not a one-to-one mapping of "calls to the resolver library" to
"bytes that traverse the public interface", because of the design of the
DNS protocol.
I'm still trying to figure out if we are concentrating on optimizing the
typical case (normal mail volume) or the atypical case (an SPF-doom
scenario).
One thing that makes the atypical case uninteresting for me is that it
exists ONLY because SMTP lets forgery happen. There will be a
transitioning period where a virus attack that uses the forgery vector
to propagate will be attempted, and it may be a big hit on DNS, but
because the hole has been closed, it won't work or at least won't be as
serious as it would have been with the hole still open (that is, there
will be new problems to solve, rather than revisiting the same ol'
problems again and again). Since the vector of attack is now closed, it
will be useless to attempt to exploit it.
This does not mean that new attack vectors won't be discovered -- such
as an attack against SPF (perhaps indirectly). If that happens,
hopefully the value of anti-forgery will have already been seen, and if
it is difficult, if not impossible, to close that new attack vector,
then SPF will be replaced with some other anti-forgery method. We have
not gone back to gopher just because web pages have increased our
bandwidth costs and required our servers to be beefier.
I am perfectly fine with a solution like RMX because I find much of the
SPF syntax to be sugary.
The syntactic sugar makes the SPF publisher's job simple at the expense
of SPF evaluators being more complex. All these dire predictions of
SPF's failure make it seem like weaknesses were purposely built into the
system, and now we're running out of fingers to stick in the dam. I am
not convinced that anything useful can be done to tip the scales in the
other direction (make the evaluation simpler) without losing the
syntactic sugar that helped put SPF ahead of the other proposals that
were/are on the table. How much of SPF's success-so-far is because
anyone can add their records with less than 15 minutes worth of work
(whether those records are correct or not is another issue)? It's this
simplicity that has gotten SPF the mindshare it has.
Please keep in mind that, to me, "a solution like RMX" includes any and
all systems which just list IP addresses of authorized senders, whether
that's through an IP list encoded as ASCII in an SPF record using ip4
mechanisms, an exists: or a: mechanism that points to a list of valid
IPs, or a new RR like RMX.
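For illustration only (example.com, the outbound host name, and the 192.0.2.x addresses are placeholders), the same "list of authorized IPs" can be published in more than one equivalent shape:

```
; Two ways of publishing the same list of authorized sender IPs:
example.com.            IN TXT "v=spf1 ip4:192.0.2.10 ip4:192.0.2.11 -all"
; ... or indirectly, via an a: mechanism pointing at A records:
example.com.            IN TXT "v=spf1 a:outbound.example.com -all"
outbound.example.com.   IN A   192.0.2.10
outbound.example.com.   IN A   192.0.2.11
```

An RMX-style RR would carry the same information in a dedicated record type rather than in TXT.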
If compiling SPF records to a list of IPs on a regular basis and putting
them in an SPF record is acceptable, then it is acceptable for ANY other
scheme also: compiling some meta-syntax into a list of RMX records, for
example.
All this talk about compiling records puts us back to where we were at
the beginning about needing to do more to/on DNS servers than just add a
record to a zone file, the ease of which is another reason SPF has wide
adoption.
I think there's a lot of concentration on making everyone happy, and not
enough concentration on the actual problem of reducing forgery. If
people who have traditionally used forgery to send their mail are
inconvenienced under a new scheme, then they'll have to change. This is
the way progress works. There is something to be said for backward
compatibility, but not when the deployment of the backwards compatible
solution makes things worse (which is what Radu's numbers show). SPF is
almost TOO open and free-form: it is capable of describing too many
networks, including badly designed ones, and it lets you do many of the
things you shouldn't do just so you don't have to change. As such, it
doesn't provide a migration path to doing things better (using SMTP
AUTH, narrowing your visible outbound mail sources, etc., if those are
viewed as being "better" than not having them).
Unfortunately, we're here now, so we kind of have to live with what's
been provided. SPF is really starting to look like a dog's breakfast.
It does everything if you use it, and yet if you use it, it does
nothing, or sometimes, even worse than nothing.
Andy Bakun <spf@leave-it-to-grace.com>