spf-discuss

Re: Re: It's published!

2004-10-17 15:58:07
In <4172DF23.5987@xyzzy.claranet.de> Frank Ellermann
<nobody@xyzzy.claranet.de> writes:

wayne wrote:

The intent of my limits is to do basically what you say:
Domain owners can count the number of mechanisms and tell if
they are within the limits.

And wizards, and validators.  I'm not sure about your limits.
T-Online has 8 MXs (each with one IP), Hotmail has 4 MXs (each
with 4 IPs).  I found no worse cases for AOL / ATT / GMX / UOL
and some other ISPs.  Whatever that means.

Your "10 MXs ought to be enough for everybody" sounds like the
famous 640 KB.

Well, yeah, except that there has effectively been that limit for the
last 20 years, and it hasn't seemed to cause many problems.

                And your example already squeezed 13 MXs into a
DNS answer (maybe, my nslookup didn't tell me what it actually
did before displaying its result, and I was too lazy to use its
debug options).

Yes, I squeeze 13 MXs into one DNS packet by making sure my MX names
are short and work well with the DNS compression scheme.
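As a back-of-envelope sketch (not a real DNS encoder), here is roughly why 13 short MX names can fit in a single 512-byte UDP answer.  The field sizes follow RFC 1035; the zone name "example.com" and the 3-4 letter host labels are hypothetical stand-ins for short MX names.

```python
# Rough size estimate for a DNS response carrying 13 MX RRs, assuming
# maximal name compression.  Field sizes per RFC 1035; names hypothetical.

HEADER = 12                            # fixed DNS header

def question_size(qname):
    # each dot becomes a length byte, plus one leading length byte and
    # the terminating root byte, plus QTYPE (2) and QCLASS (2)
    return len(qname) + 2 + 4

def mx_rr_size(host_label_len):
    # owner name: 2-byte compression pointer back to the question name;
    # TYPE + CLASS + TTL + RDLENGTH: 10 bytes; MX preference: 2 bytes;
    # exchange: one short label plus a 2-byte pointer to the shared suffix
    return 2 + 10 + 2 + (1 + host_label_len + 2)

qname = "example.com"
used = HEADER + question_size(qname) + 13 * mx_rr_size(4)  # "mx01".."mx13"
print(used, used <= 512)   # 302 True
```

With 4-character host labels the whole answer stays near 300 bytes, comfortably under the 512-byte UDP limit; long, uncompressible exchange names would blow past it quickly.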


"Count directives without ip4 or ip6" is a very simple recipe.

Yes, and that is basically what I propose.
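The "count directives without ip4 or ip6" recipe can be sketched in a few lines.  The mechanism names come from the SPF draft; the parsing below is deliberately simplistic (no macro handling) and the sample record is made up.

```python
# Count the terms in an SPF record that trigger DNS queries: the a, mx,
# ptr, exists and include mechanisms, plus the redirect= modifier.
# ip4:, ip6: and all never need DNS, so they are free.

DNS_MECHANISMS = {"a", "mx", "ptr", "exists", "include"}

def count_dns_terms(spf_record):
    """Count the terms in an SPF record that cause DNS queries."""
    count = 0
    for term in spf_record.split()[1:]:            # skip the "v=spf1" tag
        term = term.lstrip("+-~?")                 # drop the qualifier
        name = term.split(":", 1)[0].split("/", 1)[0]
        if name in DNS_MECHANISMS or term.startswith("redirect="):
            count += 1
    return count

record = "v=spf1 a mx include:_spf.example.org ip4:192.0.2.0/24 -all"
print(count_dns_terms(record))   # 3 -- ip4: and -all cost no lookups
```

A domain owner (or a wizard/validator) can run exactly this count against a published record and compare it to the limit.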

"Use a global timeout of at least 20 seconds" is unreliable:

I wouldn't like a sender policy where my mail is "sometimes"
rejected with a PermError, but at other times it works.  I hate
intermittent bugs.

In all of the SPF drafts, a timeout causes a TempError, not a
PermError.  This means that you are supposed to reject with an SMTP
4xx tempfail code, which should cause your legitimate email to be
queued and resent later.
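A minimal sketch of that mapping, as a receiving MTA might implement it.  Whether Fail and PermError both get a 550 is local policy; the reply codes here are illustrative, not mandated by the drafts.

```python
# Hedged sketch: mapping SPF evaluation results to SMTP reply codes.
# TempError (e.g. a DNS timeout) maps to 4xx, so legitimate mail is
# queued and retried by the sender instead of bouncing.

def smtp_reply(spf_result):
    """Map an SPF result to an illustrative SMTP reply code."""
    if spf_result == "TempError":        # transient DNS trouble
        return 451                       # 4xx: try again later
    if spf_result in ("Fail", "PermError"):
        return 550                       # permanent rejection (policy choice)
    return 250                           # Pass/Neutral/None/SoftFail: accept

print(smtp_reply("TempError"))   # 451
```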

Network and name server failures/overloads can intermittently cause
delays of 20 seconds or more.  On the other hand, a spec that doesn't
allow *any* timeout could cause very real DoS problems.


it is somewhat common to have more than 10 incoming MTAs
(AOL, Yahoo, etc. have hundreds); this is generally done by
creating many A records for each MX domain name

Yahoo uses apparently 4 MXs, each with up to 5 IPs.  But their
load balancing is irrelevant for SPF, that's "4" in your MX
count, "1" in my count of directives, and "1+4" for an overall
DNS query limit.

My limits would also count this as "1" mechanism, just like yours.  My
limits *also* require that the number of MXes checked be limited.
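The three counting schemes in this exchange differ only in what they tally.  Using the Yahoo figures quoted above (4 MX names, each with up to 5 IPs):

```python
# The same "mx" mechanism under the three counting schemes discussed:
# per-directive, per-MX-name, and per-DNS-query.  Figures from the
# Yahoo example above; load-balancing IPs behind each MX don't add queries.

mx_names = 4
ips_per_mx = 5                   # irrelevant to all three counts

directive_count = 1              # "mx" is one mechanism in the record
mx_count = mx_names              # counting each MX name separately
dns_queries = 1 + mx_names       # one MX query, then one A query per name

print(directive_count, mx_count, dns_queries)   # 1 4 5
```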


Back to the 1000 hosts names with one IP, how does this work ?
With virtual hosts like www.xyzzy.claranet.de it's a wildcard,
and the rest is handled by apache.  Ditto SMTP for xyzzy.  But
you're not talking about a wildcard solution, or are you ?

To give a concrete example, the domains www.midwestcs.com,
elginwatches.org, elginwatches.com, trusted-forwarder.org and a few
others all have A records pointing to 206.222.212.234.  If I were
using djbdns, there would be a bunch of PTR RRs under
234.212.222.206.in-addr.arpa, pointing back to the above domain names.

So, no, I'm not talking about wildcard solutions, just lots of
redundant DNS records.


My ISP returns one name for `nslookup -q=ptr 212.82.225.58`,
home.claranet.de, not www.xyzzy or 1000 other names.  What
happens in a "nslookup -q=ptr" if you have 1000 PTR records?
Is anything which doesn't fit in a packet silently ignored ?

Well, if DNS over TCP is supported by everyone (the NS, the resolver,
the firewalls, etc.), then you would get most, if not all, of the PTR
records.
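The mechanism implied here: a UDP answer that doesn't fit in the packet comes back with the TC (truncated) bit set, and a resolver that wants the full PTR set must retry over TCP.  A sketch of that fallback, with the transports stubbed out so nothing here touches a real network:

```python
# Truncation fallback sketch: retry over TCP when the UDP response has
# the TC bit set.  udp_send/tcp_send are injected stubs standing in for
# real transports; a blocked TCP port 53 would leave you with the
# truncated partial answer.

def resolve(query, udp_send, tcp_send):
    """Return the full record set, retrying over TCP on truncation."""
    response = udp_send(query)
    if response["truncated"]:              # answer didn't fit in UDP
        response = tcp_send(query)         # full answer, if TCP gets through
    return response["records"]

udp = lambda q: {"truncated": True, "records": ["partial"]}
tcp = lambda q: {"truncated": False,
                 "records": ["name%d.example.org" % i for i in range(1000)]}
print(len(resolve("ptr-query", udp, tcp)))   # 1000
```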

all of those PTR references should work equally well for the
SPF ptr: mechanism since they all point back to the same
machine.  In the case of forged email, none of them will
work.

And if it's mixed, 1000 bogus PTRs trying to hide a valid PTR,
how does DNS handle this ?

It really doesn't make much sense to have a PTR RR for an IP address
that points to a name that, when looked up, won't return an A RR with
that IP address.  When would this ever happen?

But, even if there is a legitimate reason for this to happen, the 10
PTR limit simply requires that the domain owner not use the SPF ptr:
mechanism.

it will be very rare for domain owners to even have to know
of this limit.

The wizards / validators have to tell them what's going on if
they try this stunt.  You said that 1000 names is a "normal"
case for some web hosters depending on their "zone editors"
(or whatever the name for these tools is).  Some domain owners
probably don't know this.

Yes, having lots of PTR RRs may happen, but in legitimate cases even
the first one will almost certainly work.  I suggested a limit of 10
instead of 1 just in case someone screws up or a name server is down.
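A sketch of the ptr: check with that cap.  Real code would query DNS; here the lookups are injected functions so the logic is testable offline, and the names and addresses are made up.

```python
# ptr: mechanism sketch with a cap on how many rDNS names are examined.
# Only forward-confirmed names count: a PTR name must resolve back to
# the connecting IP before it can match the domain, so 1000 bogus PTRs
# can waste at most `limit` A lookups and never validate a forgery.

def ptr_matches(ip, domain, ptr_lookup, a_lookup, limit=10):
    """Forward-confirm at most `limit` PTR names for `ip` against `domain`."""
    for name in ptr_lookup(ip)[:limit]:        # cap the names examined
        if ip in a_lookup(name):               # forward-confirm the name
            if name == domain or name.endswith("." + domain):
                return True
    return False

ptr = lambda ip: ["mail.example.org"]          # hypothetical rDNS data
a   = lambda name: ["206.222.212.234"]         # hypothetical forward data
print(ptr_matches("206.222.212.234", "example.org", ptr, a))   # True
```

In the forged case the forward confirmation fails for every name, so the result is False regardless of how many PTRs the attacker publishes.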


The limits are fairly easy for domain owners to understand

One overall DNS query limit is also "fairly easy".  But a PTR
with 1000 names cries for an explicit CAVEAT.  The draft says
only "not recommended" without mentioning this reason.

Well, the ptr: mechanism isn't recommended for many reasons.  The
folks running the various NICs (ARIN, RIPE, APNIC, etc.) don't want a
lot more load on their rDNS name servers.  And in cases of forged
email, a domain that uses ptr: is likely to cause PTR RR lookups
against some broken rDNS server.



If these limits (in whatever form) are generally useful for SPF
implementations (and I think they are) I want them in the draft
proposed as experimental RfC.

Well, you need to talk to Meng and Mark about that then.

I submitted almost the exact same text to Meng and he rejected it.  As
I mentioned before, I talked about the DoS problems in many messages,
including:
http://archives.listbox.com/spf-discuss(_at_)v2(_dot_)listbox(_dot_)com/200312/0393.html
http://archives.listbox.com/spf-discuss(_at_)v2(_dot_)listbox(_dot_)com/200404/0286.html
http://archives.listbox.com/spf-discuss(_at_)v2(_dot_)listbox(_dot_)com/200405/0083.html
http://www.imc.org/ietf-mxcomp/mail-archive/msg01263.html
http://www.imc.org/ietf-mxcomp/mail-archive/msg01944.html

Doug Otis posted a rant to the MARID list at least once an hour that
contained complaints about these DoS problems.

The problem isn't that Mark and Meng didn't use my exact verbiage
about how to prevent DoS problems.  It isn't even that they did
nothing to create effective limits of their own.  The problem is that
they created a draft that, in the security considerations section,
pooh-poohed the idea that SPF clients could be used for DoS attacks.

Mark and Meng didn't listen.  Or, at least, they didn't listen to me.
Maybe they will listen to you.


-wayne

