Re: Use of New Mask Mechanism
2005-03-26 16:49:17
David MacQuigg wrote:
At 11:53 AM 3/26/2005 -0500, Radu wrote:
David MacQuigg wrote:
Masks should be added to a list of IPs only if that list is already
too long to fit in one 512-byte DNS response message. In this case,
a mask may allow the SPF check to return a FAIL without initiating a
TCP connection to retrieve the full DNS message.
The compiler currently has an option to specify the max length of an
output SPF string, in case a name server has a lower limit for the
length of TXT records.
So the number of characters allowed by the name server software will
dictate the max length of string that the compiler is configured to
produce. Since the mask has to be in the first record, it will cause
the compiler to shorten the number of bytes used for mechanisms such
that the top record, including the mask and any other modifiers,
fits within the server's TXT record limit.
I am afraid it won't be so easy to provide a clear number like "450".
Bind insists on providing the NS records for the zone with every
response. The more you have, the less room is available for the TXT
record. Also when the domain name is longer, that takes some space
away too. E.g., in the response packet of _s4.ohmi.org, the name takes
up 13 bytes, but for a _s4.longer-domain.name.com, it takes more
space, leaving less available for the TXT record.
It looks to me like it will take some work to figure out how many
bytes are available for the TXT record, and what all the variables are.
At ohmi I use 3 name servers and BIND, and the biggest TXT record I
can fit into a 512-byte UDP packet is 357 bytes. YMMV, but it proves
that even a seemingly conservative limit (450 or 400) is not always
appropriate. If I had more slave name servers or a longer domain name,
my usable TXT space would be even less.
So I suspect that for sites that compile their record with a cron job,
they should find a value for the -len parameter that works on their
system. Alternately, the compiler should automatically figure it out.
If we want the compiler to run independently of any nameserver, and
avoid the problems we will encounter with patches or upgrades, we need a
very simple procedure to determine the record length. If your first SPF
record is 200 characters, and the resulting DNS packet is 250, then you
know the "overhead" is 50 characters, and you should set the maximum
length for the compiler at (512 - 50).
I'm not familiar with the operational details of nameservers. Is the
above procedure something we can recommend?
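The procedure above boils down to simple arithmetic. Here is a minimal sketch of it (the function name and the assumption that overhead stays constant as the record grows are mine; as the rest of this message argues, that assumption is exactly what breaks down in practice):

```python
# Simple overhead estimate described above: subtract the TXT record length
# from the observed response packet size, then budget the remainder of the
# 512-byte UDP limit for the SPF string.

UDP_LIMIT = 512  # maximum classic DNS/UDP payload without TCP fallback


def max_spf_length(txt_len: int, packet_len: int) -> int:
    """Estimate the longest TXT record that still fits in one UDP response."""
    overhead = packet_len - txt_len  # headers, question, NS/additional sections
    return UDP_LIMIT - overhead


# Example from the text: a 200-character record yielding a 250-byte packet
# implies 50 bytes of overhead, so the compiler cap would be 512 - 50 = 462.
```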
It's definitely not that simple. Allow me to explain:
1. The DNS server patch should not break other TXT records. Existing
non-SPF1 TXT records should be unaffected.
2. Must take into account design differences between the patched
(authoritative, master) server and other (unpatched) authoritative,
*slave* DNS servers.
3. Must take into account design differences between the patched
(authoritative, master) server and other (unpatched) non-authoritative,
*caching* DNS servers.
For instance, some authoritative servers include the "authority records
section" for the zone with every response. Some do not. BIND does.
Some authoritative servers also include "additional records section"
when they include the "authority records section". They put the IP
addresses of the authoritative name servers listed in the "authority
records section" in the additional section. BIND does.
Some caching servers remove the "additional records section" if it
exists. BIND does.
I think some caching servers also remove the "authority records
section". BIND does not.
There are other combinations, but the upshot is that other servers in
the middle (between the authoritative server which does the compiling
and the caching servers that MTAs use) add some information, some remove
information, some remove some information and add other information.
Some forcibly add information even if this forces TCP to become
necessary. Maybe some are more reasonable and don't do this.
Some servers do load balancing, and from a list of records remove some,
and implement some round-robin scheme to expose all records evenly. E.g.,
if a caching server receives a list of 10 MX records from an authoritative
server, it may present only 3 at a time to the client that did the
query, but it's a different 3 for each query. I do not know if any
server does load balancing on TXT records. It would seem dumb, but that
of course is not a guarantee.
Also, some servers do string compression, and reuse strings in the
datagram (there's a mechanism in the standard for this), but others
might not. It is not a required feature by RFC1035.
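For reference, the saving from that compression mechanism (RFC 1035 section 4.1.4 pointers) can be sketched; this is a rough illustration with hypothetical helper names, not an implementation:

```python
# An uncompressed domain name on the wire costs one length byte per label
# plus a terminating zero byte; a compression pointer back to an earlier
# occurrence of the same name costs a flat 2 bytes.

def uncompressed_len(name: str) -> int:
    """Wire-format length of a domain name without compression."""
    labels = name.rstrip(".").split(".")
    return sum(1 + len(label) for label in labels) + 1  # +1 for root byte


def compression_saving(name: str) -> int:
    """Bytes saved per repeated occurrence replaced by a 2-byte pointer."""
    return uncompressed_len(name) - 2


# "example.com" costs 13 bytes uncompressed (1+7, 1+3, terminator), so every
# repeat replaced by a pointer saves 11 bytes. Whether a given server takes
# advantage of this is optional, which is why the worst case must assume not.
```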
Some caching server might do an ANY query when they receive a TXT query.
This means that the answer space is now shared with the MX record
associated with the domain, the A record, and so on. I don't know if
this actually happens or if it is allowed by the RFC. I should look into it.
I think the key to finding the maximum number of bytes that can be
published as an SPF policy involves the following variables:
- how long is the host name that the TXT pertains to.
- how long is the longest possible "authority records section" for this domain.
- how long is the longest possible "additional records section"
associated with the "authority records section".
- what's the total length of the other non-SPF TXT records that the host
has.
I think the available space is (512 - sum of the above - DNS headers).
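The subtraction described above can be sketched as follows. The section lengths are inputs the compiler would have to measure or bound for the worst case; the fixed header size is from RFC 1035, while the per-record overhead constants are my assumptions for illustration:

```python
# Rough worst-case budget for the SPF TXT record, given the variables listed
# above. All section lengths are caller-supplied worst-case estimates.

DNS_HEADER = 12   # fixed DNS message header size (RFC 1035)
UDP_LIMIT = 512


def spf_budget(qname_wire_len: int, authority_len: int,
               additional_len: int, other_txt_len: int) -> int:
    """Bytes left for the SPF TXT record in a single 512-byte UDP response."""
    # Question section: wire-encoded name + QTYPE (2) + QCLASS (2)
    question = qname_wire_len + 4
    # Overhead of the TXT RR itself (assumed): 2-byte compressed name pointer
    # + TYPE/CLASS/TTL/RDLENGTH (10) + the leading character-string length byte
    txt_rr_overhead = 2 + 10 + 1
    used = (DNS_HEADER + question + authority_len + additional_len
            + other_txt_len + txt_rr_overhead)
    return UDP_LIMIT - used


# E.g., a 14-byte wire name, 100 bytes of authority records, 80 bytes of
# additional records, and no other TXT records leaves 289 bytes of policy.
```

If the result comes out near zero or negative, that is the "useless amount of space" case mentioned below, where the compiler should fall back to a single long record and warn the admin.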
If it works out that this leaves a useless amount of space, the compiler
might just publish a record that requires TCP and give a warning to the
admin about this. In that case, it might as well put all the SPF
information in one TXT record.
Like I said, a lot more work might be needed to figure out this one.
Essentially you need a worst-case response packet builder routine.
But at least an SPF compiler has a much better chance at figuring out
all the variables and producing an SPF record short enough to get to the
destination reliably, guaranteed. That is, a much better chance than a
human manually writing an arbitrarily long SPF record while ignoring the
variables involved in its transport from the authoritative server to the
destination. And with 512 bytes, there's not much room for error.
These are just some of the complications as I understand them, and I
don't understand all the workings of the DNS system. Perhaps reality is
not as complicated as I make it seem, but I just know one thing for sure
(I verified it myself): "Assumption is the mother of all fuck-ups".
Radu.