spf-discuss

RE: RE: rr.com and SPF records

2005-03-21 12:05:35
Dave,

At 07:25 PM 3/20/2005, you wrote:
Alan,

Thanks for your description of DNS, and your willingness to help with my dumb questions. I guess we lost track of the question somewhere in this long thread. The question is -- Why doesn't SPF make more extensive use of the built-in recursion capability of DNS?

I suppose the short answer is that SPF is a set of agreed-upon rules between those who administer DNS for zones participating in SPF. These rules ride on top of the rules which make DNS servers operate. As William also noted in his reply (William, thank you for your kind words), I am not sure that DNS really operates in the way you think it may.

I think that the longer answer comes from looking at what DNS does and why it is important not to cause harm to the way DNS itself works. In a world where everyone is honest, good and does the right thing, none of this would be a problem, but then again, in such a world we would also not need SPF.

DNS is really good at finding specific resource records (RRs) for a zone it controls and returning those records to the requesting party quickly. If you demand that the DNS server do more work at the remote side than providing simple answers to queries, you will slow down the performance of that server in its primary role: answering other Internet DNS servers' questions about the zones for which it is authoritative.

Put another way, a requesting party's DNS server has (or should have) the same basic capabilities as the DNS server to which a request is being made. Since the requesting party requires more than one query to arrive at their final answer, what you seem to be asking is that the target DNS server do the requesting DNS server's work for it, thereby burdening the target with more work, and work that it is perhaps not as good at doing through no fault of its own (yes, I am being deliberately obtuse and vague here).

I realize that your question is in the spirit of making things go faster and more efficiently, but opening the door you seem to be pointing to also creates the possibility that other rather important things might not work so well.

DNS is a critical core service for the Internet, which is why so many are working to figure out how to strengthen it and secure it even better to prevent others from exploiting any DNS weaknesses in ways that may cause significant harm to the operation of the many other services that run on the Internet and depend upon DNS. Sometimes weaknesses can come from implementation and configuration rather than being any fault of code per se. Your suggestion, while clearly offered in the spirit of doing something right, might cause harm to the larger and arguably far more important cause of protecting the DNS infrastructure.

I echo William's suggestion that you examine the specification for DNS. It is really not all that hard to grasp once you read it a few times (ok, so it took more than a few times for me to have the vague grasp I may have and I actually had to sit down and write code to understand some basic truths that would perhaps be obvious to others, but...).

This question was motivated by the struggle I'm seeing over the question of how many DNS queries to allow. The rr.com example had difficulty fitting within the allowed 10 queries. Radu suggests "flattening" all records to just a list of IPs. That might be inconvenient for a domain that wishes to "delegate" all responsibility for these records to their subdomains.

While I appreciate the validity of Radu's suggestion, as it is the way we implement SPF here, this sort of thing may not work well in all environments. I think that the progenitors of SPF already understood that truth and accounted for it in the many environmental variations seen in SPF's current spec, which is likely why all the other options exist. Part of the problem here is that some site environments are a bit more dynamic in nature than others. Reasons for this might include growth or other business factors that make nailing records to an IP or IP class a maintenance problem.
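As I read the current spec, the 10-query ceiling Dave mentions counts only the SPF terms that themselves trigger DNS lookups (include, a, mx, ptr, exists, and the redirect modifier); ip4/ip6 terms cost nothing. A rough sketch of that counting, purely my own illustration and not code from the spec:

```python
# Hypothetical sketch: count the SPF terms that cost a DNS lookup.
# The spec limits these lookup-causing terms to 10 per check.
LOOKUP_TERMS = {"include", "a", "mx", "ptr", "exists", "redirect"}

def count_dns_terms(spf_record: str) -> int:
    """Count mechanisms/modifiers in an SPF record that trigger a DNS query."""
    count = 0
    for term in spf_record.split():
        # Strip any leading qualifier (+, -, ~, ?), then isolate the
        # mechanism name from its ":domain", "/cidr", or "=value" part.
        t = term.lstrip("+-~?")
        name = t.split(":", 1)[0].split("/", 1)[0].split("=", 1)[0]
        if name in LOOKUP_TERMS:
            count += 1
    return count

print(count_dns_terms("v=spf1 include:_spf.example.net mx a:mail.example.com -all"))  # -> 3
print(count_dns_terms("v=spf1 ip4:192.0.2.0/24 -all"))  # -> 0
```

The second record shows Radu's flattened form: all ip4 terms, zero lookups, at the cost of hand-maintaining the address list.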

Consider that if one makes the SPF environment easier for administrators to adopt, they will be more likely to do so (human nature). Once they understand the value of SPF in their environment, I think that it is also true they will be more likely to want to maintain and tune their records more aggressively so as to make the implementation most efficient for their environments.

Perhaps in the case of RR we have a day-one issue that demands more aggressive administration requirements out of the door, perhaps not. I don't have the visibility or time to investigate that myself, and I sincerely doubt the good folks at RR would even want my help, nor would I presume to offer suggestions unless asked; but even so, I think that there are acceptable answers to appropriately fit SPF records into nearly any environment.

Even if this turns out not to be the case here, I am certain that there are some on this list with the brain power to find an answer which enhances the current spec to handle even more environmental variations without breaking other things (I am also fairly sure that does not include me, though I try to offer what I can from an SPF record publisher's perspective).

The seemingly obvious solution is that rr.com answer all queries to nameservers in any of their subdomains, using the recursion mechanism in DNS. That way, the DNS records maintained by each subdomain can be very simple, and you don't run into any 10-query limit at rr.com. I must be missing something, because it seems too simple.

That may seem the case, but when your solution dictates changing internal business rules for a company, the solution is less likely to be adopted. For whatever reason (probably in part because they operate enormous chunks of IP address space), RR has chosen the environment in which they operate their DNS, and it works for them. Asking any large ISP to change its established and working environment is probably not the mindset we want to have; rather, looking at how the SPF solution can be successfully incorporated or adapted to work within their existing infrastructure is perhaps better.

Large scale adopters like RR and other large ISPs and companies are the best chance of success for SPF's general adoption by the Internet community as a whole, so finding the least invasive ways to implement SPF for those larger adopters seems to offer the best chance at success for SPF. One must look at the impact in one's own environment first; if it works, great, but then one should look at the overall scalability of the solution being implemented to ensure that as one's own environment grows, the solution scales with it. I think that Meng and the early contributors to the SPF standard understood this and did it well. As with so many things, nothing is perfect, but it seems a solid first cut that is remarkably flexible in an environment where everyone seems to do things a bit differently.

Your description suggests that only the client's nameserver should do recursive queries. That would not accomplish the purpose of minimizing traffic across the Internet. On the other hand, I find on p.192 in Stevens - "most nameservers provide recursion, except some root servers". That makes sense, because you wouldn't want to tie up the .com server resolving queries for every sub.domain.com on the planet. It is quite reasonable, however, to tie up the rr.com nameserver in resolving queries for *any* of its subdomains. Better they do it than burden the client. Also, they only have to query their subdomains once a day, then they can provide answers directly out of their cache.
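Dave's last point — query the subdomains once, then serve repeats out of the cache until the answer's TTL expires — is the core of how resolvers cut repeated traffic. A minimal sketch of TTL-based caching (my own illustration; a real resolver honors the TTL carried in each DNS answer rather than one chosen by the caller):

```python
import time

class TinyDnsCache:
    """Toy illustration of TTL-based caching, as a resolver at rr.com
    might cache its subdomains' answers and serve repeats from memory."""

    def __init__(self):
        self._store = {}  # name -> (answer, absolute expiry time)

    def put(self, name, answer, ttl):
        # Remember the answer until `ttl` seconds from now.
        self._store[name] = (answer, time.monotonic() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None          # never seen: must query the authority
        answer, expires = entry
        if time.monotonic() >= expires:
            del self._store[name]
            return None          # stale: must re-query the authority
        return answer            # fresh: answer straight from cache
```

With a one-day TTL, a busy name needs exactly one upstream query per day; everything else is a local memory lookup.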

I suppose it is a matter of perspective. As to accomplishing "the purpose of minimizing traffic across the Internet", where is the minimizing of the traffic really coming from? The reduction in large volumes of faked email messages containing large images and other "payloads" or potential reductions in rather compact UDP packets for some environments? Obviously, it is always best to think about how to reduce traffic where possible, but only enough to make sure the job the traffic is being generated to accomplish is done properly.

With due respect to Mr. Stevens' work, all name servers following the standards for DNS have the *capability* to provide recursion, but very certainly, *not* all should *offer* this capability for general Internet access.
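In practice the capability/offering distinction comes down to an access-control decision: the same daemon answers authoritatively for the whole Internet but performs recursion only for a trusted client list. A sketch of that policy check (the networks here are hypothetical, purely for illustration):

```python
import ipaddress

# Hypothetical "trusted" networks: loopback plus an internal range.
TRUSTED_NETS = [
    ipaddress.ip_network("127.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def allow_recursion(client_ip: str) -> bool:
    """Offer recursion only to local clients; everyone else gets
    authoritative answers only, as described above."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in TRUSTED_NETS)

print(allow_recursion("192.168.1.5"))   # internal client -> True
print(allow_recursion("203.0.113.9"))   # random Internet host -> False
```

Production name servers express the same policy in configuration rather than code, but the effect is identical: outsiders cannot use your server as their general-purpose resolver.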

Checking DNS logs, you would be surprised to learn how many folks would love to use your name server to do all their resolving work while sending out large quantities of unsolicited email. That way, their own ISP's resolving name servers have no visibility into the activity, which denies the ISP any ability to act against a customer attempting to engage in that kind of behavior.

In fact, some very large and well organized operations of less than stellar repute run their own rogue DNS name servers to allow for exactly that sort of behavior. Some other folks actually maintain lists of such servers so as not to even respond to their requests. This is called a DNS blackhole list, something I personally don't care for, because like so many other things, its proper use also implies the responsibility of properly managing it (with power comes responsibility). One wonders how much of the existing IP space is blocked inappropriately at various DNS sites because the original miscreants who caused the need for blocking have long since abandoned it, but I am digressing.

Allowing general access to DNS servers that guide local users out to the Internet invites less honorable individuals, groups, or perhaps even certain nations to gain direct access to the caches (which William mentioned in his message) that make the DNS system work so well, and to affect them in negative ways. One really negative outcome I am tap dancing around is something called DNS cache poisoning. A very public example of that was demonstrated back around 1997, when an individual exploited a weakness in the system to effectively take control of substantial chunks of the Internet for several hours or more, thereby very graphically demonstrating the weakness. That event was a pretty darn big wake up call as to the importance of and need for security in DNS.

In today's environment, that kind of thing would be catastrophic to a great number of major industries that depend upon the Internet working as it should and generally does. Consider banks, which are currently engaged in the very serious matter of keeping their clients and their funds safe in the Internet environment. Can you imagine the success levels the bad guys might have in simply redirecting LargeExampleBank.Com to the address of their choosing? For the bad guys, this is probably something like their own equivalent of the Holy Grail. If they were ever very successful in doing this, there would be much damage to the Internet's credibility as regards its ability to reliably handle any transactions. That blow might well crush the perceived usefulness of the Internet for businesses as a common platform for transactional processing and thus end the general acceptance of the Internet. Personally, I think that is a bad thing, and while I hate to bring up these "the sky is falling" scenarios, I suppose it is warranted to illustrate a point.

Given all the above, and getting back to your question, a few extra reads done within a very efficient service designed for just that sort of thing is not really a very high price to pay to keep very bad things from happening in our world.

I hope my question (or the source of my confusion) is a little more clear now.

-- Dave


*****************************************************************
* David MacQuigg, PhD          * email: dmq'at'gci-net.com      *
* IC Design Engineer           * phone: USA 520-721-4583        *
* Analog Design Methodologies                                   *
*                              * 9320 East Mikelyn Lane         *
* VRS Consulting, P.C.         * Tucson, Arizona 85710          *
*****************************************************************

Again, I hope I am clarifying my own remarks, answering your question, and making some sense (and hopefully not being too melodramatic).

Best,

Alan Maitland
WebMaster(_at_)Commerco(_dot_)Net
The Commerce Company - Making Commerce Simple(sm)
http://WWW.Commerco.Com/


