Thanks Ned. Excellent info and insight.
I do have a few follow-up questions related to this:
2) Use multiple DNS servers.
An IBM solution suggested making sure you have additional DNS servers
to query.
Well, sure. Having at least one secondary DNS server is pretty much
mandatory, and they need to be geographically separate. I note in passing
that interlink.com has two servers and they appear to be on separate
networks:
ns5.ecsecure.com. [22.214.171.124] [TTL=86400]
ns6.ecsecure.com. [126.96.36.199] [TTL=86400]
One of the confusing issues about this, no doubt due to a misunderstanding
on my part, is the distinction between having multiple DNS servers and
primary DNS recursive lookups.
First a disclaimer: DNS operations are not exactly my primary area of
expertise either. Hopefully what I say here is correct; if it isn't,
someone else will chime in and correct me.
With that said...
Be very careful here with your terminology. In the DNS world a "primary" server
is one that provides authoritative information for one or more domains. A
"secondary" is a slave that periodically transfers information from the
primary and makes it available for queries.
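For concreteness, here is roughly what that distinction looks like in
BIND's named.conf (zone name and address are hypothetical, and the two
stanzas would live on two different servers):

```
// On the primary server: it holds the authoritative zone file itself.
zone "example.com" {
    type master;
    file "db.example.com";
};

// On a secondary server: it periodically transfers the zone from the
// primary's address and answers queries from its copy.
zone "example.com" {
    type slave;
    masters { 192.0.2.1; };   // address of the primary
    file "slaves/db.example.com";
};
```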
The servers for a given domain are specified by NS records in the "upper"
domain. So the way this works is that a resolver starts at the top of the tree
and walks down using NS records at each level to find the servers below. If
there are multiple NS records I believe the approach is to pick one at
random and if that doesn't work try another. This may also depend on the
resolver implementation.
Caching of course eliminates the need for many of these queries and makes the
load at the upper levels manageable. There's also a bunch of tricky stuff done
at the very top to make things sufficiently performant while allowing multiple
providers - I know very little about all this magic.
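A rough sketch of that walk, with a made-up in-memory delegation table
standing in for the real DNS (an actual resolver sends queries to each
server in turn rather than reading a dict):

```python
import random

# Hypothetical delegation table: maps a zone to the nameservers that
# the parent zone's NS records delegate it to.
DELEGATIONS = {
    ".": ["a.root-servers.net"],
    "com.": ["a.gtld-servers.net", "b.gtld-servers.net"],
    "example.com.": ["ns1.example.com", "ns2.example.com"],
}

def resolve(name):
    """Walk from the root down, following NS delegations label by label."""
    labels = name.rstrip(".").split(".")
    servers = DELEGATIONS["."]          # start at the top of the tree
    for i in range(len(labels) - 1, -1, -1):
        zone = ".".join(labels[i:]) + "."
        if zone in DELEGATIONS:
            # Multiple NS records: pick one at random; a real resolver
            # retries a different server if the chosen one fails.
            servers = [random.choice(DELEGATIONS[zone])]
    return servers[0]   # the server to ask for the final answer

print(resolve("www.example.com"))
```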
How do I best ask this? Because again, I am not a DNS admin or a server
expert.
Well, I guess I qualify as an admin since I handle a bunch of primary and
secondary domains. But again, I'm no expert.
I guess the question is, can the same results be expected with:
1) A server with multiple uplinks, versus
I'm afraid this exceeds my level of expertise. I use bind but I don't have a
multihomed environment so I don't know how it or any other server
implementation handles being multihomed.
2) A multiple-server list?
Applications typically don't have a full resolver built in that's capable of
walking the DNS tree. Rather, they have a so-called "stub" resolver that is
given a list of full resolvers to send queries to. The stub resolver builds a
query and sends it to one of the resolvers, gets back a result and decodes it.
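The stub resolver's half of that exchange is mostly just packet encoding.
A minimal sketch of the query side, assuming the standard RFC 1035 wire
format (the helper name is mine):

```python
import struct

def build_query(name, qtype=15, txid=0x1234):
    """Encode a minimal DNS query (qtype 15 = MX).
    Header: transaction ID, flags (RD=1 asks the full resolver to
    recurse on our behalf), QDCOUNT=1, then the question section."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

pkt = build_query("example.com")
# The stub resolver would now send `pkt` over UDP to port 53 of one of its
# configured full resolvers and match the reply by transaction ID.
```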
I guess your statement above about having geographically separate servers
makes all this work better by increasing the odds of getting a result.
Right. Of course geographic separation doesn't matter when the problem is that
a server is down, but it can save you when a link is down.
But it was my impression that when you query a primary server, if the
answer is not available in the zone and not currently cached, the server
will query its uplinks. No?
Your terminology is confusing here. I think by "primary server" you mean "full
resolver" and by "uplink" you mean "servers for uplevel domains". If so, then
yes, this is more or less how it works, but it works its way down from the
lowest uplevel entry that the resolver has cached.
You see, for my company's SMTP server, I have:
188.8.131.52, which is where I have my ns.santronics.com primary DNS
server,
OK, but that has nothing to do with whether or not you're using that machine as
your full resolver. For example, the primary server for mrochek.com is
mauve.mrochek.com, but DNS queries on that machine are actually forwarded to a
completely different system for resolution. I believe it's considered good
practice not to have your DNS primaries or secondaries performing general
DNS resolution.
and I have as forwarders the UUNET servers:
I had the impression this handled the uplink queries when the primary did
not have the information.
Maybe. It would depend on how you have things configured. Forwarders are
basically used to offload DNS processing from one machine to another.
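In BIND terms that offloading looks something like this (addresses are
hypothetical):

```
// Sketch of forwarding in BIND's named.conf. With "forward only" this
// server does no tree-walking itself; every query it can't answer from
// cache or its own zones is handed to the forwarders.
options {
    forwarders { 198.51.100.1; 198.51.100.2; };
    forward only;   // "forward first" would fall back to normal resolution
};
```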
I just happened to see this SERVFAIL when I was testing this customer's
db.usinterlink.com MX record against 184.108.40.206 via Windows'
NSLOOKUP.
I was assisting him remotely from home and didn't see this SERVFAIL against
the bellsouth.net server:
NSLOOKUP -query=mx -debug db.usinterlink.com ns.santronics.com
NSLOOKUP -query=mx -debug db.usinterlink.com dns.msy.bellsouth.net
First one returns SERVFAIL, second one NOERROR.
You might consider clearing the cache on your home server and seeing if
that changes anything.
I was able to send him a test message because the SMTP server was finally
able to get to the second UUNET server, and thus fall back to a successful
A record lookup.
But the situation got me wondering what was wrong or different between the
two, and also, if I or other customers didn't have a second DNS server
set up for SMTP, whether it's something to worry about.
4) Ignore SERVFAIL?
Some just said that the SMTP client should be looking at SERVFAIL as a
permanent error.
Bad idea IMO. Configuration glitches happen, and when they do you
don't want to bounce mail to the domain unnecessarily. Most of the
time these problems get fixed and the mail goes on through with
only a small delay.
I agree. I was wondering, and now realize that it's probably wrong to jump
the gun with this, if it would make sense to do an A record lookup as a
fallback.
I've actually seen cases where MX record queries got a SERVFAIL but an A record
query got a successful result. (I've always believed this is due to the server
being down or misconfigured and having one record type cached but not the
other, but I've never been able to track down the specific cause.) This stuff
gets very complex because DNS servers do tricky stuff like piggyback A record
information on MX queries, making problems hard to isolate.
But this doesn't mean that you should do this: Nothing prevents someone from
having an MX for foo.example.com pointing to a completely separate
mail.example.org while having a SMTP server running on foo.example.com that
silently eats everything sent to it. And yes, such a setup would be stupid and
dangerous, but people do stupid and dangerous things all the time.
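To tie points 2) and 4) together, a sketch of how an SMTP client might
keep those cases separate (the helper names and lookup interface are mine,
not from any particular MTA): SERVFAIL is a temporary lookup failure to
retry later, while an answer with *no* MX records triggers the implicit-MX
fallback to the domain's A record per RFC 5321.

```python
# Hypothetical sketch: separate SERVFAIL (temporary trouble -> requeue)
# from "no MX records" (fall back to the A record, RFC 5321 implicit MX).

SERVFAIL, NOERROR = "SERVFAIL", "NOERROR"

class TemporaryFailure(Exception):
    """Raised so the caller requeues the message instead of bouncing it."""

def pick_targets(domain, query_mx):
    """query_mx(domain) -> (rcode, [(preference, host), ...])"""
    rcode, records = query_mx(domain)
    if rcode == SERVFAIL:
        # Don't guess with an A lookup and don't bounce; retry later.
        raise TemporaryFailure(f"MX lookup for {domain} failed; retry later")
    if not records:
        return [domain]                   # implicit MX: try the A record
    return [host for _, host in sorted(records)]  # lowest preference first
```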