I think there are interesting things happening in DNS. I wrote a not-very-good
paper for AUUG a few years back noting a DNS error rate above 10%
for the mirror site I do stats on.
Reviewing the figures for yesterday, I get 9.75% unresolvable, which is pretty
close to Bill Manning's figure.
But then I checked over the last 116 days since the start of the year. I find
that for a deployed site, logging IP and DNS name into CLF format (so I can
use analog) for ftp, rsync and www, I get:
avg=15.288431%, lo=4.213000%, hi=33.265000%
I think Bill is saying what really exists in DNS. I'm saying what a box
deployed in the field can expect to see. It's pretty damn variable, and it's
a lot worse than the DNS records themselves would suggest. Remember that
DNS is time-bound with timeouts in client code, uses UDP, and is subject to
the same kind of loss issues as the general datapath.
(this is on a client base of around 3000 hosts, weighted for Australia/NZ)
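For the curious, here is a minimal sketch (not the actual script I use) of how daily unresolvable rates like the above could be pulled from CLF logs. The assumption is that a remote host logged as a bare IPv4 literal means the reverse lookup failed or timed out; the log lines and function name are illustrative.

```python
import re
from collections import defaultdict

# A host field that is a bare dotted-quad IP means the resolver
# never produced a name for it (failure or timeout).
IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def daily_unresolved_rates(clf_lines):
    """Return {date: percent of hits whose remote host never resolved}."""
    totals = defaultdict(int)
    unresolved = defaultdict(int)
    for line in clf_lines:
        fields = line.split()
        if len(fields) < 4:
            continue
        host = fields[0]
        # CLF timestamp looks like [17/Apr/2002:10:15:00 +1000];
        # keep just the day portion.
        date = fields[3].lstrip("[").split(":", 1)[0]
        totals[date] += 1
        if IP_RE.match(host):
            unresolved[date] += 1
    return {d: 100.0 * unresolved[d] / totals[d] for d in totals}

rates = daily_unresolved_rates([
    '10.0.0.1 - - [17/Apr/2002:10:15:00 +1000] "GET / HTTP/1.0" 200 512',
    'host.example.com - - [17/Apr/2002:10:16:00 +1000] "GET / HTTP/1.0" 200 512',
])
avg = sum(rates.values()) / len(rates)
lo, hi = min(rates.values()), max(rates.values())
```

Run over the per-day rates for the whole period, the avg/lo/hi summary above falls straight out of the resulting dict.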
George Michaelson | DSTC Pty Ltd
Email: ggm(_at_)dstc(_dot_)edu(_dot_)au | University of Qld 4072
Phone: +61 7 3365 4310 | Australia
Fax: +61 7 3365 4311 | http://www.dstc.edu.au