On Fri, Jan 7, 2011 at 8:19 AM, Chris Lewis <clewis(_at_)nortel(_dot_)com> wrote:
> It's nice to see public experiments in IPv6 DNSBLs, but I really have to
> wonder about the usefulness of it at the present time. Of the hundreds of
> DNSBLs that exist, only a dozen or two block that much on their own. Couple
> that with the dearth of IPv6 MTAs, and I can't help thinking that such DNSBLs
> are essentially useless except as proof-of-concept.
Even having proof-of-concepts is valuable -- otherwise, everybody will
wait for everybody else. Chicken, meet Egg :)
There has been quite some discussion on whether (more or less) arbitrary
limits should be chosen ("only allow /64" etc.), and some proposals have
appeared.
I have now implemented an extremely simple proposal (note that this is
experimental only and not useful for production purposes). The basic
idea is that the server will *always* return multiple answers (up to three):
1) The listed entry "below" the one requested
2) The listed entry "above" the one requested
3) If available, a "matching" entry
If two of the above overlap, only one will be returned. In general, at
least two RRs should always be returned. Each of the responses will
have a netmask, and additional data (i.e., trust levels, reputation data).
With that information, an "intelligent" application / query library can
greatly reduce the number of lookups required (e.g. when a spammer
enumerates all IPs in an IPv6 /64). On the other hand, this makes it
mathematically feasible to enumerate a complete DNSxL zone -- but I do
not consider this an issue, at least not for dnswl.org.
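To illustrate how a query library might exploit the extra RRs, here is a
minimal sketch in Python (the server itself is Perl; the class, the
method names and the learn/lookup split are my own invention, not part
of the experimental server):

```python
import ipaddress

# Sketch of a client-side cache. Every response describes the nearest
# listed entries "below" and "above" the queried IP, so the gap between
# them is known to be unlisted and later queries falling into that gap
# need no DNS round-trip at all.
class DnswlCache:
    def __init__(self):
        # (first_ip_as_int, last_ip_as_int, listed_range_or_None)
        self.known = []

    def learn(self, below, above, match=None):
        """Record one multi-RR response (args are ipaddress.IPv4Network)."""
        for net in (below, above, match):
            if net is not None:
                self.known.append((int(net[0]), int(net[-1]), str(net)))
        if match is None:
            # everything strictly between "below" and "above" is unlisted
            gap_lo, gap_hi = int(below[-1]) + 1, int(above[0]) - 1
            if gap_lo <= gap_hi:
                self.known.append((gap_lo, gap_hi, None))

    def lookup(self, ip):
        """(listed?, range) if the cache already knows, else None."""
        n = int(ipaddress.IPv4Address(ip))
        for lo, hi, entry in self.known:
            if lo <= n <= hi:
                return (entry is not None, entry)
        return None  # cache miss -> a real DNS query is needed
```

A spammer walking a /64 would then cost the client one lookup for the
whole unlisted gap instead of one lookup per IP.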
The experimental server supports TXT and SRV records with essentially
the same information. It is single-threaded and not very efficiently
coded. But please feel free to test it:
:~> dig @ns-exp1.dnswl.org -t txt 184.108.40.206.exp.dnswl.org
;; ANSWER SECTION:
220.127.116.11.exp.dnswl.org. 3600 IN TXT "32" "10" "0"
18.104.22.168.exp.dnswl.org. 3600 IN TXT "32" "8" "2"
254.254.254.127.exp.dnswl.org. 3600 IN TXT "32" "10" "0"
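For what it's worth, a TXT response like the above can be decoded with a
few lines of Python (the field names are mine; the reversed-octet owner
name is the usual DNSxL convention):

```python
from collections import namedtuple

Entry = namedtuple("Entry", "netmask category trust")

def parse_txt(strings):
    """Decode the three TXT character-strings: netmask, category, trust."""
    return Entry(*(int(s) for s in strings))

def owner_to_network(owner, netmask):
    """'254.254.254.127.exp.dnswl.org.' + 32 -> '127.254.254.254/32'.
    The owner name carries the IP with its octets reversed."""
    octets = owner.rstrip(".").split(".")[:4]
    return ".".join(reversed(octets)) + "/" + str(netmask)
```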
:~> dig @ns-exp1.dnswl.org -t srv 22.214.171.124.exp.dnswl.org
;; ANSWER SECTION:
126.96.36.199.exp.dnswl.org. 3600 IN SRV 32 10 0 188.8.131.52.exp.dnswl.org.
184.108.40.206.exp.dnswl.org. 3600 IN SRV 32 8 2
220.127.116.11.exp.dnswl.org. 3600 IN SRV 32 10 0
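The SRV flavour appears to carry the same three values in the standard
SRV fields (priority = netmask, weight = category, port = trust level --
my reading of the answers above):

```python
# Decode one SRV answer from the experimental server, assuming
# (priority, weight, port, target) as e.g. dnspython would hand them
# back. The field mapping is my interpretation of the examples.
def decode_srv(priority, weight, port, target):
    return {
        "netmask": priority,   # prefix length of the listed range
        "category": weight,    # dnswl category
        "trust": port,         # dnswl trust level
        "entry": target,       # owner name of the listed entry
    }
```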
The information for the two identical examples above is to be
interpreted as follows:
* Request for 127.0.0.2
* 127.0.0.2 has an exact match in the first of the three return RRs.
* 18.104.22.168/32 is the next entry "below", with category 8 and trustlevel 2
* 127.254.254.254/32 is the next entry "above" (a further test entry),
with cat 10 and trust 0
:~> dig @ns-exp1.dnswl.org -t srv 254.5.210.128.exp.dnswl.org
;; ANSWER SECTION:
254.5.210.128.exp.dnswl.org. 3600 IN SRV 24 11 2
254.5.210.128.exp.dnswl.org. 3600 IN SRV 32 11 2
* Request 22.214.171.124
* 126.96.36.199 is contained in the range 188.8.131.52/24, category 11, trust 2
* Next entry "above" is 184.108.40.206/32, cat 11, trust 2
* No separate entry "below" is listed, because the containing range already serves
that purpose
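Checking whether a returned range actually covers the queried IP is then
a one-liner; a sketch (the addresses below are illustrative):

```python
import ipaddress

def covers(entry_ip, netmask, query_ip):
    """True if query_ip falls inside entry_ip/netmask."""
    net = ipaddress.ip_network(f"{entry_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(query_ip) in net
```

So a /24 answer covers every address sharing the first three octets,
while a /32 answer only matches exactly.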
The list has two synthetic records to indicate the lower and upper bounds:
:~> dig @ns-exp1.dnswl.org -t srv 0.0.0.0.exp.dnswl.org
:~> dig @ns-exp1.dnswl.org -t srv 255.255.255.255.exp.dnswl.org
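These bounds are also what makes the zone walkable: start at 0.0.0.0 and
repeatedly ask for the next entry "above" until the upper bound is
reached (the enumerability mentioned earlier). A sketch with the DNS
side stubbed out -- `query_above` stands in for a real lookup:

```python
def walk_zone(query_above, start="0.0.0.0", end="255.255.255.255"):
    """query_above(ip) -> IP of the next listed entry above `ip`."""
    entries, cur = [], start
    while cur != end:
        cur = query_above(cur)
        entries.append(cur)
    return entries[:-1]  # drop the synthetic upper bound
```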
Since I just hacked this together using existing Perl modules, the
data is only in IPv4, but the concept can easily be extended to an IPv6
payload. This server will serve current dnswl.org data.
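For IPv6, the query names would presumably use nibble-reversed labels,
as ip6.arpa and existing IPv6 DNSxLs do (an assumption on my part; the
experimental server currently carries only IPv4 data):

```python
import ipaddress

def ipv6_query_name(addr, zone="exp.dnswl.org"):
    """Build the nibble-reversed DNSxL query name for an IPv6 address."""
    nibbles = ipaddress.IPv6Address(addr).exploded.replace(":", "")
    return ".".join(reversed(nibbles)) + "." + zone
```

2001:db8::1, for example, becomes 32 single-hex-digit labels under
exp.dnswl.org.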
This approach has a number of benefits:
* No arbitrary cut-points
* Enables "intelligent" applications to avoid a lot of unnecessary requests
* Does not require an application to walk down the tree from a root point
(and does not require DNSxL operators to artificially create such a
tree with synthesized records)
It also has some drawbacks:
* I don't know of a way to serve this off of a generic nameserver
(other than that nameserver being a proxy to a "hidden master").
* DNSxL operators have less valuable logs, because less specific
traffic hits their infrastructure.
I naturally approached this from a whitelist point of view, so someone with
more experience with blacklists may have different approaches.
Rsync's batch-ish nature isn't quite why I mentioned latency. The latency
is zone rebuild time at the DNSBL server. With a zone file as large as the
XBL, say, even if you did some sort of incremental update directly off a
core DBMS in realish-time, I think you'd run into performance problems.
The latency is also in the TTL. The lower the latency requirement, the
less DNS is the right tool for the job.
Asrg mailing list