ietf-mxcomp

Re: CID sizes

2004-06-03 12:24:21


On 6/3/2004 1:31 PM, Greg Connor wrote:

Perhaps you could spend some time to rephrase the objections you have
in terms of what exactly will break, in what ways, and what will be the
side effects of that?

http://www.ietf.org/internet-drafts/draft-hall-dns-data-04.txt describes
many of the inter-related issues.

At the architectural level, DNS is a lightweight lookup-by-name datagram
service that has to stay small and fast in order for its primary function
to keep working. DNS is like ARP for the Internet application space, and
how many people want to put XML documents into ARP messages? That's
basically what is being advocated here.

In practical terms, stuffing excessive amounts of data into a heavily used
public service (as specifically opposed to a private service used within
your own network or domain) triggers scalability problems for everybody.

For the publisher, large messages require the use of TCP, which adds the
three-way handshake and teardown on each side of what should be a simple
exchange, and which lets the same equipment serve only a fraction of the
query volume it could handle over UDP. Meanwhile, TCP port 53 is not
supported by many DNS implementations and networks (including hotmail,
notably), so fallbacks are not guaranteed to work.
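
To make the fallback cost concrete, here is a rough sketch (using the
dnspython library, with a placeholder resolver address and a placeholder
domain) of what every verifying host ends up doing once the record no
longer fits in a UDP response:

    import dns.flags
    import dns.message
    import dns.query

    RESOLVER = "192.0.2.53"   # placeholder resolver address
    NAME = "example.com"      # placeholder sending domain

    # First attempt: the ordinary UDP query every resolver starts with.
    query = dns.message.make_query(NAME, "TXT")
    response = dns.query.udp(query, RESOLVER, timeout=2)

    # If the answer doesn't fit in the UDP payload, the server sets the
    # TC (truncated) bit and the whole exchange has to be repeated over
    # TCP, paying the three-way handshake and teardown each time.
    if response.flags & dns.flags.TC:
        response = dns.query.tcp(query, RESOLVER, timeout=2)

Every oversized record turns a single datagram round trip into a TCP
session, which is exactly why the same equipment ends up serving so many
fewer queries.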

For the resolver (in this case, everybody who receives an email that they
want to verify with MARID), large messages demand fallback processing
because the first (UDP) lookup always comes back truncated. In many cases,
a TC response will cause the local resolver to issue its own fallback,
meaning that the caching infrastructure is made moot. Furthermore, running
out of memory causes existing data to be purged from the cache, and large
numbers of large records inevitably trigger constant cache churn (if your
client resolver is doing its own TC fallbacks then you're guaranteed 100%
cache churn). Considering the sizes we are talking about here, the
workaround will be 64-bit processors for every cache that does lookups,
just to have enough memory available to prevent constant churn [this is
based on the observation that the spamhaus blacklists require hundreds of
MBs of cache just for PTR data, and most large networks deal with at least
that many lookups on a routine basis].
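
For a sense of scale, here's a back-of-the-envelope sketch; the record
size, per-entry overhead, and domain count below are assumptions chosen
for illustration, not measurements:

    # Illustrative arithmetic only; every number here is an assumption.
    record_size = 2000          # assumed bytes for one large policy record
    entry_overhead = 150        # assumed per-entry cache bookkeeping bytes
    distinct_domains = 5000000  # assumed distinct sending domains per TTL window

    cache_bytes = distinct_domains * (record_size + entry_overhead)
    print("roughly %.1f GB of cache just for policy records"
          % (cache_bytes / 1e9))

Even with conservative inputs the total lands well beyond what a 32-bit
resolver process can comfortably hold, which is the memory pressure the
64-bit workaround above is reacting to.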

Then people who actually own and operate the servers could make an
informed decision...

The zone publishers only represent a fraction of the problem space. The
resolvers are the ones who pay the most for being forced into fallbacks,
maintaining large local caches, losing access to forwarders, and so on.

-- 
Eric A. Hall                                        http://www.ehsco.com/
Internet Core Protocols          http://www.oreilly.com/catalog/coreprot/

