ietf-mxcomp

Re: Devilish: Forget about DNS

2004-02-09 12:57:27

I see several issues here:

> I proposed to introduce a new RR with the first version of the
> RMX draft in December 2002. Since then I'm drowning in mails like
> "How can you dare...", "We don't want...", "Too many old DNS
> servers, which can't be replaced..."

Three problems here:

(1) The zone file syntax might not be able to handle a new RR _name_ because of parser issues
(2) The zone file syntax might not be able to handle a numeric RR type
(3) A caching resolver might not be able to cache an unknown RR

Of these, (3) is broken software, (2) is problematic, and (1) is something users have to manage.
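For what it's worth, problems (1) and (2) were already addressed by RFC 3597 (September 2003), which defines a generic zone-file syntax for RR types a parser does not know by name. A sketch, following the syntax in that RFC:

```
; RFC 3597 generic syntax: an RR of (illustrative) type code 731
; carrying 6 octets of opaque RDATA. The TYPE731 mnemonic and the
; \# token work even when the parser has never heard of the type.
a.example.  3600  IN  TYPE731  \# 6  abcd ef 01 23 45
```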

> "Too expensive...",

Expensive?

> "Too much overhead...",

???

> "Will take 10-20 years...", "We don't support
> this...", "Proprietary DNS server, can't be extended...",

Well, NAPTR and SRV did not take that many years, and in reality just required a new round of patches for BIND.

Remember that we are talking about something which is to be visible in the global DNS, where public DNS software is normally in use. The proprietary stuff is normally hidden inside enterprises (or should be replaced anyway).

> "Needs to replace billions of DNS client software..."

No, this I do not agree with. I have not seen any library which cannot query for arbitrary RR types. Of course, programmers might be lazy and not want to write their own code that parses a new RR type, but that is not an argument for me.

For example (good or bad example, you decide...) implementing ENUM using NAPTR as part of IOS in Cisco Routers was not a big deal from a DNS point of view.
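To illustrate the point: with nothing but a standard library, building a query for an arbitrary, even unassigned, type code is mechanical. A minimal sketch in Python (the type code 65280 is just an illustrative value from the private-use range):

```python
import struct

def build_query(name: str, qtype: int, qid: int = 0x1234) -> bytes:
    """Build a minimal RFC 1035 DNS query for an arbitrary RR type code."""
    # Header: ID, flags (RD bit set), QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero octet.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE is just a 16-bit integer on the wire; QCLASS 1 = IN.
    return header + qname + struct.pack(">HH", qtype, 1)

# A query for a brand-new type code takes exactly the same code path
# as a query for A (type 1) or TXT (type 16).
pkt = build_query("example.com", 65280)
```

Nothing on the client side cares what the 16-bit type value means; only the code that interprets the returned RDATA has to be taught the new type.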

> and much, much more of that.

Yes, a lot of people say these things... but I have not heard, for example, people in the DNSEXT wg say such things. They instead say things like "don't overload RR type values into the names (like SRV does)" and "don't overload usage of existing RR types (like TXT and NAPTR)".

> The other problem is that many, many people complained that they
> would need to change the firewall or even network structure because
> records would grow beyond 512 bytes and require TCP queries.
> As if many of today's DNS records wouldn't be longer than 512 bytes
> anyway.

Correct. Two issues here:

(1) I see a larger risk of the RR set expanding beyond 512 bytes if we do NOT use a specialized RR type that minimizes the size of the data, and thereby the size of the RR set returned for a query.

(2) If they have a firewall which doesn't allow DNS over TCP, they have other problems already.
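Point (1) can be made concrete with a back-of-the-envelope size estimate. The "compact" encoding below is purely hypothetical, invented only to show how much a purpose-built RDATA format can save over packing the same policy into a TXT character-string:

```python
# Per-RR wire overhead with name compression (RFC 1035):
# NAME pointer (2) + TYPE (2) + CLASS (2) + TTL (4) + RDLENGTH (2).
PER_RR_OVERHEAD = 12

# The policy as a TXT record: RDATA is one length-prefixed
# character-string holding the whole text.
policy_text = "v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.0/24 mx ptr -all"
txt_size = PER_RR_OVERHEAD + 1 + len(policy_text)

# A hypothetical binary encoding of the same policy: one flags octet
# (which could carry mechanisms like mx/ptr), then each ip4 network
# as 4 address octets + 1 prefix-length octet.
compact_size = PER_RR_OVERHEAD + 1 + 2 * (4 + 1)
```

On these assumptions the text form is roughly three times the size of the binary form, and the gap grows with every additional network in the policy.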

> But if we accept to query DNS records with TCP, why, after all, should
> we bother to fetch all entries and to stitch information together from
> different TXT and A records or a new record type?

We don't want to use TCP.

> HTTP is just perfect
> for fetching a record of any data type and any length. And it exists.
> No need to replace or update HTTP servers.

No, HTTP is not fun. Any implementation of HTTP is multiple degrees harder to implement than DNS. Have you read the spec?

> All we need is to find the HTTP server which is competent to give the
> answer. Finding the HTTP server is a DNS task, that's what DNS is
> designed to do.

> And a HTTP query is imho significantly better than trying to fetch
> several records through DNS/TCP and trying to stitch them together
> (and no way to trigger the DNS server to refetch missing records).

If you need to first use DNS, and then some other protocol, then this other protocol should be defined so it solves the problem which is to be solved. We should not get a bulldozer to try to catch flies.

See for example RFC 3205.

    paf

