
Re: persistent domain names

2001-10-30 16:50:02
On Tue, 30 Oct 2001 21:15:35 GMT, Zefram <zefram@fysh.org> said:
> I'm looking for discussion of the problem more than the solution at this
> stage; my I-D does outline a couple of possible solutions, but considering
> the issues that have arisen already in respect of the problem statement,
> solution finding will have to wait a bit.

Let's look at the major points:

DNSSEC - I re-read your draft several times, and I think what you're
trying to say is that DNSSEC authenticates the *current* value of the
mapping of 'www.foobar.com' to an A record, and doesn't address the
question of whether 'foobar.com' is the same company that you visited
2 months ago.  If so, you're correct in saying that it's working as
designed - DNSSEC is there to prove that the data you got is what was
actually published by the authoritative servers for the zone, and not
modified or forged en route.  The problem here is not DNSSEC, but
unrealistic expectations.
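
To make that concrete, here's a rough sketch in Python (third-party
dnspython package, 2.x assumed; the queried name is just a placeholder,
and you'll only see signatures if the recursive resolver you sit behind
passes DNSSEC data through).  All an RRSIG gives you is a bounded
validity window over the *current* RRset, under the zone's key:

    import time

    import dns.flags
    import dns.rdatatype
    import dns.resolver  # third-party: dnspython (assuming >= 2.0)

    def show(t):
        """Render an RRSIG timestamp (seconds since epoch) as UTC."""
        return time.strftime('%Y-%m-%d %H:%M', time.gmtime(t))

    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 4096)  # DO bit: send DNSSEC data too

    # 'www.example.com' is purely illustrative.
    answer = resolver.resolve('www.example.com', 'A')

    for rrset in answer.response.answer:
        if rrset.rdtype == dns.rdatatype.RRSIG:
            for sig in rrset:
                # The signature covers the current RRset for a bounded
                # window - it says nothing about registrant history.
                print(f'signed by {sig.signer}, valid '
                      f'{show(sig.inception)} to {show(sig.expiration)}')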

URI - we'll work with the ISSN example that you gave.  Designing a
fault-tolerant DNS deployment is well understood (use multiple NS in
different ASes, not all in the same /24 like certain famous sites did ;).
Therefore, for this discussion, "if issn.org goes away that URN space is
hosed".  Very true - but let's think a bit deeper.  "issn.org" is not
likely to go away unless the ISSN International Centre goes away - in
which case the ISSN system is in trouble anyhow.
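
The fault-tolerance half of that is mechanical enough to check.  A rough
Python sketch (dnspython again; the zone name is a placeholder, and the
/24 comparison is a crude stand-in for a real per-AS check against
routing data):

    import dns.resolver  # third-party: dnspython (assuming >= 2.0)

    def ns_prefixes(zone):
        """Map each nameserver of `zone` to the /24s its IPv4
        addresses fall in."""
        prefixes = {}
        for ns in dns.resolver.resolve(zone, 'NS'):
            host = ns.target.to_text()
            addrs = [a.to_text() for a in dns.resolver.resolve(host, 'A')]
            prefixes[host] = {'.'.join(a.split('.')[:3]) + '.0/24'
                              for a in addrs}
        return prefixes

    # 'example.org' is purely a placeholder zone to inspect.
    nets = ns_prefixes('example.org')
    for host, prefs in nets.items():
        print(host, sorted(prefs))
    if len(set().union(*nets.values())) < 2:
        print('warning: all nameservers share one /24 - no diversity')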

A6 addresses - There's absolutely nothing new here; this is the same
issue that has *always* existed whenever you contract DNS service out
to a provider: "the NS entry has to point to a nameserver".  A little
thought shows that in fact you do *NOT* necessarily want persistent
names; what you want is *continuous service*.  (Hint - think about why
CNAMEs exist in the DNS at all...)
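
Concretely: you publish one stable name and CNAME it at whatever the
current provider calls the box.  The target can move between providers;
the published name doesn't.  A toy dnspython sketch (names are
placeholders):

    import dns.resolver  # third-party: dnspython (assuming >= 2.0)

    # The name you publish stays put; whatever it's CNAMEd to today can
    # be repointed at a new provider tomorrow, and nothing downstream
    # has to change.
    answer = dns.resolver.resolve('www.example.com', 'A')
    print('name queried   :', answer.qname)
    print('canonical name :', answer.rrset.name)  # the chain's end *today*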

Domain names in certificates, etc - that's what a CRL is for.
In addition, keeping a "fingerprint" of a certificate in order to verify
you're talking to the same entity as last time is a good approach.
The concept of "persistent" is probably a *bad* idea here - what if
"microsoft.com" had been persistent, and the Dept of Justice *had*
managed to force a breakup into 2 or 3 separate entities?
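
A minimal sketch of that fingerprint idea, in stock Python (the host
name and the hash choice are mine for illustration, not anything the
draft specifies):

    import hashlib
    import socket
    import ssl

    def cert_fingerprint(host, port=443):
        """Fetch the server's certificate and return a hash of it."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # trust is the pinned hash, not a CA
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    # First visit: store the fingerprint.  Later: a changed value means a
    # different certificate - possibly a different entity, possibly just
    # a re-key, which is exactly why pinning needs care and CRLs exist.
    print(cert_fingerprint('www.example.com'))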

You then argue for the creation of a 'tech' domain, although this does
absolutely nothing for the DNSSEC and A6 address issues.  In 4.4.7 you
discuss namespace bloat, but it's unclear whether

   the enormous size of the current gTLDs is largely attributable to
   many organisations registering huge numbers of domain names each.

is actually true or not.  Are there numbers to back this up?  I thought
a large part of the bloat was due to everybody who has a web browser and
$15 to spare registering 'joes-pet-frogs-irving-and-thaddeus.com'.

Trying to make the namespace more hierarchical has hazards - see the .US
domain for an object lesson.

Section 4.4.8 asks whether there should be "some attempt to make domain
names resemble organisations' common names".  This is a Very Bad Idea,
because *that* was what led to the trademark wars mentioned in 4.4.6.

The discussion of MIME types is a total red herring.  Count the number
of registered vnd.* MIME subtypes, and compare that to the number of
*.COM domains.  And the vnd.* system has the same problem as the ISSN
example - if the "responsible organization" goes away, you end up with
a crufty registry.
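
That count is easy enough to approximate; a quick Python sketch,
assuming IANA publishes the registry as per-tree CSV files at a path
like this one, with a 'Name' column (both are assumptions about their
site layout):

    import csv
    import io
    import urllib.request

    # Assumed URL layout for IANA's media-type registry CSV exports.
    URL = 'https://www.iana.org/assignments/media-types/image.csv'

    with urllib.request.urlopen(URL) as resp:
        reader = csv.DictReader(io.TextIOWrapper(resp, encoding='utf-8'))
        vnd = [row['Name'] for row in reader
               if row['Name'].startswith('vnd.')]

    print(f'{len(vnd)} vnd.* subtypes registered under image/ - '
          'a rounding error next to the .COM zone')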

And therein lies the basic problem - you say "persistent the same
way that image/vnd.xiff is persistent" - while glossing over the
fact that the 'vnd.xiff' registration is not *truly* permanent either.

To be sure, there *are* some major issues:

1) Totally broken conflict resolution for trademark disputes.
2) Identifying that a given domain is in fact still the same one.
3) User expectations.
4) Some protocols *do* expect a long-term stable definition.

Most of these issues are already addressable with current technology,
and the proper choice for the rest is *not* to try to mandate persistent
identifiers, but to think about ways to design things that don't break.

Consider the ISC DNS Survey (http://www.isc.org/ds/) - it was being
run every 6 months, and taking multiple weeks to complete.  And the
Internet was growing fast enough that there was a measurable delta
*while the survey ran*.  A namespace that churns that fast is a poor
foundation for any mandate of persistence.

                                Valdis Kletnieks
                                Operating Systems Analyst
                                Virginia Tech


