
Re: the names that aren't DNS names problem, was Last Call: <draft-ietf-dnsop-onion-tld-00.txt>

2015-07-22 15:22:47
Steve,

Quite a bit of background material and comments follow but, if
you (or others) are in a hurry, please skim to the bottom of the
note which contains a conclusion and recommendations.

--On Tuesday, July 21, 2015 12:34 +0200 Steve Crocker
<steve.crocker@icann.org> wrote:

John, et al,

There are a substantial number of ICANN people at this IETF
meeting, including, of course, the usual IANA team; three of
ICANN's top-level managers — David Conrad, Chief Technology
Officer, Ashwin Rangan, Chief Innovation and Information
Officer, and Akram Atallah, president of the Global Domains
Division — several people on David Conrad's team; and four
people on the ICANN board, including, of course, the IETF
liaison to the ICANN board, Jonne Soininen, and Suzanne Woolf,
who serves in multiple roles: as the liaison to the ICANN board
from the root server operators group, RSSAC, as chair of DNSOP,
and as a member of the IAB.  ICANN is paying a LOT of attention.

Apologies for probably appearing cynical, but I think everyone
reading this has had experiences in which "attended a meeting",
especially "attending a meeting with full organizational support
and presumably at organizational request" does not equate to
"paying attention".  I assume that everyone you've listed _is_
paying attention and acting in good faith, but that doesn't
follow from your list and assertion.  Be that as it may...

Speaking for myself and not necessarily for the ICANN board or
the rest of the organization, it seems evident that the nice
clean separation of name spaces originally envisioned via the
distinct indicators in DNS, e.g. "IN" for "Internet",
protocol identifiers in URLs, etc. has not worked out in
practice.  The original scheme of assigning just seven
"generic" top level domains plus two letter country code
TLDs meant the rest of the top level space was left
unassigned.

Not exactly.  In retrospect, we should have said something much
more explicit in RFC 1591, but there was a quite intentional
model that would have prevented the present problems.  That
model can be summarized as:

 * Single-letter TLDs are prohibited, partially because
        of the security- or confusion-related risks of
        single-character typing errors.  There was also some
        joking about a future extension model if things got out
        of hand, but I don't believe that was ever taken
        seriously.
 * Two-letter TLDs were reserved for ISO 3166 (later
        3166-1) alpha-2 codes.  That was, by the way, _all_
        two-letter codes, on the assumption that, over time, the
        ISO Technical Committee responsible for 3166 might change
        their minds about categories of codes as well as
        assigning new ones and that therefore only the "two
        octets" part could be treated as immutable.
 * Three-letter generic domains.  Six, and then seven, of
        them, with the assumption that more might be assigned
        but that (i) additional ones would still be three
        letters and (ii) that any applications for new ones
        would need to show strong reasons why intelligent use of
        hierarchy would not constitute as good or better a
        solution.   Just for perspective, we came very close to
        allocating an eighth generic TLD in the mid-1990s.  I
        believe, based on conversations with Jon, that it would
        have been approved and delegated had the applicant been
        able to identify sufficient consensus around a
        well-established document or standard that would
        determine what was an appropriate second-level entry and
        what was not.
 * Four-letter codes consisted of "ARPA", with everything
        else being reserved.  I believe that additional
        four-letter infrastructure, management, or transition
        TLDs would have been allocated had a need been
        demonstrated, but that never happened and, unlike issues
        with possible expansion of two and three character
        country-related and generic domains, I can't remember
        its being discussed.
 * Any longer strings were informally, but quite
        explicitly, reserved for precisely the kind of local use
        and/or resolution by mechanisms that don't involve the
        public DNS root that we seem to be talking about today.
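
Purely to make that convention concrete, here is a rough sketch,
in Python, of how the length-based rules above would have sorted
a proposed top-level label.  The category descriptions are my own
shorthand, not text from RFC 1591 or from any registry:

# Rough, illustrative sketch of the length convention described
# above.  Category strings are shorthand, not quotations.
ORIGINAL_GENERICS = {"com", "org", "net", "edu", "gov", "mil", "int"}

def classify_label(label):
    label = label.lower().rstrip(".")
    if len(label) == 1:
        return "prohibited (single-character)"
    if len(label) == 2:
        return "reserved for ISO 3166 alpha-2 country codes"
    if len(label) == 3:
        if label in ORIGINAL_GENERICS:
            return "generic TLD (one of the original seven)"
        return "unallocated, but eligible as a future generic TLD"
    if len(label) == 4:
        if label == "arpa":
            return "infrastructure (ARPA)"
        return "reserved (possible infrastructure or transition use)"
    return "reserved for local use / non-public-DNS resolution"

for name in ("x", "uk", "com", "biz", "arpa", "onion", "local"):
    print(name, "->", classify_label(name))

Run against "onion" or "local", a five-letter label lands squarely
in the "reserved for local use" bucket, which is exactly the point.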

ICANN could have insisted on three-character mnemonics and
still have delegated all of the functional domain ideas in the
2001 set without altering those rules.  ICANN leadership at the
time was
told, not only about the rules, but that longer names were
likely to cause problems and that, even if they allowed such
names, they should at least warn applicants of the issues.  They
declined to either impose the restriction or warn applicants.  I
gather the consequences of those decisions are still a matter
for "universal acceptance" discussions today.

Once the above conventions became obsolete, the issues we are
dealing with now as "Special Names" became inevitable.

 Nature apparently abhors a vacuum in this area as
well as in the physical domain.  Various vendors grabbed
unused names such as local, corp, and mail and built them into
their products.  In principle, these names should not have
shown up in queries to the DNS root; in practice they have
shown up in great numbers.  Developers of new protocols have
also felt comfortable using previously unused top level names,
with onion being the example getting the most attention right
now, but with several others previously used and more to come.

Understood and see below.  However, note that every name you
list as an example is four or more characters long.  Had we
stuck with the original rules, we could have made a
recommendation similar to that for private-use IP addresses,
i.e., to block them in DNS servers, thereby eliminating the
problem.  I might also suggest that we could have done a better
job of educating vendors and others about the use of hierarchy
and maybe allocated something like "priv." and established an
FCFS registry for second-level names consistent with
non-public-DNS use, but that would probably have been too much
to ask for.  Too late now, obviously.
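
For what it's worth, the kind of blocking recommendation I have
in mind would have been trivial to implement in resolvers.  A
hypothetical sketch follows; the reserved list and function names
are purely illustrative, not an actual IANA or ICANN registry:

# Hypothetical, illustrative sketch of resolver-side filtering
# for names reserved for non-public-DNS use.
RESERVED_TLDS = {"local", "corp", "mail", "onion", "priv"}

def should_forward(qname):
    """True if a query for qname may be sent toward the public root."""
    labels = qname.rstrip(".").lower().split(".")
    return labels[-1] not in RESERVED_TLDS

def resolve(qname):
    if not should_forward(qname):
        # Analogous to private-use addresses: never leak these
        # upstream; answer locally (NXDOMAIN or a local mechanism).
        return "handled locally, never sent to the root"
    return "forwarded to the public DNS"

for q in ("www.example.com", "printer.local", "foo.onion"):
    print(q, "->", resolve(q))

Applied at recursive servers, a filter like this would have kept
queries for names such as printer.local or anything under onion
from ever reaching the root.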

Meanwhile, one of the goals included in ICANN's formation
was increasing competition and choice.  (Don't blame me; I
wasn't involved at the time.)

I was... don't blame me either, especially for the way that
"competition and choice" ended up being interpreted.  In
particular, for the case mentioned above, I believed then and
believe now that there should have been an explicit discussion
--within the ICANN community and with the IETF-- about the
tradeoffs involved in continuing to reserve longer top-level names
for other than public DNS use.   For whatever reasons, those
discussions never occurred.

   The first result was the
creation of the registrar system, which resulted in a dramatic
drop in the price of domain names.  The second result, which
has taken quite a bit longer, was the opening up of the top
level domain space, which brings us to where we are today.

And, fwiw, "where we are today" is with no mechanism or set of
agreements for dealing with individual strings that ought to be
reserved and kept out of any possible gTLD allocation or
delegation processes.  I don't think that situation reflects
very well on either the IETF or ICANN.

Irrespective of the original intent to keep various name
spaces separate, I think we have to accept that these name
spaces bleed into each other.

Agreed.

 Once we accept that, to me,
fairly obvious fact of life, the next step is to work out some
straightforward coordination between the IETF's processes
and ICANN's processes.  I don't see why it should be hard
or lengthy to do so.

I agree that it should not be either hard or lengthy although
I've been surprised and disappointed by the outcome after
similar beliefs in the past.  However, let me repeat and
summarize my recommendation from an earlier note or two.  

(1) First, I think we (IETF and ICANN) should agree that this is
ultimately an ICANN problem and within ICANN's authority.  I
have multiple reasons for that preference but the bottom line is
that only ICANN can control ICANN decisions about what TLDs to
actually allocate and delegate.

(2) For special name requests/reservations that fall within the
scope of IETF protocol work, the IETF should request that ICANN
allocate and reserve the relevant names.  We should agree that
ICANN will either do that or explain why not and, if appropriate,
suggest alternate names and/or mechanisms.  Working out a
definition of "the scope of IETF protocol work" is an IETF
problem but will definitely include the needs of IETF
standards-track protocols.  The IETF is expected to use
hierarchy, rather than separate TLD-like names, whenever
possible.

(3) For special name requests/reservations that are not deemed
by the IETF to fall within the scope of its protocol work, there
should be a process for requesting that ICANN reserve the names.

(4) Determining the ICANN process for evaluating name requests
under (2) or (3) is an ICANN matter, not an IETF one.  My
personal recommendation is that there be assurances that it will
operate in a timely fashion and that decisions about these names
be viewed more as a technical security and stability issue than,
e.g., as part of the current or future gTLD application and
allocation process.

(5) If ICANN discovers, contrary to our mutual prediction that
this should not be hard or lengthy, that getting appropriate
procedures in place is (calendar) time-consuming, I would hope
that ICANN could temporarily delegate actual decision authority
to the IETF and that the IETF would be at least as reasonable
and careful in its consideration of the many issues involved as
it has been so far with ONION.

best,
    john