ietf

Re: what the "scope" disagreement is about

2003-05-01 13:18:37
On Wed, Apr 30, 2003 at 01:31:50PM -0700, Tony Hain wrote:
> The reason I say this is about reachability is that even with unique
> addresses, applications will fail when they choose to pass 'an opaque
> identifier' around, while they simultaneously assume that the content is
> a valid topology locator at the receiver. The arguments claim the need
> for opaqueness as an application simplifier on one hand, while at the
> same time they insist that the topology match their perspective of a
> flat routing space. The real network is not a single flat routing space.

And this is where we disagree.  For better or for worse, the market is
demanding IP addresses that can be treated as belonging to a single
flat routing space.  How else do you explain the demand for provider
independent addresses, and people punching holes in CIDR blocks so
they can have multihoming support for reliable network service?

One solution for that would be to not do multihoming, and simply have
servers live on multiple IP addresses belonging to multiple ISP's, and
use multiple DNS 'A' records in order to provide reachability.  I
suspect that would be Tony's solution for what we should be doing
today.  This is perhaps workable for short-term http connections, but
it's absolutely no good for long-term TCP connections, which won't
survive a service outage, since TCP commits the "sin" of using IP
addresses to identify its endpoints, instead of using DNS
addresses....  But whether this is the reason, or whether there
are other reasons why the "solution" of killing off provider
independent addresses and letting the DNS sort it out has been
perceived as unacceptable, it's pretty clear that the market has
spoken.  Even as people have been wagging their fingers and saying
"horrible, horrible", customers are demanding it, and ISP's are
providing it.  This is the situation in IPv4, and I very much doubt
the situation is going to change much in IPv6.
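A minimal sketch (in Python; the function name is my own invention) of the multi-'A'-record failover just described.  It shows why the scheme is fine for fresh, short-lived connections but no help for established ones: each new connect() can try the next address the DNS hands back, but once a TCP connection is up it is welded to the one (address, port) pair it connected to.

```python
import socket

def connect_any(host, port):
    """Try each address DNS returns for `host` until one accepts a
    connection.  This is the multi-'A'-record failover scheme: a new
    connection can route around a dead provider, but the socket this
    returns is bound to a single address -- if that provider's path
    later fails, the long-lived TCP connection fails with it.
    """
    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            return socket.create_connection(sockaddr[:2], timeout=5)
        except OSError as err:
            last_err = err  # this address unreachable; try the next record
    raise last_err if last_err else OSError("no addresses for %s" % host)
```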

It's certainly true that having a reliable end-point identifier is
critical.  But I don't think the DNS is it.  The DNS has been abused
in many different ways, and very often, thanks to split DNS games, and
CNAMES, and all the rest, the name which the user supplies to the
application is also not guaranteed to be a name which can be
utilizable by C when B wants to tell C to connect to A:

     ---- A ----
       |      I
       |      n
       |      t
       I      e ---- C
       2      r
       |      n
       |      e
       |      t
     ---- B ----

Tony is basically saying, "IP addresses don't work for this, so let's
bash application writers by saying they are broken, and tell them to
use DNS addresses instead".  Well, I'm here to point out that DNS
addresses don't work either.  Applications get names such as
"eddie.eecs", and even when they get a fully qualified domain name,
thanks to a very large amount of variability in how system
administrators have set up split-DNS, there is no guarantee that a
particular DNS name is globally available, or even globally points at
the same end point.  So if IP addresses are not a flat routing space,
DNS names are not a flat naming space, either.  
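The referral failure in the diagram above can be modeled in a few lines.  Each host resolves names only through its own resolver's view of the namespace; the hostname and address below are invented purely for illustration.

```python
# Toy model of the split-DNS referral failure: B resolves a name for A,
# hands the name to C, and C's resolver view simply doesn't contain it.
# The hostname and address here are hypothetical examples.
VIEW_B = {"eddie.eecs.example.edu": "10.0.1.5"}   # internal split-DNS view
VIEW_C = {}                                       # external view: name absent

def refer(name, sender_view, receiver_view):
    """Sender (B) hands `name` to receiver (C); only the receiver's own
    resolver view matters when C actually tries to connect to A."""
    assert name in sender_view            # B can resolve the name...
    return receiver_view.get(name)        # ...which guarantees nothing for C

print(refer("eddie.eecs.example.edu", VIEW_B, VIEW_C))  # -> None
```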

Many years ago, when I was trying to come up with a convenient way to
construct canonicalized, globally usable Kerberos principal names from
host specifiers that were supplied by the user on the command line, I
struggled for a while with ways of deriving a "canonical DNS name"
which could be passed around to multiple hosts.  We ran up against the
same problem.  Fundamentally, the DNS wasn't and isn't designed to do
this.

Now, I suppose you could say that the people who "broke" DNS are at
fault, but there are also people who would say that the people who
broke the flat routing space assumption (which, while not universally
true, was true enough for engineering purposes) are at fault instead.
Perhaps a more constructive thing to say is that the original Internet
architecture --- and here I mean everything in the entire protocol
stack, from link layer protocols to application level protocols ---
was not well engineered to meet the requirements that we see being
demanded of us today.

This is why I believe that ultimately 8+8 is the most interesting
approach.  As the old saw goes, "there is no problem in computer
science that cannot be solved by adding an additional level of
indirection".  
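To make the indirection concrete, here is a sketch of the 8+8 idea in Python (the addresses are documentation-prefix examples, not anyone's real numbering plan): split the 128-bit IPv6 address into a 64-bit routing locator and a 64-bit end-system identifier, so renumbering to a new provider changes only the locator half while the identifier that transports bind to stays stable.

```python
import ipaddress

def split_8_8(addr):
    """Split a 128-bit IPv6 address 8+8-style: the high 64 bits act as
    a routing locator (provider topology), the low 64 bits as a stable
    end-system identifier."""
    packed = ipaddress.IPv6Address(addr).packed
    return packed[:8].hex(), packed[8:].hex()   # (locator, identifier)

# The same host before and after renumbering to a new provider:
loc_old, ident_old = split_8_8("2001:db8:aaaa:1:1234:5678:9abc:def0")
loc_new, ident_new = split_8_8("2001:db8:bbbb:1:1234:5678:9abc:def0")
assert ident_old == ident_new     # identifier survives the renumbering
assert loc_old != loc_new         # only the routing locator changed
```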

What we need is something that sits between DNS names and
provider-specific IP addresses.  That is a hole in the architecture
which today is being fixed by using provider-independent addresses,
much to the discomfort of router engineers.  Another solution, which
has been articulated by Tony, is that we should sweep all of this dirt
under the DNS carpet instead, and force the application writers to
retool all their implementations and protocols to pass DNS names
around instead.  But the DNS really isn't suited to handle this.  What
we need is something in-between.

                                                - Ted