
Re: site local addresses (was Re: Fw: Welcome to the InterNAT...)

2003-03-27 16:47:00
> You are mixing cause and effect. In IPv4 the vast majority of nodes
> are limited to a single address at a time.

Well, I don't know about Windows boxes, but real operating systems have
supported virtual hosting in IPv4 for many years.  Having multiple
addresses on a node, even a node with a single network interface, is
nothing new.
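As a concrete illustration (a minimal sketch, not from the original message, assuming a Linux host, where the entire 127.0.0.0/8 block is delivered to the loopback interface): a single interface can be bound under several distinct local addresses at once.

```python
import socket

# On Linux, all of 127.0.0.0/8 is routed to the loopback interface,
# so one interface can answer on many distinct IPv4 addresses.
addrs = ["127.0.0.1", "127.0.0.2"]
socks = []
for a in addrs:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((a, 0))          # port 0: let the kernel pick a free port
    socks.append(s)

# Two distinct local addresses, bound simultaneously on one host.
print([s.getsockname()[0] for s in socks])

for s in socks:
    s.close()
```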

> Network managers that want some of their nodes in private space find
> it simpler to also put the nodes with public access in the same space
> rather than deploy multiple subnets to each office and route between
> them.

There are easier ways to solve that problem than deploying multiple
subnets, and they don't require the use of SLs.  Any bit in the address
can be used for filtering; it doesn't have to be in the subnet mask.
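To sketch the point (illustrative only; the particular bit chosen here is hypothetical), a filter can test any bit of the 128-bit address directly, with no requirement that the bit fall on a subnet boundary:

```python
import ipaddress

# Hypothetical convention: the site sets bit 57 (0-based from the
# high-order end) on addresses of nodes that should be blocked at the
# border.  In the fourth 16-bit group that bit has the value 0x0040.
# The filter masks it out of the integer form of the address; no
# alignment with the subnet mask is needed.
PRIVATE_BIT = 1 << (127 - 57)

def marked_private(addr: str) -> bool:
    return bool(int(ipaddress.IPv6Address(addr)) & PRIVATE_BIT)

print(marked_private("2001:db8:0:40::1"))  # True  - bit 57 is set
print(marked_private("2001:db8::1"))       # False - bit 57 is clear
```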

> > During the IPv6 meeting in SF, we did discuss several options
> > for limiting the use of site-local addressing, but all of
> > those options had some sort of problem associated with them.

> So rather than address any problems, the needs of the (mostly absent)
> network managers were simply dismissed as irrelevant.

Nope.  The problems couldn't be solved.  The group responsibly decided
to fix the architecture by deprecating site locals and to solve the
network managers' problems in other ways, because the solutions to the
network managers' problems without SLs are simpler than the solutions
to everyone's problems with SLs.  (especially because the latter do not
exist)

> Site-local is nothing more than a well-known prefix for filtering.

No, it's more than that.  SLs impose burdens on hosts and apps.
SLs break the separation of function between apps and the network that
is inherent in the end-to-end principle.
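For reference, the well-known prefix in question is fec0::/10.  A minimal sketch of recognizing it, using Python's ipaddress module:

```python
import ipaddress

# fec0::/10 is the well-known IPv6 site-local unicast prefix.
SITE_LOCAL = ipaddress.IPv6Network("fec0::/10")

def is_sl(addr: str) -> bool:
    return ipaddress.IPv6Address(addr) in SITE_LOCAL

print(is_sl("fec0::1"))      # True  - site-local
print(is_sl("2001:db8::1"))  # False - global unicast
```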

> > (2) RFC 1918 addresses are most commonly used behind NATs. In
> > this case, there is a middle box that performs translation of
> > those addresses into global addresses at the site boundary,
> > both in IP headers and at the application layer (through
> > ALGs).  In IPv6, we hope to avoid NAT.  Site-local addresses
> > were expected to be used on globally connected networks
> > without any translation.

> Why do you continue to equate SL with NAT?

Why do you continue to accuse people of equating SL with NAT even when
they carefully explain just how they are similar and how they are
different?

> It doesn't require address selection rules or ALGs. The only way an
> application should receive a AAAA record with a SL is if it is in the
> same address zone as the target.

Standardizing split DNS is both insufficient and unacceptable.  
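The split-DNS behavior being argued over can be sketched as a response filter (names and addresses here are purely illustrative):

```python
import ipaddress

SITE_LOCAL = ipaddress.IPv6Network("fec0::/10")

def filter_aaaa(answers, querier):
    """Drop site-local AAAA answers unless the querier is itself
    inside the site-local zone (the two-faced DNS under debate)."""
    q_inside = ipaddress.IPv6Address(querier) in SITE_LOCAL
    return [a for a in answers
            if q_inside or ipaddress.IPv6Address(a) not in SITE_LOCAL]

answers = ["2001:db8::1", "fec0::1"]
print(filter_aaaa(answers, "2001:db8::99"))  # outside querier: SL dropped
print(filter_aaaa(answers, "fec0::99"))      # inside querier: both returned
```

Even this toy version shows where the burden lands: the server must determine which zone each querier is in, and a multi-sited host has no single right answer, which is exactly the kind of complexity being objected to.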

> What we should not do is stick our heads in the sand and believe that
> simply because we don't want to have limited scope addresses they will
> magically disappear.

What we should not do is stick our heads in the sand and believe that
simply because some sites will have limited scope addresses that it's
okay to burden hosts, DNS, routing, and large numbers of applications
with having to deal with them. 

> Rather than force people to create a bunch of
> ad-hoc solutions to the problem, we should in fact provide an
> architected approach that creates a level of consistency (actually we
> have, but some want to see it deprecated).

Actually we are working toward an architecture that provides a level of
consistency.  But this requires that we deprecate SL.

> > Exactly.  There are many of these applications defined within
> > the IETF, by other standards bodies, and/or developed by
> > private enterprises. In fact, the applications area folks
> > assure me that there are more of these types of applications
> > deployed than there are simple client-server applications
> > (that was news to me).  IETF applications that fall into this
> > category include FTP, SIP and (in some uses) HTTP.

> And they will continue to fail when the network administrator puts in
> routing filters, only nobody will be able to figure it out because we
> removed the hint of a well-known prefix.

No, it will be easy to figure out, because it will be clear that the
network administrator is to blame, unlike the current situation where
the app vendor is blamed for problems caused by the NAT.
This moves the problem to a place where it's more easily fixed.
This is a huge improvement.
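FTP is the classic case of the referrals mentioned above: the peer's address travels inside the application payload (the 227 PASV reply carries it as six decimal byte values), so the referral breaks whenever that address is not meaningful to the receiver, unless a middlebox ALG rewrites it. A sketch of the client-side parsing:

```python
import re

def parse_pasv(reply: str):
    """Extract (ip, port) from an FTP 227 PASV reply, whose payload
    carries the address as six decimal bytes: h1,h2,h3,h4,p1,p2."""
    m = re.search(r"(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)", reply)
    h1, h2, h3, h4, p1, p2 = (int(x) for x in m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

ip, port = parse_pasv("227 Entering Passive Mode (10,0,0,5,19,136)")
print(ip, port)   # the referral points at 10.0.0.5:5000, which is
                  # useless outside the private network unless an
                  # ALG rewrote it in transit
```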

> > Maybe...  There has been a great deal of reticence from
> > application developers to rely on DNS look-ups for this type
> > of referral, and it is not all based on DNS reliability.
> > There are many, many nodes that either do not have a DNS
> > entry or do not know their own DNS name, and many
> > applications need to work on those nodes.

> Rather than fix the problem, shoot the feature that exposes it ...

Rather than fix the problem, force another broken layer on every app. 
It won't solve anything but it will provide another layer of delay,
complexity, and unreliability.  The network will be even less functional
than it is today, but at least we'll have something to blame it on.

> The borders exist. Either we create a tool that allows people to
> easily manage which nodes are on which side, or we invite chaos.

The tools will be created.  But the borders don't have to be, and
shouldn't be, wired into the address.  We need more flexible
mechanisms than that.  And there needs to be a clear limit to apps'
responsibilities to deal with those borders when they are imposed.

Keith


