Margaret Wasserman wrote:
[Folks who are not interested in the details of the IPv6 WGs
discussion of site-local addressing can just hit 'd' now.]
Still true. For the record, I agree that debate on this SL issue should
be on the WG list.
Hi Tony,
There is a lot of noise about treating SL as special, but as you note an
application can ignore that a 1918 address is somehow different from
any other address. If an application were to do the same and just use
an SL as any other address, it will work just fine until one of the
participants is on the outside of the filtering router (also true for
IPv4 w/1918).
This paragraph ignores the fact that site-local addresses in
IPv6 were intended to be used quite differently from RFC 1918
addresses in IPv4. The two biggest differences are:
(1) In IPv4, RFC 1918 addresses are used for isolated
networks and/or networks behind NAT boxes. So, it is
uncommon for a single node to have both RFC 1918 addresses
and IPv4 global addresses, and there is typically no need for
applications to choose between them (either for connection
establishment, or when deciding what addresses to transmit to
peers, etc.). However, in IPv6, it was expected that many
nodes would have both global and site-local addresses, and
that site-local addresses would be used on globally-connected
networks.
You are mixing cause and effect. In IPv4 the vast majority of nodes are
limited to a single address at a time. Network managers that want some
of their nodes in private space find it simpler to also put the nodes
with public access in the same space rather than deploy multiple subnets
to each office and route between them. With SL, we fixed that so the
private and public nodes can share the same segment and talk to each
other without requiring a router.
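The 1918 and SL prefixes being argued about here can be made concrete. A minimal Python sketch (illustrative only, using the standard ipaddress module) of the scope check that matters to the filtering router rather than to the application:

```python
import ipaddress

# The well-known limited-scope ranges named in this thread.
SITE_LOCAL = ipaddress.ip_network("fec0::/10")      # IPv6 site-local
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_limited_scope(addr: str) -> bool:
    """True if addr falls inside a well-known private/site-local range."""
    a = ipaddress.ip_address(addr)
    if a.version == 6:
        return a in SITE_LOCAL
    return any(a in net for net in RFC1918)
```

An application that, as Tony suggests, treats every address uniformly never calls anything like this; only the operator configuring the border filter does.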
During the IPV6 meeting in SF, we did discuss several options
for limiting the use of site-local addressing, but all of
those options had some sorts of problems associated with them.
So rather than address any problems, the needs of the (mostly absent)
network managers were simply dismissed as irrelevant. Site-local is
nothing more than a well-known prefix for filtering. Filtering will
happen in any case, so even if you don't call it that, there will be
addresses with a scope of applicability that is limited to the network
manager's perception of his site.
(2) RFC 1918 addresses are most commonly used behind NATs. In
this case, there is a middle box that performs translation of
those addresses into global addresses at the site boundary,
both in IP headers and at the application layer (through
ALGs). In IPv6, we hope to avoid NAT. Site-local addresses
were expected to be used on globally connected networks
without any translation.
Why do you continue to equate SL with NAT?
Packets to/from site-locals (in the IP headers) would have
been dropped, but there was no explanation of how leaking of
IPv6 site-local addresses at the application level would have
been prevented at site borders. One solution would have been
to use some ALG-equivalents in site-border routers that
either translated or dropped the traffic, but that wasn't really
acceptable. Instead,
the assumption seemed to be that applications just wouldn't
include site-local addresses at the application layer in any
packets that might go outside of the site. This required
some sort of address selection logic in applications.
It doesn't require address selection rules or ALGs. The only way an
application should receive an AAAA record with an SL is if it is in the
same address zone as the target. The continual FUD about leaking at the
borders will not be resolved by changing the prefix. Filtering will
exist, prefixes will have a limited zone of applicability, and
applications that insist on passing addresses will leak those in a way
that causes application failure.
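The "same address zone" rule can be sketched as address selection of the kind specified in RFC 3484 (Default Address Selection for IPv6). This is a much-simplified, hypothetical reduction to site vs. global scope, not the full rule set:

```python
import ipaddress

SITE_LOCAL = ipaddress.ip_network("fec0::/10")

def scope(addr: str) -> str:
    # Simplified: distinguish only site-local from global.
    return "site" if ipaddress.ip_address(addr) in SITE_LOCAL else "global"

def order_destinations(dests, source):
    # Prefer destinations whose scope matches the source address's scope,
    # so an SL destination is only chosen by a node with an SL source.
    src = scope(source)
    return sorted(dests, key=lambda d: scope(d) != src)
```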
If one believes that a split-DNS is reasonable to build and deploy
(since many exist, it seems self-evident),
There is a difference between believing that split DNS is
reasonable to deploy and believing that it is a good
architecture for the Internet. While folks may be quite
capable of deploying split DNS, it results in a more brittle
and complex Internet architecture, and I don't think that we
should build an IPv6 addressing architecture that requires it.
What we should not do is stick our heads in the sand and believe that
simply because we don't want to have limited scope addresses they will
magically disappear. Rather than force people to create a bunch of
ad-hoc solutions to the problem, we should in fact provide an
architected approach that creates a level of consistency (actually we
have, but some want to see it deprecated).
The place SL starts to have trouble is when a multi-party app does
referrals based on obtained addresses rather than names.
Since the app
can't know which parties are on which side of the routing filter, it
can't pass the correct addresses around.
Exactly. There are many of these applications defined within
the IETF, by other standards bodies, and/or developed by
private enterprises. In fact, the applications area folks
assure me that there are more of these types of applications
deployed than there are simple client-server applications
(that was news to me). IETF applications that fall into this
category include FTP, SIP and (in some uses) HTTP.
And they will continue to fail when the network administrator puts in
routing filters, only nobody will be able to figure it out because we
removed the hint of a well-known prefix.
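The referral failure traded back and forth here can be reduced to a toy model (the function and its arguments are hypothetical, for illustration; real protocols such as FTP and SIP carry the address literal inside protocol messages):

```python
import ipaddress

SITE_LOCAL = ipaddress.ip_network("fec0::/10")

def referral_usable(referred_addr: str, peer_in_same_site: bool) -> bool:
    """Toy model: a site-local literal passed in a referral is only
    reachable when the receiving peer is inside the same site boundary."""
    if ipaddress.ip_address(referred_addr) in SITE_LOCAL:
        return peer_in_same_site
    return True  # a global address works regardless of the peer's location
```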
(One could argue that if it
passed a name then the split-DNS would return the correct address to
the querier based on his location, but that frequently gets shouted
down based on the unreliability of DNS.)
Maybe... There has been a great deal of reticence from
application developers to rely on DNS look-ups for this type
of referral, and it is not all based on DNS reliability.
There are many, many nodes that either do not have a DNS
entry or do not know their own DNS name, and many
applications need to work on those nodes.
Rather than fix the problem, shoot the feature that exposes it ...
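For reference, split DNS of the kind discussed above is usually built with resolver views. A hedged sketch in BIND-style syntax (the zone name, file paths, and prefixes are illustrative, not from this thread):

```
// Internal clients see the site-local AAAA records; everyone else
// sees only the global addresses.
view "internal" {
    match-clients { fec0::/10; 10.0.0.0/8; };
    zone "example.com" { type master; file "internal/example.com.zone"; };
};
view "external" {
    match-clients { any; };
    zone "example.com" { type master; file "external/example.com.zone"; };
};
```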
It is also possible that one of the participants is only accessible
via the private address space, so there is a failure mode where some
participants can see each other while others can't. This will always
be true, and has nothing to do with the well-known prefix FEC0::.
True. Any firewall can create this situation. I do
understand that folks will use firewalls and create private
address spaces in IPv6. Hey, I even think that folks may
deploy IPv6<=>IPv6 NAT, because there is really nothing we
can do to stop them.
However, I don't think that we should design these sorts of
borders into the IPv6 addressing architecture.
The borders exist. Either we create a tool that allows people to easily
manage which nodes are on which side, or we invite chaos. Chaos existed
before private space in IPv4, and if we remove SL, it will return to
IPv6.
As you know, I was in favor of setting aside a prefix
(FEC0::, in fact) for use as private address space (either on
disconnected networks, or behind NATs), but the consensus of
the folks in the IPv6 WG meeting was to deprecate that prefix
altogether. There were several compelling arguments from
operators and others that we don't need a special prefix for
disconnected sites... Disconnected sites will need to
renumber when they get globally connected, anyway, and ISPs
already have the proper filtering in place to prevent
mistakes from affecting the global routing tables, etc.
Sites would not need to renumber internal-use nodes when connecting, so
that is a bogus argument. Also, if ISPs have such spectacular filtering
in place, why do we continue to periodically see 1918 space in the
global announcements? The fact that it is an identifiable space that
everyone knows to filter limits the damage, so how does one detect the
failure of filters for prefixes that aren't well-known?
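Tony's point about detectability can be illustrated: a leak into a well-known range is mechanically checkable, while a leak of an arbitrary non-routed prefix is not. A small Python sketch (the helper and its input format are assumptions, not real BGP tooling):

```python
import ipaddress

RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def leaked_private(announced: str) -> bool:
    """True if an announced prefix falls inside well-known private space.
    Only possible because everyone knows which ranges to filter."""
    p = ipaddress.ip_network(announced)
    return any(p.version == net.version and p.subnet_of(net)
               for net in RFC1918)
```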
One reason that some people like private space is that they don't have
to expose to their competitors what internal networks they are
deploying and which office is coordinating that. If they are suddenly
required to register for public space for every internal-use network,
they are more likely to pick random numbers than tip off the
competition. What they want is space that, for all intents and
purposes, looks to apps like global space, but that they don't have to
expose, that they know will be filtered at the border, and that is
backed up by a filter at the ISP. So for these purposes there is no
need to treat SL as a special case.
For this purpose, there is no need to have site-locals at all...
The organization can just get a /48 from a provider,
In whose imaginary universe? Unless they are a customer this won't
happen.
ask the
provider to route part, all or none of it, and use any
private (non-routed) parts however they like.
Refer to previous question about detecting failures in route filters.
I doubt that
there are many organizations (other than the NSA, perhaps)
that are afraid that their competition will know how many
subnets they have at the granularity of 2^16... (Ooh, IBM
just asked for another /48 from their provider, what can we
infer from that?)
If that happens to be from a small sales office, one could infer that
they will be moving a major development effort there. If it happens to
be in a country they currently don't have developers in, one could infer
they are shifting people around. Just because it doesn't make sense to
you, doesn't mean it is an invalid concern by the managers of real
networks out there.
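For what it's worth, the 2^16 granularity mentioned above is just the count of standard /64 subnets inside one /48:

```python
# A /48 carved into standard /64 subnets yields 2^(64-48) of them.
subnets_per_48 = 2 ** (64 - 48)
print(subnets_per_48)  # 65536
```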
We need to get past the arguments that private space == NAT, because
use of private space predates NAT, and its only relationship is that
it facilitated NAT as an address-preservation tool.
I don't think that anyone is making the argument that private
space == NAT.
You did in (2) above. Private space is about filtering, and filtering
will exist no matter what the IETF does. We can either put the private
space in an identifiable place, or pay the price of the resulting chaos.
Tony