On Wednesday, Apr 30, 2003, at 00:41 Europe/Amsterdam, Peter Deutsch
wrote:
How is answering any of these questions going to help us? "Oh, NATed
traffic is 7% and not 5%! You are right we should have site local
addresses then!"
Well, I've a couple of responses to that.
One, of course, is that if NAT traffic is really only 5%, and it
happens to be falling over time, then maybe we should stop spending so
much time arguing about it. :-)
Either we support NAT or we don't. The actual usage of NAT at any
given point in time really doesn't tell us which of those is the more
reasonable position. It is not unheard of to support something that
only 5% or less of the users need.
(Note, if I really wanted to be excommunicated from this list, I could
have done a s/NAT/IPv6/ on the above paragraph ;-)
By this reasoning there is no point in inventing something new because
at the time of invention deployment is by definition 0.
IPv4 address space is running out. So the IETF did the responsible
thing and developed a new protocol that wouldn't have this problem. If
people then fail to adopt this protocol, well, that's their problem.
Here's a quick sampling of statements presented over the past few days
to support the two sides of the debate:
- "there are more rfc1918 end hosts than there are non-rfc1918
end hosts,
Ok, this sounds like hyperbole to me.
and the uses being made of rfc1918 aren't limited
to "couldn't get enough unique address space" situations."
- "NAT is most often used to extend a single address to
cover multiple systems in a home or small office
environment. For that environment, an IPv6 /48
(without site-locals) would suffice to replace NAT."
- "NAT is used almost everywhere (perhaps outside the US)
- almost everywhere."
Now, these are all very interesting, but what do they prove? Yes, we
know NAT _is_ being used in IPv4, and yes, we know you _can_ do without
it given enough public address space.
- "When you are talking about tens of millions of customers,
it's not feasible to give each a subnet even if you have
the space to do it. IMHO, most customers will be placed
on a /64 that's shared across hundreds of customers, similar
to IPv4 common practice."
This is actually a good point, although this may not seem obvious to
non-ops people. Today, a dial-up concentrator usually has an address
range that is used to assign addresses to people who dial in. That
means at most a handful of routes per dial-up concentrator in the
interior routing protocol. If everyone has their own /48, that means a
route in the IGP for each customer that's online. There are no hard and
fast rules about how many routes you can have in an IGP, but somewhere
between 10k and 1M you run into trouble. I hate to offer opinions
without facts to back them up, but I'm pretty sure there are dial-up
ISPs with closer to 1M than 10k ports.
However, assigning each customer an IPv6 /64 the same way they now get
an IPv4 /32 should be workable.
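The arithmetic behind this can be sketched quickly. The figures below
are assumptions for illustration (they are not numbers from this
thread), but they show how the two addressing choices scale:

```python
# Back-of-the-envelope IGP route counts for a large dial-up ISP.
# Assumed figures: 1,000,000 dial-up ports, spread over
# concentrators that terminate 10,000 ports each.
ports = 1_000_000
ports_per_concentrator = 10_000
concentrators = ports // ports_per_concentrator  # 100

# IPv4 today, or IPv6 with a shared /64 per concentrator: each
# concentrator advertises one aggregate prefix, so the IGP carries
# one route per concentrator.
routes_shared_prefix = concentrators

# IPv6 with a routed /48 per customer: every customer that is
# online injects its own prefix into the IGP.
routes_per_customer_48 = ports

print(routes_shared_prefix)    # 100
print(routes_per_customer_48)  # 1000000
```

With per-customer /48s the IGP load scales with the number of online
customers rather than the number of concentrators, which lands a big
ISP well past the trouble zone; with a shared prefix per concentrator
it stays trivial.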
- "While it is true that address conservation isn't that
significant an issue, the whole issue of provider
independence (or lack thereof) continues to exist."
What is your problem with this? Seems like a valid statement to me.
- "One of the biggest costs in renumbering is the
disruption it causes. The actual cost of editing
the files, etc, is trivial by comparison."
Some figures would be good here, but from my experience this doesn't
sound too far off.
- "Even RFC1918 addresses get connected sometimes through
corporate mergers. It would work better if organizations
would choose a random set of subnets from 10/8, so the
chance of overlap is minimized."
Ok, this is something I'd like to see. My gut says this is too simple,
but maybe not...
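The idea is at least testable as a birthday problem. In the sketch
below, the /16 granularity and the eight-subnets-per-organization
figure are my own illustrative assumptions, not anything claimed in
the thread:

```python
# Birthday-problem check on picking random subnets from 10/8.
from math import comb

TOTAL = 256  # number of /16 subnets inside 10/8


def overlap_probability(a, b, total=TOTAL):
    """Chance that two organizations, each picking its /16s uniformly
    at random (without replacement) from 10/8, share at least one.
    P(no overlap) = C(total - a, b) / C(total, b)."""
    return 1 - comb(total - a, b) / comb(total, b)


# Two mid-sized networks with 8 random /16s each:
p = overlap_probability(8, 8)
print(f"{p:.1%}")  # 22.7%
```

Even at this modest size the collision chance is nearly one in four,
which supports the gut feeling that random picks from 10/8 alone are
too simple once networks grow beyond a handful of subnets.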
- "They [ISPs] charge extra for two hosts because it
is assumed two hosts consume more bits than one, not
because a second IPv4 address is hard to come by."
This one would also be interesting to put into numbers. If this is
really true we can expect a /64 to be 2^64 times as expensive as a /32
in IPv4...
Now, it may well be that *all* of the above statements are true, but
they don't help me because I can't tell, from the statements alone,
which ones are actually *relevant* to the debate.
I have to agree with you there.
So, just as in your
somewhat offhand use of "7% versus 5%" above, it all seems a little bit
like the statement that "46 percent of all statistics are made up".
The 7% and 5% are definitely made up.
The current state of IPv6 deployment has very little bearing on future
IPv6 deployment. Just look back 20 years in IPv4. I don't think any of
the problems we have today could have been predicted by looking at the
network then.
Well, this is a good example where something may be true, but not
relevant.
The statement above may not be directly relevant to the discussion, but
it should disprove your assumption that IPv6 deployment today is
relevant.
If, after all this "Sturm und Drang", IPv6 is not yet growing
faster than overall network traffic, those of us who remember X.500 may
conclude that it's not going to be relevant to me within an event
horizon that I need to care about, so I might conclude that I can
ignore
it (and by extension, arguments predicated on its success).
At some point this argument is going to stick, but at this point it is
too soon to tell whether current low levels of v6 deployment are
because everyone is taking their time or because it's never going to
amount to anything.
Now, this
may get me excommunicated from this list, but it may be the right thing
for me to do. How can I tell from this thread? I could be persuaded to
pay attention to such arguments by some numbers.
When I connected to the 6bone a year or two ago, all I could do over
IPv6 was traceroute from a server. Today, I browse the web on a fully
IPv6-enabled host with mostly IPv6-enabled software so I get to see
some actual web content over IPv6. Also, I regularly use IPv6 for other
applications. It's still not much compared to IPv4, but the progress
has been remarkable.
Another observation: there appear to be at least two major schools of
philosophy at work here (time for another random analogy for you
Science majors to go look up on Google... ;-)
Maybe you should pick the rationalists versus the empiricists for your
next example...
Now, if "there are more rfc1918 end hosts than there are non-rfc1918
end
hosts" and "NAT is most often used to extend a single address to cover
multiple systems in a home or small office" then I might be inclined to
give more weight to arguments which make the network a better place for
the Thoreaus of the world (or is that "Bloom County"?), but if the most
important issues for our future are really going to be about supporting
large campus installations, it might drive the trade-offs in a
different
direction. There Are No Free Lunches, so knowing what the target is
might help us all collectively take better aim....
If I buy ADSL or cable service I get a single IPv4 address unless I pay
a lot extra. Since I have more than one computer, I am forced to use
RFC 1918 space. I wouldn't want anyone to pervert my lack of choice
into a position supporting NAT in IPv6.
I wouldn't presume to know why people use NAT because I'm not one of
them. (Beginning to feel like a minority.)
Well, if anyone's going to argue against them, shouldn't they at least
understand why they're being deployed?? ;-)
Obviously I can think of one or two reasons. But when I see NAT, I see
problems. Some people seem to actually like NAT. If they can help me
understand why, that could very well be valuable information.
Again, just to illustrate my thesis, in the above response you seem to
assume:
- network merging happens often enough that it is a significant
problem that must be addressed at the architectural level
(that is, here at the IETF);
- it costs a lot of time and money to renumber.
If either of these assertions is not true, you have to go back to the
beginning and start again.
I know the latter is true from personal experience. I've also come
close enough to the former to believe this is a real problem, but I
have nothing hard there.
But if we're going to attach dollar values to one end of the argument,
we need to do it everywhere. That's the only way to be able to
determine how we can make standards that cost people the least amount
of money. But nobody can predict the future, so I don't think this will
work in practice.