
RE: IPv6: Past mistakes repeated?

2000-04-24 20:15:00
A brief history lesson...

There was some concern about a 32-bit address space.  MIT-LCS
proposed 48- (or 64-)bit addresses, but that was coupled with a
reduction of the TCP sequence number to 16 bits.  After some
discussion, we settled on 32 bits based on the computing
resources available at the time.  At that time, there was no
separate IP header, only addressing fields in the TCP header.

Around that time, the ARPANET had recently scaled up from
8-bit host addresses to 24 bits.  It seemed unlikely that anyone
would build more than 100 ARPANET-sized networks, with their huge
IMPs and PDP-10 mainframe computers (and UCLA's 360).  48-bit
Ethernet addressing wasn't around yet; otherwise we probably
would have picked 64 bits just to not have to deal with ARP.
This was before Moore's law; Intel had just released the
8008 microprocessor.  There were fewer than 100,000 large
commercial buildings (>500,000 sq. ft) in the world; it seemed
that the number of class C addresses was sufficient.  For
better or worse, the size of the address field went unchanged
from the original version of TCP (before IP was a separate
header).

We were caught short by a technology paradigm shift coming
from semiconductor physics.  If computers rode the same 
technology advancement curve as cars, we wouldn't be having
an address space problem now.

We cannot predict the next big technology paradigm shift.
The real lesson to learn from the IPv4-to-IPv6 transition
(which I think was described by Knuth in regard to conversion
of computer instruction sets, but I can't find the reference)
is the cost of delaying conversion.  The longer you delay
the inevitable, the more installed base you have to
convert and the exponentially higher the resulting cost.

The excellent engineering to keep v4 running was too good
and has slowed the inevitable movement to larger address
spaces.  While there are no guarantees that 128 bits won't
be exhausted sometime in the future, we do know a lot more
about address space management and allocation than when Jon
started handing out net numbers.  The Internet has always
been a balancing act between being ready for the future
and getting something working today.  IPv6 will not change
this, so we need to be prepared for change.  The next "big"
problem facing the Internet that will require a broad-scale
swap-out of software probably won't be address-space related.

We need to move forward with IPv6, both by deploying it in
the "core" and by setting a time-frame after which non-IPv4-
compatible addresses will be assigned.  Unless there is a
clear reason to move, no one wants to change software just
to change.  Once IPv6 is in the major backhaul carriers, ISPs
can roll out improved services based on IPv6, which will be
the real reason end-users upgrade.  Seems like a real
leadership vacuum here...

Jim

-----Original Message-----
From: Keith Moore [mailto:moore@cs.utk.edu]
Sent: Monday, April 24, 2000 2:36 PM
To: Anthony Atkielski
Cc: ietf@ietf.org
Subject: Re: IPv6: Past mistakes repeated? 


> What I find interesting throughout discussions that mention IPv6 as a
> solution for a shortage of addresses in IPv4 is that people see the
> problems with IPv4, but they don't realize that IPv6 will run into the
> same difficulties.  _Any_ addressing scheme that uses addresses of
> fixed length will run out of addresses after a finite period of time,

I suppose that's true - as long as addresses are consumed at a rate
faster than they are recycled.  But the fact that we will run out of
addresses eventually might not be terribly significant - the Sun will
also run out of hydrogen eventually, but in the meantime we still find
it useful.

> and that period may be orders of magnitude shorter than anyone might
> at first believe.

it is certainly true that without careful management IPv6 address
space could be consumed fairly quickly.  but it looks to me like,
with even moderate care, IPv6 space can last for several tens of years.

> Consider IPv4.  Thirty-two bits allows more than four billion
> individual machines to be addressed.

not really.  IP has always assumed that address space would be
delegated in power-of-two sized "chunks" - at first those chunks only
came in 3 sizes (2**8, 2**16, or 2**24 addresses), and later on it
became possible to delegate any power-of-two sized chunk.  but even
assuming ideally sized allocations, each of those chunks would on
average be only 50% utilized. 

so every level of delegation effectively uses 1 of those 32 bits, and
on average most parts of the net are probably delegated 4-5 levels
deep.  (IANA/regional registry/ISP/customer/internal). so we end up
effectively not with 2**32 addresses but with something like 2**27 or
2**28.  (approximately 134 million or 268 million)
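Keith's arithmetic here is easy to reproduce; a short Python sketch (the
one-bit-per-level cost and the 4-5 level depth are the assumptions from
the paragraph above, not measured values):

```python
# Effective address space under hierarchical delegation, using the
# back-of-the-envelope model above: each power-of-two delegation level
# runs at ~50% utilization, i.e. costs roughly one address bit.

def effective_addresses(total_bits: int, delegation_levels: int) -> int:
    """Usable addresses after losing ~1 bit per delegation level."""
    return 2 ** (total_bits - delegation_levels)

# IPv4, delegated 4-5 levels deep (IANA/registry/ISP/customer/internal):
print(effective_addresses(32, 5))  # 134217728  (2**27, ~134 million)
print(effective_addresses(32, 4))  # 268435456  (2**28, ~268 million)
```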

(see also RFC 1715 for a different analysis, which when applied to
IPv4, yields similar results for the optimistic case)
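For reference, RFC 1715's measure is an "H ratio": log10 of the number
of addressed objects divided by the address size in bits.  A sketch of
that calculation (the host counts below are illustrative, not figures
taken from the RFC):

```python
import math

def h_ratio(addressed_objects: int, address_bits: int) -> float:
    """RFC 1715 H ratio: log10(objects addressed) / address bits."""
    return math.log10(addressed_objects) / address_bits

# ~10**8 usable IPv4 addresses (the estimate above) gives H = 8/32 = 0.25:
print(h_ratio(10**8, 32))                  # 0.25

# At that same efficiency, 128 bits would address 10**32 objects:
print(10 ** (h_ratio(10**8, 32) * 128))    # ~1e+32
```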

allocating space in advance might indeed take away another few bits.
but given the current growth rate of the internet it is necessary.
the internet is growing so fast that a policy of always allocating
only the smallest possible chunk for a net would not only be
cumbersome, it would result in poor aggregation in routing tables and
quite possibly in worse overall utilization of address space.

but if it someday gets easier to renumber a subnet we might then find
it easier to garbage collect, and recycle, fragmented portions of
address space.  and if the growth rate slowed down (which for various
reasons is possible) then we could do advance allocation more
conservatively.
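Recycling fragmented space amounts to coalescing freed power-of-two
blocks back into larger ones, buddy-allocator style.  A minimal sketch
(the (base, prefix-length) representation is my own, just for
illustration):

```python
def coalesce(free_blocks):
    """Merge free power-of-two address blocks, buddy-allocator style.
    Each block is (base, prefix_len); its size is 2**(32 - prefix_len).
    Two blocks are buddies when they differ only in the bit just below
    their common prefix, and can merge into one block twice the size."""
    blocks = set(free_blocks)
    merged = True
    while merged:
        merged = False
        for base, plen in list(blocks):
            if (base, plen) not in blocks:
                continue  # already consumed earlier in this pass
            size = 1 << (32 - plen)
            buddy = (base ^ size, plen)
            if buddy in blocks:
                blocks.remove((base, plen))
                blocks.remove(buddy)
                blocks.add((min(base, base ^ size), plen - 1))
                merged = True
    return blocks

# Two adjacent /25s recombine into the /24 they were split from:
print(coalesce({(0x0A000000, 25), (0x0A000080, 25)}))  # {(167772160, 24)}
```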

> It should be clear that IPv6 will have the same problem.  The space
> will be allocated in advance.  Over time, it will become obvious that
> the original allocation scheme is ill-adapted to changing requirements
> (because we simply cannot foresee those requirements).  Much, _much_
> sooner than anyone expects, IPv6 will start to run short of addresses,
> for the same reason that IPv4 is running short.  It seems impossible
> now, but I suppose that running out of space in IPv4 seemed impossible
> at one time, too.

IPv6 allocation will have some of the same properties as IPv4
allocation.  We're still using power-of-two sized blocks, and we'll
still waste at least one bit of address space per level of delegation.
It will probably be somewhat easier to renumber networks and recycle
addresses - how much easier remains to be seen.

OTOH, I don't see why IPv6 will necessarily have significantly more
levels of assignment delegation.  Even if it needs a few more levels,
losing 6 or 7 bits out of 128 total is far less significant than
losing 4 or 5 bits out of 32.
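The proportions are worth spelling out; a tiny sketch (the level counts
are the ones hypothesized above):

```python
# Bits left over after hierarchical delegation, IPv4 vs. IPv6.
ipv4_left = 32 - 5     # ~5 delegation levels in today's IPv4
ipv6_left = 128 - 7    # even allowing a couple more levels for IPv6
print(ipv4_left, ipv4_left / 32)    # 27  0.84375 of the address survives
print(ipv6_left, ipv6_left / 128)   # 121 0.9453125
```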

> The allocation pattern is easy to foresee.  Initially, enormous
> subsets of the address space will be allocated carelessly and
> generously, because "there are so many addresses that we'll never run
> out"

I don't know where you get that idea.  Quite the contrary, the
regional registries seem to share your concern that we will use up
IPv6 space too quickly and *all* of the comments I've heard about the
initial assignment policies were that they were too conservative.
IPv6 space does need to be carefully managed, but it can be doled out
somewhat more generously than IPv4 space.

> and because nobody will want to expend the effort to achieve
> finer granularity in the face of such apparent plenty.

First of all, having too fine a granularity in allocation prevents you
from aggregating routes.  Second, with power-of-two sized allocations
there's a limit to how much granularity you can get - even if you
always allocate optimal sized blocks.
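The granularity limit described here is a rounding effect: the smallest
power-of-two block covering a network's needs can be almost twice as
large as needed.  A quick sketch:

```python
def block_size_for(hosts: int) -> int:
    """Smallest power-of-two block that covers `hosts` addresses."""
    return 1 << (hosts - 1).bit_length()

print(block_size_for(256))   # 256 -- a perfect fit, 100% utilization
print(block_size_for(257))   # 512 -- one host over, ~50% utilization
```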

> This mistake will be repeated for each subset of the address space
> allocated, by each organization charged with allocating the space.

It's not clear that it's a mistake.  it's a tradeoff between having
aggregatable addresses and distributed assignment on one hand and
conserving address space on the other.  and the people doing address
assignment these days are quite accustomed to thinking in these terms.

> If you need further evidence, look at virtual memory address spaces.
> Even if a computer's architecture allows for a trillion bits of
> addressing space, it invariably becomes fragmented and exhausted in an
> amazingly short time.

this is only amazing to those who haven't heard of Moore's law.
(presumably the same set of people who thought DES would 
never be broken)

on the other hand, it's not clear how valid this analogy is for
predicting the growth of the Internet - just because Moore's law (if
it keeps on working) might predict that in a decade we could
eventually have thousands of network-accessible computing devices for
everyone on the planet, doesn't mean that those people would be able
to deal with thousands of such devices.  and there do appear to be
limits to the number of human beings that the planet can support.  and
if by that time the robot population exceeds the human population then
I'm happy to let the robots solve the problem of upgrading to a new
version of IP.

and as for other planets, all kinds of assumptions about the current
Internet fail when you try to make it work at interplanetary
transmission latencies.  so if we do manage to significantly populate
other planets or if we find extraterrestrial species that we want to
network with, we'll have to build a new architecture.  and people are
already working on that.

> The only real solution to this is an open-ended addressing scheme--one
> to which digits can be added as required.

variable length addresses do have some nice properties.  there are
also some drawbacks.

fwiw, phone numbers do in fact have a fixed maximum length which is
wired into devices all over the planet - not just in the phone system
but in numerous computer databases, etc.  it is not much easier to
increase the overall length of phone numbers than it is to make IP
addresses longer.  and once you set a fixed maximum length then it's
just a matter of representation - do you have a variable-length
address field or do you have a fixed-length field with zero padding?
fixed-length fields are a lot easier for routers to deal with.  (and
for similar reasons a lot of software uses fixed-length fields for
phone numbers)

128-bit IPv6 addresses are roughly equivalent to 40 digits, which IIRC
is a lot longer than the maximum size of a phone number under E.164.
(sorry, I don't have a copy handy to check)
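For what it's worth, the numbers check out: E.164 caps an international
number at 15 digits, while the largest 128-bit address needs 39 decimal
digits:

```python
# Decimal digits needed to write the largest 128-bit address, vs. the
# 15-digit E.164 maximum for an international telephone number.
ipv6_digits = len(str(2**128 - 1))
print(ipv6_digits)        # 39
print(ipv6_digits - 15)   # 24 digits longer than E.164 allows
```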

and the means by which IPv6 addresses are being allocated is actually
not so different from the means by which phone numbers are allocated -
the major exception being that IPv6 prefixes are assigned to major
ISPs rather than to geographic regions.  (the latter difference might
affect routing but probably does not affect allocation efficiency).

so I think the bottom line answer to your message is that your concerns
are valid (if perhaps a bit exaggerated) and an allocation mechanism
similar to what you suggest is already in place.

Keith



