
Re: IPv6: Past mistakes repeated?

2000-04-24 15:20:03
From: "Keith Moore" <moore(_at_)cs(_dot_)utk(_dot_)edu>


I suppose that's true - as long as addresses are consumed
at a rate faster than they are recycled.  But the fact that
we will run out of addresses eventually might not be terribly
significant - the Sun will also run out of hydrogen
eventually, but in the meantime we still find it useful.

Ah ... famous last words.  I feel confident that similar words were said
when the original 32-bit address scheme was developed:

"Four billion addresses ... that's more than one computer for every person
on Earth!"

"Only a few companies are every going to have more than a few computers ...
just give a Class A to anyone who asks."

And so now we are running out.

But the real point is this: even with the wisest allocations imaginable, we
would still run out much, much faster than the raw size of the address space
might lead one to believe.  And that is because we have to
_predict_ how addresses will be used, and we cannot easily undo the
consequences of those predictions.  So there are always significant blocks
of unused addresses, even as we run out of addresses elsewhere.

it is certainly true that without careful management
IPv6 address space could be consumed fairly quickly.
but to me it looks like, with even moderate care, IPv6
space can last for several tens of years.

Ten years, I'd say.  And by then, deploying a new addressing scheme will cost
at least a thousand times more than it would today.  Even washing machines
will have to get new programming!

not really.  IP has always assumed that address space
would be delegated in power-of-two sized "chunks" ...

Aye, there's the rub.  It has always been necessary to _assume_ and
_delegate_ address space, because the address space is finite in size.  It
has always been necessary to predict the future, in other words, in a domain
where the future is very uncertain indeed.

... at first those chunks only came in 3 sizes (2**8,
2**16, or 2**24 addresses), and later on it became possible
to delegate any power-of-two sized chunk.

But by then, a lot of the biggest chunks were already in use.

... but even assuming ideally sized allocations, each
of those chunks would on average be only 50% utilized.

Right.  So the only solution is to get rid of the need to allocate in
advance in the first place.
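
(And 50% is about right.  The smallest power-of-two chunk that holds n
hosts is somewhere between n and 2n addresses, so even a perfect allocator
runs between barely half full and completely full.  A toy calculation in
Python -- my own illustration, not anything from the registries:

    def smallest_chunk(hosts):
        """Smallest power-of-two block that can hold `hosts` addresses."""
        chunk = 1
        while chunk < hosts:
            chunk *= 2
        return chunk

    for hosts in (100, 129, 300, 1025, 40000):
        chunk = smallest_chunk(hosts)
        print(f"{hosts:>6} hosts -> chunk of {chunk:>6} "
              f"({hosts / chunk:.0%} used)")

    #    100 hosts -> chunk of    128 (78% used)
    #    129 hosts -> chunk of    256 (50% used)
    #    300 hosts -> chunk of    512 (59% used)
    #   1025 hosts -> chunk of   2048 (50% used)
    #  40000 hosts -> chunk of  65536 (61% used)

And that's with perfect knowledge of demand.  Real allocations have to
guess at growth, which is how you end up at 50% or worse.)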

Think of variable addressing and the needs of the tiny country of Vulgaria.
Vulgaria needs address space.  Okay, so you say, well, Vulgaria is in, say,
Eastern Europe, and all addresses in Eastern Europe begin with 473.  All the
addresses from 4731 to 4738 are taken.  So you add address 4739, which now
means "everyone else in Eastern Europe," and you assign 47391 to Vulgaria.
That's all you have to do.  If Vulgaria wants to further subdivide, it can;
it can assign 473911 to Northern Vulgaria, and 473912 to the rugged
Vulgarian Alps region.  It doesn't matter to anyone except Vulgaria.

Given this, North America doesn't have to change anything at all.  Before,
anything that started with 473 went to Eastern Europe, and that's still the
case.  Some European routers have to smarten up a bit, because now they have
to be aware that 4739 goes to another routing point that handles "all other"
Eastern European countries.  And this new routing point (heck, maybe
Vulgaria will host it, eh?) must know that 47391 goes to Vulgaria, but
nothing more.  Only routers in Vulgaria itself need to care where 473911
goes as compared to 473912.  And only routers in the rugged Vulgarian Alps
need to know that 4739124 goes to Smallville, and 4739126 goes to
Metropolis, both cities nestled there in the Alps.  And since the addressing
scheme is open ended, even if Vulgaria one day has ten trillion computers on
the Net, nothing outside Vulgaria needs to change.
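
Here is the whole Vulgaria story as a toy Python sketch (the digits and
place names are just the made-up ones from above).  Each router keeps a
table for its own level only, and does a longest-prefix match:

    def next_hop(table, address):
        """Longest-prefix match within a single router's table."""
        for length in range(len(address), 0, -1):
            hop = table.get(address[:length])
            if hop is not None:
                return hop
        return None

    # Each router knows only its own level of the hierarchy.
    north_america = {"473": "link to Eastern Europe"}
    eastern_europe = {"4731": "country 1",   # ... through 4738
                      "4739": "link to the catch-all router"}
    catch_all = {"47391": "link to Vulgaria"}
    vulgaria = {"473911": "Northern Vulgaria",
                "473912": "Vulgarian Alps"}

    addr = "47391255"   # some host in the Vulgarian Alps
    for table in (north_america, eastern_europe, catch_all, vulgaria):
        print(next_hop(table, addr))

Adding Vulgaria meant adding one entry to the catch-all table.  The North
American table never changed.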

allocating space in advance might indeed take away
another few bits. but given the current growth rate
of the internet it is necessary.

Only with a fixed address space.

the internet is growing so fast that a policy of
always allocating only the smallest possible chunk
for a net would not only be cumbersome, it would result
in poor aggregation in routing tables and quite
possibly in worse overall utilization of address space.

Exactly ... but that's the magic of the variable address scheme.  You only
have to allocate disparate chunks in a fixed address scheme because the size
of each chunk is limited by the length of an address field.  But if the
address field is variable, you can make any chunk as big as you want.  If
you have addresses of 4739124xx initially (Metropolis only had a few
machines at first), and you run out of addresses after 473912498, you just
make 473912499 point to "more addresses for Metropolis," and start
allocating, say, 4739124990001 through 4739124999998 (you always leave at
least one slot empty so that it can point to "more addresses").
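
If you want to see the mechanics, here is a sketch of that "escape slot"
allocator in Python -- purely my own illustration of the scheme I just
described:

    class OpenEndedAllocator:
        def __init__(self, prefix, width=2):
            self.prefix = prefix   # e.g. "4739124" for Metropolis
            self.width = width     # digits assigned at this level
            self.count = 0
            self.escape = None     # child allocator behind the last slot

        def allocate(self):
            last_slot = 10 ** self.width - 1
            if self.count < last_slot:       # hand out 00..98, keep 99
                addr = f"{self.prefix}{self.count:0{self.width}d}"
                self.count += 1
                return addr
            if self.escape is None:
                # The reserved slot becomes "more addresses": extend
                # the address instead of renumbering anyone.
                self.escape = OpenEndedAllocator(
                    f"{self.prefix}{last_slot:0{self.width}d}",
                    self.width + 2)
            return self.escape.allocate()

    metro = OpenEndedAllocator("4739124")
    addrs = [metro.allocate() for _ in range(101)]
    print(addrs[0], addrs[98], addrs[99])
    # 473912400 473912498 4739124990000

Nobody who already has an address moves; the new, longer addresses hang
off the slot you kept free.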

I don't know where you get that idea.

That's how it happened for IPv4.

Quite the contrary, the regional registries seem to
share your concern that we will use up IPv6 space too
quickly and *all* of the comments I've heard about the
initial assignment policies were that they were too
conservative.

No matter how conservative they are, the finite length of the address field
will eventually cause problems, and much sooner than anyone thinks.

IPv6 space does need to be carefully managed, but it
can be doled out somewhat more generously than IPv4 space.

And in 2030:

"Of course, nobody in 2000 realized that we'd eventually need 1048576
listener addresses for each of our cellphones and wristwatches, even if that
is obvious now.  As a result, we've nearly exhausted the IPv6 space, and we
are looking at a new scheme.  We think that a new fixed address space of 192
bits will surely be all anyone will ever need before the sun turns cold, and
we'll never have this problem again."

First of all, having too fine a granularity in allocation
prevents you from aggregating routes.

Only with a fixed address length.

Look at it this way:  Think of fixed-length addresses as rays streaming out
from the center of a circle.  You allocate rays so that they are adjacent to
facilitate routing.  However, no matter what you do, the time will come when
you just cannot allocate the number of adjacent rays that you require for
some purpose, and at that point, your routing system breaks, or you deny
someone an allocation.

Now think of variable-length addresses as rays also--but in this case, you
have the option of splitting any ray to create more rays.  Now you can
allocate adjacent rays as required, and if you run out of rays, you split a
few to make more, and they are still adjacent.  You can split them as many
times as you want.

Or you can look at it as numbers.  Fixed-length addresses are integers.
Variable-length addresses are real numbers.  You can only have so many
integers between 1 and 100, but you can have an infinite number of real
numbers between those two limits--you just keep adding decimal places.
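
In code, the "keep adding decimal places" trick is one line.  A toy sketch
(mine, not anyone's protocol):

    from decimal import Decimal

    def insert_between(lo, hi):
        """A decimal strictly between lo and hi: there is always one,
        you just add more places."""
        return str((Decimal(lo) + Decimal(hi)) / 2)

    print(insert_between("1", "2"))        # 1.5
    print(insert_between("1", "1.5"))      # 1.25
    print(insert_between("1", "1.0001"))   # 1.00005 ... and so on, forever

Try that with integers and you are stuck as soon as lo and hi are adjacent.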

It's not clear that it's a mistake.  It's a tradeoff
between having aggregatable addresses and distributed
assignment on one hand and conserving address space
on the other.

It's a mistake in that it is imposed by a finite address space.  Eliminate
the finite address space, and this is no longer an issue.  You don't have to
care about aggregating addresses--you just create more, if you need them.

Surely you've seen this in technical manuals: If you need to put something
between section 1 and section 2, you just add a section 1.5.  That's
variable addressing.  But if you limit yourself to fixed addressing, your
only choice is to renumber every section in the manual so that you can insert
a new section 2 (making the old section 2 section 3, and the old section
3 section 4, and so on).

this is only amazing to those who haven't heard of
Moore's law.

Moore's law isn't the cause of the problem.  The problem isn't created by
advances in hardware or software or network capacity; it is created by the
impossibility of knowing the future with certainty.  You cannot
know what a network will look like in the infinite future, so there is no
way to optimally allocate addresses in a fixed address space without
eventually being required to do everything over again.

fwiw, phone numbers do in fact have a fixed maximum
length which is wired into devices all over the planet ...

Yes, but it's way easier to make a field larger than it is to build and
implement a new protocol.  And if you design for variable length, you can
trap anything that exceeds your buffer space and fix it when the time comes
(or before the time comes, if you check ahead).  If you don't design for
variable length, you just rewrite everything.
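
For instance, with a length-prefixed address field, a device can refuse
what it can't yet hold instead of misparsing it.  A sketch, with a made-up
wire format (one length byte, then that many address bytes):

    MAX_SUPPORTED = 16   # this device's current buffer, in bytes

    def read_address(packet):
        length = packet[0]
        if length > MAX_SUPPORTED:
            # Trap it: the protocol still works; this one box just
            # needs a bigger buffer (a patch, not a new protocol).
            raise NotImplementedError(
                f"{length}-byte address exceeds this device's buffer")
        if len(packet) < 1 + length:
            raise ValueError("truncated packet")
        return packet[1:1 + length]

    print(read_address(bytes([4, 10, 0, 0, 1])))   # b'\n\x00\x00\x01'
    # read_address(bytes([32]) + bytes(32))        # raises, by design

The fixed-length version of that function can't even tell you it is in
trouble.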

... it is not much easier to increase the overall
length of phone numbers than it is to make IP
addresses longer.

But in a variable-length system, you can make both ends variable.  You can
thus add length at either end without affecting more than a tiny part of the
network (the part that has to know about the portions of the address that
you've extended).  Routers that look only at the parts that haven't changed
(in the middle) don't require any modifications.
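
A toy model of that: give each router only its own label, and have it
forward on whatever component follows that label.  (The dot-separated
labels and the names here are my own invention for the sketch.)

    def middle_router(address, my_label):
        """Forward on the component after this router's own label,
        ignoring everything before it and everything after."""
        parts = address.split(".")
        return parts[parts.index(my_label) + 1]

    # Three levels today:
    print(middle_router("europe.vulgaria.alps", "vulgaria"))   # alps

    # Grow the address at both ends; the middle router is untouched:
    print(middle_router("earth.europe.vulgaria.alps.smallville.42",
                        "vulgaria"))                           # alps

The only routers that notice the change are the ones at the ends that the
new pieces were added for.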

  -- Anthony