Ah ... famous last words. I feel confident that similar words were said
when the original 32-bit address scheme was developed:
"Four billion addresses ... that's more than one computer for every person!"
"Only a few companies are ever going to have more than a few computers ...
just give a Class A to anyone who asks."
I wasn't there, but I expect it would have sounded even more preposterous
for someone to have said: "I'm absolutely positive that this Internet thing
will reach to nearly everyone on the planet in a couple of decades, and
therefore we need to make sure it has many times more than 32 bits of
address space" even though that's what eventually happened.
but just because it happened once doesn't mean that it will happen again.
we do well to learn from the past, but the past doesn't repeat itself.
it often seems to be the case that if you design for the long
term, what you get back isn't deployable in the near term because
you've made the problem too hard. and if you design for the near
term, what you get back will break in the long term. but at least
you get somewhere with the latter approach - the fact that we got
a global Internet out of IPv4 demonstrated to people that the
concept was viable.
today's design constraints aren't the same as tomorrow's. with
today's Internet a lack of address space is a big problem. with
IPv6 there's a considerable amount of breathing room for address
space. address space shortage is just one of many possible problems.
as long as the network keeps growing at exponential rates we are
bound to run into some other major hurdle in a few years. it might
be address space but the chances are good that before we hit that
limitation again we will run into some other fundamental barrier. we
can either try to anticipate every possible hurdle that the Internet
might face or we can concentrate on fixing the obvious problems now
and wait for the later problems to make themselves apparent before
trying to fix them. if we try to anticipate every major hurdle,
we will never agree on how to solve all of those problems, and the
Internet will bog down to the point that it's no longer useful.
But the real point here is that even with the wisest allocations imaginable, we
would still run out much, much faster than the total number of addresses in
the address space might lead one to believe. And that is because we have to
_predict_ how addresses will be used, and we cannot easily change the
consequences of those predictions.
no, that's just bogus. on one hand you're saying that we cannot predict
how addresses will be used, and on the other hand you're saying that you
can definitely predict that we'll run out of IPv6 addresses very soon.
Ten years, I'd say.
right now you're just pulling numbers out of thin air. you have yet to
give any basis whatsoever to make such a prediction credible.
not really. IP has always assumed that address space
would be delegated in power-of-two sized "chunks" ...
Aye, there's the rub. It has always been necessary to _assume_ and
_delegate_ address space, because the address space is finite in size.
wrong. you need to make design assumptions about delegation points, and
delegate portions of address space, even for variable length addresses
of arbitrary size.
... at first those chunks only came in 3 sizes (2**8,
2**16, or 2**24 addresses), and later on it became possible
to delegate any power-of-two sized chunk.
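for concreteness, a quick sketch (Python, purely illustrative) of
those three classful chunk sizes:

```python
# the three original classful chunk sizes: Class A, B, and C
for name, host_bits in (("Class A", 24), ("Class B", 16), ("Class C", 8)):
    print(f"{name}: 2**{host_bits} = {2 ** host_bits:,} addresses")
```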
But by then, a lot of the biggest chunks were already in use.
true, several of the class A blocks were already in use by then.
but initial allocations of IPv6 space are much more conservative.
... but even assuming ideally sized allocations, each
of those chunks would on average be only 50% utilized.
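a small sketch (Python; the sample demand figures are made up) of why
power-of-two chunks leave space unused:

```python
import math

def allocated_chunk(hosts_needed):
    # smallest power-of-two chunk that covers the demand
    return 2 ** math.ceil(math.log2(hosts_needed))

for need in (100, 300, 1000, 5000):
    chunk = allocated_chunk(need)
    print(f"need {need:>5} -> chunk of {chunk:>5}, {need / chunk:.0%} utilized")
```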
Right. So the only solution is to get rid of the need to allocate in
advance in the first place.
no. even with variable length addresses you want to exercise some
discipline about how you allocate addresses. otherwise you end up
with some addresses being much longer than necessary, and this creates
inefficiency and problems for routing.
allocating space in advance might indeed take away
another few bits. but given the current growth rate
of the internet it is necessary.
Only with a fixed address space.
nope. even phone numbers are allocated by prefix blocks.
the internet is growing so fast that a policy of
always allocating only the smallest possible chunk
for a net would not only be cumbersome, it would result
in poor aggregation in routing tables and quite
possibly in worse overall utilization of address space.
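the aggregation point can be illustrated with Python's standard
ipaddress module (addresses taken from the documentation range,
chosen only for illustration):

```python
import ipaddress

# two adjacent /25s carved from the same /24 collapse into a single route;
# minimal, scattered allocations would prevent this kind of merging
routes = [ipaddress.ip_network("192.0.2.0/25"),
          ipaddress.ip_network("192.0.2.128/25")]
print(list(ipaddress.collapse_addresses(routes)))
# a single 192.0.2.0/24 entry replaces both
```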
Exactly ... but that's the magic of the variable address scheme. You only
have to allocate disparate chunks in a fixed address scheme because the size
of each chunk is limited by the length of an address field.
no, there are lots of other reasons for doing it. you seem to be
forgetting that routing hierarchy doesn't necessarily follow
address hierarchy. and that the point of a network protocol is
to get payload, rather than addresses, from one point to another.
But if the address field is variable, you can make any chunk as big
as you want.
actually, no you can't. because addresses still have to fit within
the minimum size of a datagram on the network and have room left
over for payload.
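some rough arithmetic (Python; the longer address lengths are
hypothetical) shows how quickly long addresses eat into a
minimum-size datagram:

```python
# IPv6's fixed header is 40 bytes: 8 bytes of non-address fields plus
# two 16-byte addresses; the minimum IPv6 link MTU is 1280 bytes
MIN_MTU = 1280
NON_ADDRESS_HEADER = 8

for addr_bytes in (4, 16, 64, 256):
    header = NON_ADDRESS_HEADER + 2 * addr_bytes
    print(f"{addr_bytes:>3}-byte addresses: header is "
          f"{header / MIN_MTU:.1%} of a minimum-size datagram")
```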
Quite the contrary, the regional registries seem to
share your concern that we will use up IPv6 space too
quickly and *all* of the comments I've heard about the
initial assignment policies were that they were too conservative.
No matter how conservative they are, the finite length of the address field
will eventually cause problems, and much sooner than anyone thinks.
eventually I can buy. but your predictions about the specific
timeframe have no substance.
IPv6 space does need to be carefully managed, but it
can be doled out somewhat more generously than IPv4 space.
And in 2030:
In 2030 (if I'm still doing this stuff) I will be much more concerned
about the overflow of UNIX time in 2038.
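the 2038 rollover is easy to check (Python):

```python
from datetime import datetime, timezone

# a signed 32-bit time_t counts seconds from 1970 and tops out at 2**31 - 1
limit = 2**31 - 1
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```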
First of all, having too fine a granularity in allocation
prevents you from aggregating routes.
Only with a fixed address length.
variable length addresses with no constraints on address growth
have their own set of routing problems.
this is only amazing to those who haven't heard of Moore's law.
Moore's law isn't the cause of the problem. The problem isn't created by
advances in hardware or software or network capacity; it is created by the
inevitable impossibility of knowing the future with certainty.
but this same impossibility means that we do not know whether we should
put today's energy into making variable length addresses work efficiently
or into something else. so we made a guess - a design compromise -
that we're better off with very long fixed-length addresses because
fast routing hardware is an absolute requirement, and at least today
it seems much easier to design fast routing hardware (or software)
that looks at fixed offsets within a packet, than to design hardware or
software that looks at variable offsets.
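the contrast can be sketched in a few lines of Python (both header
layouts here are invented for illustration, not any real protocol):

```python
# fixed-length scheme: addresses sit at constant offsets, so a router
# can extract them with two constant slices (or fixed-offset hardware taps)
fixed_packet = bytes(8) + bytes(16) + bytes(range(16))
src = fixed_packet[8:24]
dst = fixed_packet[24:40]

# variable-length scheme: each address carries a length prefix, so the
# parser must walk the header before it knows where anything is
def read_address(buf, offset):
    length = buf[offset]  # hypothetical 1-byte length prefix
    end = offset + 1 + length
    return buf[offset + 1:end], end

var_packet = bytes([4]) + bytes(4) + bytes([16]) + bytes(16)
src_v, off = read_address(var_packet, 0)
dst_v, _ = read_address(var_packet, off)
print(len(src), len(src_v), len(dst_v))  # 16 4 16
```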
now it might turn out that in a few decades this will cause an address
space shortage. but that doesn't mean that we made a wrong decision
because in today's world having fixed offsets for addresses within
a packet (and thus fast routing) is more important than having arbitrarily
long addresses.
the biggest problem with IPv6 is the difficulty in getting it deployed.
and IPv6 has enough barriers to deployment already without variable
length addresses making it harder to route than IPv4.
fwiw, phone numbers do in fact have a fixed maximum
length which is wired into devices all over the planet ...
Yes, but it's way easier to make a field larger than it is to build and
implement a new protocol.
it depends on how many places have that length limitation wired-in.
we might have been able to make the IPv4 address larger except that every
application on the net assumed fixed-length addresses.
... it is not much easier to increase the overall
length of phone numbers than it is to make IP addresses longer.
But in a variable-length system, you can make both ends variable.
no you can't, because you still need globally scoped addresses, and
you have the incremental upgrade problem. if you change the
root of the address space you have to do it for everything at once.