
Re: IPv6: Past mistakes repeated?

2000-04-25 12:14:47
From: "Keith Moore" <moore(_at_)cs(_dot_)utk(_dot_)edu>

I wasn't there, but I expect it would have sounded
even more preposterous for someone to have said:
"I'm absolutely positive that this Internet thing
will reach to nearly everyone on the planet in a
couple of decades, and therefore we need to make
sure it has many times more than 32 bits of address
space"  even though that's what eventually happened.

Well, I suppose it would depend on who was listening at the time.

but just because it happened once doesn't mean that
it will happen again.  we do well to learn from the
past, but the past doesn't repeat itself exactly.

It often repeats itself approximately, though.  And I predict that, since
computer engineers have been underestimating capacity requirements for the
past fifty years, they'll do it yet again this time, and then this
conversation will be repeated a few decades from now.

it often seems to be the case that if you design for
the long term, what you get back isn't deployable
in the near term because you've made the problem
too hard.

I dunno.  I don't think that adding two more digits to year fields in the
1960s would have made any problems too hard.  It was more a question
of people wanting to do things the easy way.  And you can't say that it was
a technological barrier, because they were still making Y2K-related mistakes
into the 1980s and beyond.
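
To make that concrete, here's a toy C sketch of the "century guess" that
every two-digit-year program eventually needed (the pivot year is my own
arbitrary choice, not anyone's standard):

    #include <stdio.h>

    /* A two-digit year can't distinguish 1939 from 2039, so any
       expansion is a guess.  A common patch was a pivot window. */
    static int expand_year(int yy)          /* yy in 00..99 */
    {
        return (yy >= 70) ? 1900 + yy : 2000 + yy;  /* pivot at 1970 */
    }

    int main(void)
    {
        printf("39 -> %d\n", expand_year(39));   /* guesses 2039 */
        printf("70 -> %d\n", expand_year(70));   /* guesses 1970 */
        return 0;
    }

Two more digits in the record would have made the guess unnecessary.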

Also, making things hard is probably the second-most-common mistake of
computer engineers.  First they don't plan for the required capacity, and
then they get so lost in ever-expanding features that they never actually
deploy anything close to what they put in the specs.  This is more a problem
for software engineers than hardware engineers, though; hardware engineers
are constrained by the harsh realities of actually having to build things
that work, whereas software engineers can throw together buggy
approximations and limp along with those indefinitely.

It seems that, in the early days of [insert any major computer development
here], engineers just try to get something to work--anything.  Simplest is
best.  They make the capacity mistake, but they don't make the
software-bloat mistake.  However, ten years later, when they come up with
the "new and improved" version of whatever major development is in question,
not only do they _still_ underestimate capacity (even with past mistakes in
this area staring them in the face), but they slide into the abyss of
featuritis and software bloat, spending their time writing and reviewing and
honing ever-more-complex specifications that nobody will be able to actually
code, debug, and implement before the observable universe reaches thermal
equilibrium.  Sometimes the result is a "temporary" implementation that
actually does the job (even though it still has the capacity problem) and
eventually becomes the permanent standard, with the original specs being
relegated to dusty archives somewhere.

... the fact that we got a global Internet out of IPv4
demonstrated to people that the concept was viable.

Absolutely.  We just underestimated capacity requirements, as usual.

with IPv6 there's a considerable amount of breathing
room for address space.

Why settle for a "considerable amount" (which time will show to be less
considerable than first believed) when a "nearly infinite amount" can be
designed into the project?

address space shortage is just one of many possible problems.

True, but it is much better understood than many of the others.  It's a
classic capacity problem.  And, as in most cases of capacity problems, it's
more a nuisance than something fun to work on, so it doesn't get as much
attention as it probably deserves, compared to the rest.

as long as the network keeps growing at exponential
rates we are bound to run into some other major hurdle
in a few years.

It won't continue to grow at exponential rates, but it will grow in a way
that doesn't match the addressing scheme, and the modes of its future growth
will probably be different from anything we can foresee today.  So the only
option is to allow for the unforeseeable, instead of trying to predict it and
implement around the prediction.

we can either try to anticipate every possible hurdle
that the Internet might face or we can concentrate on
fixing the obvious problems now and wait for the later
problems to make themselves apparent before trying
to fix them.

Address space is a pretty obvious problem.  But as I've indicated, it's not
a very exciting one.

on one hand you're saying that we cannot predict
how addresses will be used ...

Yes.

... and on the other hand you're saying that you
can definitely predict that we'll run out of IPv6
addresses very soon.

No, I cannot predict that.  But consider this:  If a fixed-length address
field is used, and we exceed the capacity of the field, we have a serious
problem.  If a variable-length address field is used, we do not exceed the
capacity of the field, ever.  And if we never exceed the capacity of the
fixed-length field, well, we'll never exceed the capacity of the
variable-length field either, will we?  So a variable-length field is a
win-win scenario--you don't
have to hope that you can predict the future--whereas the fixed-length field
is a win-lose scenario, in which you have to just hope that your predictions
are right.
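
To make the comparison concrete, here's a minimal C sketch of a
length-prefixed address field (the single length octet and the layout are
my own invention, not any proposed wire format):

    #include <stdio.h>
    #include <string.h>

    /* Length-prefixed address: one length octet, then that many
       address octets.  The field grows without a format change;
       a fixed 32-bit field cannot. */
    static size_t write_addr(unsigned char *out,
                             const unsigned char *addr, size_t len)
    {
        out[0] = (unsigned char)len;    /* length prefix (up to 255) */
        memcpy(out + 1, addr, len);
        return 1 + len;
    }

    int main(void)
    {
        unsigned char buf[64];
        unsigned char small_addr[4] = { 192, 0, 2, 1 };
        unsigned char big_addr[16]  = { 0 };

        printf("4-octet address: %zu bytes on the wire\n",
               write_addr(buf, small_addr, sizeof small_addr));
        printf("16-octet address: %zu bytes, same format\n",
               write_addr(buf, big_addr, sizeof big_addr));
        return 0;
    }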

right now you're just pulling numbers out of thin air.

Yup.

you have yet to give any basis whatsoever to make such
a prediction credible.

It doesn't have to be credible; it just has to be possible.  The increment
of work required for a variable address space is not so very great, if it is
carefully thought through, and the potential benefits are enormous.  In
contrast, the increment of work required to fix exhaustion of a fixed
address space is staggering, and no amount of thinking things through will
prevent it, if and when the address space is exhausted.

wrong. you need to make design assumptions about
delegation points, and delegate portions of address
space, even for variable length addresses of
arbitrary size.

Any examples of this?  None come to my mind.  With a variable-length
address, you don't have to care how the entire space is allocated; you just
allocate the part that you are responsible for.  You do not need to predict
all future use with a variable-length address, whereas you _must_ predict
all future use with a fixed-length address.
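
A sketch of what I mean: each authority appends its own label to the prefix
it was handed, and never needs a global plan for the levels below it (the
dotted notation and the names are invented):

    #include <stdio.h>

    /* Each delegation extends the prefix it received; no level has
       to predict how the levels beneath it will carve up theirs. */
    static void delegate(char *out, size_t outsz,
                         const char *prefix, const char *label)
    {
        snprintf(out, outsz, "%s.%s", prefix, label);
    }

    int main(void)
    {
        char region[64], provider[64], host[96];

        delegate(region,   sizeof region,   "earth",  "vulgaria");
        delegate(provider, sizeof provider, region,   "vtelecom");
        delegate(host,     sizeof host,     provider, "line-42");

        printf("%s\n", host);  /* earth.vulgaria.vtelecom.line-42 */
        return 0;
    }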

true, several of the class A blocks were already
in use by then.  but initial allocations of IPv6
space are much more conservative.

With a finite address space, they can never be conservative enough, unless
you discard the inherent routability of the addressing scheme (by assigning
consecutive addresses one at a time on individual demand, for example).
This is because you cannot predict future evolution of the allocation
criteria for the address space, and you cannot expand the space.

no.  even with variable length addresses you want
to exercise some discipline about how you allocate
addresses.

Why?

otherwise you end up with some addresses being much
longer than necessary, and this creates
inefficiency and problems for routing.

In what cases would a variable-length address be longer than necessary?  I
can easily think of allocation schemes that would break the routability of a
fixed-length addressing scheme, but I don't see this problem with
variable-length addresses.

nope.  even phone numbers are allocated by prefix blocks.

Not on a worldwide basis.  The U.S. treats its address space as fixed (which
is one reason why it is now in trouble), but this isn't really true for the
world overall.  If a call is routed to Vulgaria from the U.S., nobody in the
U.S. has to care how Vulgaria allocates its telephone numbers.

you seem to be forgetting that routing hierarchy
doesn't necessarily follow address hierarchy.

It can't, with fixed addresses.  At some point you have to allocate
non-contiguous addresses for the same routing destination, and then your
routing breaks.

With a variable-length address, this is not a problem.
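
To see the breakage: once one destination holds non-adjacent blocks, the
smallest single prefix that covers both also covers bystanders.  A quick C
check, using addresses from the documentation ranges:

    #include <stdio.h>
    #include <stdint.h>

    /* Find the smallest single prefix covering 192.0.2.0/24 and
       192.0.4.0/24: shorten the mask until both fit, then look at
       what else got swept in. */
    int main(void)
    {
        uint32_t a = (192u << 24) | (2u << 8);   /* 192.0.2.0 */
        uint32_t b = (192u << 24) | (4u << 8);   /* 192.0.4.0 */
        int bits = 32;

        while (bits > 0 && ((a ^ b) >> (32 - bits)) != 0)
            bits--;

        uint32_t net = bits ? (a & (0xFFFFFFFFu << (32 - bits))) : 0;
        printf("smallest covering prefix: %u.%u.%u.%u/%d\n",
               (unsigned)(net >> 24), (unsigned)((net >> 16) & 255),
               (unsigned)((net >> 8) & 255), (unsigned)(net & 255),
               bits);
        /* prints 192.0.0.0/21, which also claims 192.0.3.0/24,
           192.0.5.0/24, and so on -- the aggregate now lies. */
        return 0;
    }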

actually, no you can't.  because addresses still have
to fit within the minimum size of a datagram on the network
and have room left over for payload.

Don't use fixed datagram sizes.  Or, if you must, implement within limits,
but leave indicators for exceptions to the limits, so that you don't have to
redo it all from scratch when you eventually exceed the limits.
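
A sketch of the kind of indicator I mean (the flag bit and the layout are
invented for illustration, not any real header):

    #include <stdio.h>

    /* Toy header rule: if the top bit of the first octet is set, an
       extension follows and the old limits don't apply.  An old
       parser can then at least detect a packet beyond its envelope
       instead of misreading it. */
    #define EXTENDED 0x80u

    static void parse(unsigned octet0)
    {
        if (octet0 & EXTENDED)
            printf("extension flagged: hand off, don't misparse\n");
        else
            printf("classic packet: old limits apply\n");
    }

    int main(void)
    {
        parse(0x45);            /* ordinary packet */
        parse(0x80 | 0x45);     /* packet beyond the old limits */
        return 0;
    }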

eventually I can buy.  but your predictions about the specific
timeframe have no substance.

Time will tell.

In 2030 (if I'm still doing this stuff) I will be much
more concerned about the overflow of UNIX time in 2038.

Seems that UNIX wasn't engineered with the necessary capacity, either.
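
The arithmetic, for the record: a signed 32-bit counter of seconds since
1970 runs out early in 2038.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Signed 32-bit time_t overflows at 2^31 - 1 seconds. */
        double years = INT32_MAX / (365.2425 * 86400.0);
        printf("overflow after %.2f years, i.e. in %d\n",
               years, 1970 + (int)years);   /* ~68.05 years -> 2038 */
        return 0;
    }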

variable length addresses with no constraints on
address growth have their own set of routing problems.

Such as?

but this same impossibility means that we do not
know whether we should put today's energy into making
variable length addresses work efficiently or into
something else.

We should put today's energy into designing systems that make no irrevocable
assumptions about tomorrow.

... at least today it seems much easier to design fast
routing hardware (or software) that looks at fixed offsets
within a packet, than to design hardware or software that
looks at variable offsets.

It's always easier.  However, you need to design something into the protocol
that tells the hardware when it is dealing with information outside the
envelope for which it was designed.  Then you can trap the exception and fix
things.  If you simply assume that there will never be any exceptions, you
get dramatic and dangerous failures when they finally come along.
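
What I'd want in the protocol is something like this (the version number
and offsets are invented, purely illustrative):

    #include <stdio.h>

    /* Fast path reads fixed offsets, but only after a version check;
       anything outside the designed envelope is trapped, not
       silently misread. */
    #define KNOWN_VERSION 4u

    static void route(const unsigned char *pkt)
    {
        unsigned version = pkt[0] >> 4;     /* envelope check */
        if (version != KNOWN_VERSION) {
            printf("unknown version %u: punt to slow path\n", version);
            return;
        }
        printf("fast path: destination octet is %u\n",
               (unsigned)pkt[19]);          /* fixed offset is safe */
    }

    int main(void)
    {
        unsigned char oldpkt[20] = { 0x45 };
        unsigned char newpkt[20] = { 0x95 };
        oldpkt[19] = 1;
        route(oldpkt);                      /* within the envelope  */
        route(newpkt);                      /* trapped, not misread */
        return 0;
    }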

the biggest problem with IPv6 is the difficulty in
getting it deployed.  and IPv6 has enough barriers to
deployment already without variable length addresses
making it harder to route than IPv4.

Maybe it is overambitious.

The easiest way to get it deployed, though, would be to have Microsoft ship
it as part of Windows.  In a year or two, everyone would be using it.  And
as long as this doesn't happen, it will never get deployed (at least not
anywhere near the desktop or home user).

it depends on how many places have that length
limitation wired-in.

You build them so that the wiring is easy to change and is not duplicated
any more times than absolutely necessary.
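
In software terms, that means a single point of definition (the constant
name is mine):

    #include <stdio.h>

    /* The address width is "wired in" exactly once; every buffer and
       loop refers to this name.  Changing the limit is then one edit,
       not an archaeology project through scattered literal 4s. */
    #define ADDR_OCTETS 4

    int main(void)
    {
        unsigned char addr[ADDR_OCTETS] = { 192, 0, 2, 1 };
        printf("address is %zu octets\n", sizeof addr);
        return 0;
    }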

no you can't, because you still need globally scoped
addresses, and you have the incremental upgrade problem.

That doesn't seem to have stopped the world's telephone networks from
connecting to each other, and they had exactly this problem.

if you change the root of the address space you have
to do it for everything at once.

Only if everything already assumes that the root it sees is the only root
that exists, which is not a good way to design things.  In a telephone
system, the "root" can be my central office, or my area code, or my country
code, depending on how far I'm calling.
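
A toy C sketch of that kind of scoped resolution, using the common 0/00
trunk and international dialing prefixes (the numbers are invented):

    #include <stdio.h>
    #include <string.h>

    /* Resolve a dialed string relative to the caller's scope, the
       way a switch does: local, national, and international numbers
       each imply a different "root". */
    static void resolve(const char *dialed)
    {
        if (strncmp(dialed, "00", 2) == 0)
            printf("%-14s -> international root\n", dialed);
        else if (dialed[0] == '0')
            printf("%-14s -> national root\n", dialed);
        else
            printf("%-14s -> local exchange root\n", dialed);
    }

    int main(void)
    {
        resolve("5551234");         /* local         */
        resolve("015551234");       /* national      */
        resolve("0033155512345");   /* international */
        return 0;
    }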

  -- Anthony