Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: StupidNAT tricks and how to stop them.)

2006-03-30 20:45:37
Thus spake "Anthony G. Atkielski" <anthony@atkielski.com>
Iljitsch van Beijnum writes:
So how big would you like addresses to be, then?

It's not how big they are, it's how they are allocated.  And they are
allocated very poorly, even recklessly, which is why they run out so
quickly.  It's true that engineers always underestimate required
capacity, but 128-bit addresses would be enough for anything ... IF
they were fully allocated.  But I know they won't be, and so the
address space will be exhausted soon enough.

I once read that engineers are generally incapable of designing anything that will last (without significant redesign) beyond their lifespan. Consider the original NANP and how it ran out of area codes and exchanges around 40 years after its design -- roughly the same timeframe as the expected death of its designers. Will IPv6 last even that long? IMHO we'll find a reason to replace it long before we run out of addresses, even at the current wasteful allocation rates.

We currently have 1/8th of the IPv6 address space set aside for
global unicast purposes ...

Do you know how many addresses that is? One eighth of 128 bits is a
125-bit address space, or

42,535,295,865,117,307,932,921,825,928,971,026,432

addresses. That's enough to assign 735 IP addresses to every cubic
centimetre in the currently observable universe (yes, I calculated
it). Am I the only person who sees the absurdity of wasting addresses
this way?

It doesn't matter how many bits you put in an address, if you assign
them this carelessly.

That's one way of looking at it. The other is that even with just the currently allocated space, we can have 35,184,372,088,832 sites, each with 65,536 subnets of 18,446,744,073,709,551,616 hosts. Is this wasteful? Sure. Is it even conceivable to someone alive today how we could possibly run out of addresses? No.

Will someone 25 years from now reach the same conclusion? Perhaps, perhaps not. That's why we're leaving the majority of the address space in reserve for them to use in light of future requirements.
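
For anyone who wants to check the arithmetic above, here's a quick Python sketch (nothing official, just the powers of two spelled out):

global_unicast = 2 ** 125          # one eighth of the 2^128 total
sites   = 2 ** 45                  # /48s available within that eighth
subnets = 2 ** 16                  # subnets per /48 (bits 48-63)
hosts   = 2 ** 64                  # interface addresses per subnet

assert sites == 35_184_372_088_832
assert hosts == 18_446_744_073_709_551_616
assert sites * subnets * hosts == global_unicast
print(f"{global_unicast:,}")       # 42,535,295,865,117,307,932,921,825,928,971,026,432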

... with the idea that ISPs give their customers /48 blocks.

Thank you for illustrating the classic engineer's mistake.  Stop
thinking in terms of _bits_, and think in terms of the _actual number
of addresses_ available.  Or better still, start thinking in terms of
the _number of addresses you throw away_ each time you set aside
entire bit spans in the address for any predetermined purpose.

Remember, trying to encode information in the address (which is what
you are doing when you reserve bit spans) results in exponential (read
incomprehensibly huge) reductions in the number of available
addresses.  It's trivially easy to exhaust the entire address space
this way.

If you want exponential capacity from an address space, you have to
assign the addresses consecutively and serially out of that address
space.  You cannot encode information in the address.  You cannot
divide the address in a linear way based on the bits it contains and
still claim to have the benefits of the exponential number of
addresses it supposedly provides.

Why is this so difficult for people to understand?

And sequential assignments become pointless even with 32-bit addresses because our routing infrastructure can't possibly handle the demands of such an allocation policy. The IETF has made the decision to leave the current routing infrastructure in place, and that necessitates a bitwise allocation model.

Railing against this decision is pointless unless you have a new routing paradigm ready to deploy that can handle the demands of a non-bitwise allocation model.
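
To make the routing point concrete, here's a rough Python sketch. The numbers (10,000 ISPs with 50,000 customer sites each) are made up for illustration, not measurements of the real routing table:

# With prefix-based (bitwise) delegation, an ISP's whole customer base
# collapses into one announced prefix.  With sequential, densely packed
# assignment, every site has to appear in the global table individually.

isps = 10_000                              # hypothetical number of ISPs/LIRs
sites_per_isp = 50_000                     # hypothetical customer sites per ISP

routes_aggregated = isps                   # one covering prefix per ISP
routes_sequential = isps * sites_per_isp   # one route per customer site

print(routes_aggregated)                   # 10000      -- manageable
print(routes_sequential)                   # 500000000  -- not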

Why is this so difficult for you to understand?

That gives us 45 bits worth of address space to use up.

You're doing it again.  It's not 45 bits; it's a factor of
35,184,372,088,832.

But rest assured, they'll be gone in the blink of an eye if the
address space continues to be mismanaged in this way.

I take it that by "the blink of an eye" you mean a span of decades? That is not the common understanding of the term, yet that's how long we've been using the current system, and it shows absolutely no signs of strain.

It's generally accepted that an HD ratio of 80% should be reachable
without trouble, which means we get to waste 20% of those bits in
aggregation hierarchies.

No. It's not 20% of the bits, it's 99.9756% of your address space that
you are wasting.

Do engineers really study math?

To achieve bitwise aggregation, you cannot get better than 50% utilization at each delegation boundary. There are currently three such boundaries (RIR, LIR, site), so better than 12.5% address usage is a lofty goal. Again, if you want something better than this, you need to come up with a better routing model than what we have today.

(And then throw in the /64 per subnet and you're effectively wasting 100% of the address space anyway, so none of this matters until that's gone.)
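
For the curious, here's the arithmetic behind both figures, using the HD-ratio model from RFC 3194 and the 45 bits of /48 space discussed above:

bits = 45                            # /48s available in the current eighth
hd_ratio = 0.80
usable_48s = 2 ** (hd_ratio * bits)  # HD ratio: log(allocated)/log(total) = 0.80
print(f"{usable_48s:.3g}")           # ~6.87e+10, i.e. the ~68 billion /48s quoted below

print(0.5 ** 3)                      # 0.125 -> at best 12.5% across three 50% boundaries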

This gives us 36 bits = 68 billion /48s. That's several per person
inhabiting the earth, and each of those /48s provides 65536 subnets
that have room to address every MAC address ever assigned without
breaking a sweat.

What happens when MAC addresses go away?  How are you providing for
the future when you allocate address space based on the past?  Why not
just leave the address space alone, and allocate only the minimum
slice required to handle current requirements?

If EUI-64 addresses somehow go away, and there's no sign they will, we already have another mechanism ready for assigning addresses within a subnet. In fact, it appears it's even the default in the next Windows.

If/when it ever matters how much address space we assign to subnets, we already know how to determine the necessary size and assign just that. Until then (if that day ever comes), we can use the /64 convention without concern. It's no big deal to change later if needed. In fact, we may end up not using that convention at all once IPv6 is actually rolled out to a significant part of the 'net.
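
For illustration, here's a Python sketch of both interface-ID mechanisms: the modified EUI-64 derivation used by stateless autoconfig (RFC 4291, Appendix A), and a randomly generated ID along the lines of the RFC 3041 privacy addresses, which is my reading of the "other mechanism" above. The MAC address is made up:

import secrets

def eui64_from_mac(mac: str) -> bytes:
    """Build a modified EUI-64 interface ID from a 48-bit MAC."""
    b = bytearray(bytes.fromhex(mac.replace(":", "")))
    b[0] ^= 0x02                       # flip the universal/local bit
    return bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])   # insert ff:fe in the middle

print(eui64_from_mac("00:0c:29:3e:1a:7b").hex(":"))    # 02:0c:29:ff:fe:3e:1a:7b

# The alternative: a 64-bit interface ID picked at random per interface.
print(secrets.token_bytes(8).hex(":"))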

That's another problem of engineers: they think they can predict the
future, and they are almost always wrong.

See above.

What was the problem again?

And that's the third problem.

Remember also: any encoding of information into the address field
(including anything that facilitates routing) exponentially reduces
the total number of available addresses.  So it might look like 2^128
addresses, but in reality it may be 2^40, or some other very small
number, depending on how much information you try to encode into the
address.

Again, the current identifier/locator conflation combined with the routing paradigm leaves us no choice but to encode information into the IP address.
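
As a concrete example of what gets encoded, here's how a global unicast address breaks down under the current model, using the 2001:db8::/32 documentation prefix and assuming a /32 LIR allocation (the common default):

import ipaddress

addr = int(ipaddress.IPv6Address("2001:db8:1234:ab::1"))

lir_prefix   = addr >> 96              # first 32 bits: RIR -> LIR allocation
site         = (addr >> 80) & 0xffff   # next 16 bits: LIR -> customer /48
subnet       = (addr >> 64) & 0xffff   # next 16 bits: subnet ID within the site
interface_id = addr & ((1 << 64) - 1)  # low 64 bits: interface ID

print(hex(lir_prefix), hex(site), hex(subnet), hex(interface_id))
# 0x20010db8 0x1234 0xab 0x1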

S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723           people.  Smart people surround themselves with
K5SSS                 smart people who disagree with them." --Aaron Sorkin

