
Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: Stupid NAT tricks and how to stop them.)

2006-03-29 21:29:13
Iljitsch van Beijnum writes:

> So how big would you like addresses to be, then?

It's not how big they are, it's how they are allocated.  And they are
allocated very poorly, even recklessly, which is why they run out so
quickly.  It's true that engineers always underestimate required
capacity, but 128-bit addresses would be enough for anything ... IF
they were fully allocated.  But I know they won't be, and so the
address space will be exhausted soon enough.

> We currently have 1/8th of the IPv6 address space set aside for
> global unicast purposes ...

Do you know how many addresses that is? One eighth of the 128-bit
space is a 125-bit address space, or

42,535,295,865,117,307,932,921,825,928,971,026,432

addresses. That's enough to assign more than eight billion billion
addresses to every square centimetre of the Earth's surface (yes, I
calculated it). Am I the only person who sees the absurdity of
wasting addresses this way?
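
A quick sanity check of that figure (a minimal sketch; the surface
area used is the standard ~5.1 x 10^14 m^2 estimate for the Earth):

    # One eighth of the 128-bit space: 2^125 addresses.
    pool = 2 ** 125
    print(pool)                       # 42535295865117307932921825928971026432

    # Earth's surface: ~5.1e14 m^2, i.e. ~5.1e18 cm^2.
    earth_surface_cm2 = 5.1e14 * 1e4
    print(pool / earth_surface_cm2)   # ~8.3e18 addresses per square centimetre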

It doesn't matter how many bits you put in an address, if you assign
them this carelessly.

> ... with the idea that ISPs give their customers /48 blocks.

Thank you for illustrating the classic engineer's mistake.  Stop
thinking in terms of _bits_, and think in terms of the _actual number
of addresses_ available.  Or better still, start thinking in terms of
the _number of addresses you throw away_ each time you set aside
entire bit spans in the address for any predetermined purpose.

Remember, trying to encode information in the address (which is what
you are doing when you reserve bit spans) results in exponential
(read: incomprehensibly huge) reductions in the number of available
addresses.  It's trivially easy to exhaust the entire address space
this way.
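
To make that concrete, here is a minimal sketch of the effect (the
88 reserved bits are an illustrative figure, not anyone's actual
allocation plan):

    # Each bit span reserved to encode information (registry, ISP,
    # site, subnet, ...) halves the freely assignable space per bit:
    # k reserved bits leave 2^(128-k) usable addresses, not 2^128.
    def usable_addresses(total_bits=128, reserved_bits=0):
        return 2 ** (total_bits - reserved_bits)

    print(usable_addresses())                  # 2^128, ~3.4e38
    print(usable_addresses(reserved_bits=88))  # 2^40, ~1.1e12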

If you want exponential capacity from an address space, you have to
assign the addresses consecutively and serially out of that address
space.  You cannot encode information in the address.  You cannot
divide the address in a linear way based on the bits it contains and
still claim the benefits of the exponential number of addresses it
supposedly provides.

Why is this so difficult for people to understand?
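
By way of contrast, a purely sequential allocator (a hypothetical
sketch, not a deployment proposal) really does reach the full 2^n
capacity before exhausting:

    import itertools

    # Hand out addresses serially, encoding nothing in their bits:
    # every one of the 2^n values gets used before the space runs out.
    def sequential_allocator(bits):
        for value in itertools.count():
            if value >= 2 ** bits:
                raise RuntimeError("address space exhausted")
            yield value

    alloc = sequential_allocator(bits=8)    # a tiny space, for illustration
    print([next(alloc) for _ in range(5)])  # [0, 1, 2, 3, 4]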

> That gives us 45 bits worth of address space to use up.

You're doing it again.  It's not 45 bits; it's a factor of
35,184,372,088,832.

But rest assured, they'll be gone in the blink of an eye if the
address space continues to be mismanaged in this way.

> It's generally accepted that an HD ratio of 80% should be reachable
> without trouble, which means we get to waste 20% of those bits in
> aggregation hierarchies.

No.  It's not 20% of the bits; 20% of 45 bits is 9 bits, and losing
9 bits means throwing away 511 of every 512 addresses: about 99.8%
of your address space.

Do engineers really study math?
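
For the record, the arithmetic, using the HD-ratio definition from
RFC 3194 (HD = log(allocated) / log(total)):

    total_bits = 45    # the pool of /48s carved from the global unicast /3
    hd_ratio = 0.80    # utilisation deemed reachable "without trouble"

    usable = 2 ** (hd_ratio * total_bits)   # 2^36, ~6.9e10 /48 blocks
    total = 2 ** total_bits                 # 2^45 = 35,184,372,088,832

    print(1 - usable / total)   # ~0.998: about 99.8% lost to hierarchy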

> This gives us 36 bits = 68 billion /48s. That's several per person
> inhabiting the earth, and each of those /48s provides 65536 subnets
> that have room to address every MAC address ever assigned without
> breaking a sweat.
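
(For context, the quoted arithmetic itself holds up: each /48 splits
into 2^16 /64 subnets, and each /64 holds 2^64 interface identifiers,
dwarfing the 2^48 possible MAC addresses.)

    slash48s = 2 ** 36               # 68,719,476,736 -- the "68 billion"
    subnets_per_48 = 2 ** (64 - 48)  # 65,536 /64 subnets per /48
    ids_per_subnet = 2 ** 64         # interface IDs per /64
    mac_addresses = 2 ** 48          # all possible 48-bit MAC addresses

    print(ids_per_subnet >= mac_addresses)   # True: one /64 covers every MAC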

What happens when MAC addresses go away?  How are you providing for
the future when you allocate address space based on the past?  Why not
just leave the address space alone, and allocate only the minimum
slice required to handle current requirements?

That's another problem with engineers: they think they can predict
the future, and they are almost always wrong.

> What was the problem again?

And that's the third problem.

Remember also: any encoding of information into the address field
(including anything that facilitates routing) exponentially reduces
the total number of available addresses.  So it might look like 2^128
addresses, but in reality it may be 2^40, or some other very small
number, depending on how much information you try to encode into the
address.



