
RE: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: Stupid NAT tricks and how to stop them.)

2006-03-30 06:23:22
Anthony,

The problem with allocating numbers sequentially is the impact on
routers and routing protocols.  I have heard that the Japanese issue
house numbers chronologically: when you find the right block, you
still have to hunt for the right number.  What you are suggesting is
similar.  You would have as many routing table entries as hosts in
the world.  The routers would not be affordable, the traffic for
routing entries would swamp the net, and the processing of those
routing advertisements would be impossible.  It doesn't scale!  The
function of an address is to enable a router to find the host it
names.  That is why we try to use hierarchical addressing, even at
the cost of numbering space.
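
A toy sketch in Python may make the scaling point concrete (the host
count and per-site prefix size here are assumptions, purely for
illustration):

  # Flat, sequential allocation: every host needs its own route.
  hosts = 4_000_000_000                 # assumed global host count
  flat_routes = hosts                   # one routing entry per host

  # Hierarchical allocation: routers outside a site carry only the
  # site prefix; per-host detail stays inside the site.
  hosts_per_site = 65_536               # assumed: one /48-sized site
  site_routes = hosts // hosts_per_site

  print(flat_routes)                    # 4000000000 entries: unaffordable
  print(site_routes)                    # 61035 entries: tractable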

IMO one problem of the Internet is that it isn't hierarchical
enough.  Consider the phone system: country codes, area codes, and so
on.  This makes the job of building a switch much easier.  I think we
should have divided the world into 250 countries, and each country
into 250 "provinces".  Yes, it would waste address space, but it
would make routing much easier and more deterministic, and it would
simplify BGP drastically.  Current routing algorithms aren't that
effective: paths in the net are fairly inefficient, which increases
latency and apparent traffic on the net, and is a real waste of
money.
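
To put rough numbers on that (a back-of-the-envelope Python sketch;
the 250/250 split is the hypothetical above):

  import math

  countries = provinces = 250
  bits_per_level = math.ceil(math.log2(countries))
  print(bits_per_level)                 # 8 bits per level, 16 in total

  # A backbone router needs at most one route per country, and a
  # country-level router one route per province:
  print(countries)                      # 250 backbone entries
  print(countries * provinces)          # 62500 province prefixes worldwide
  # The "waste": only 250 of each 256 possible codes are used.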

Yes, this would mean a mobile node needs to get new addresses as it
moves.  So what?  We already have DHCP, and cell phones do handoffs
already.

Steve Silverman

-----Original Message-----
From: Anthony G. Atkielski [mailto:anthony@atkielski.com]
Sent: Wednesday, March 29, 2006 11:27 PM
To: ietf@ietf.org
Subject: Re: 128 bits should be enough for everyone, was: IPv6 vs.
Stupid NAT tricks: false dichotomy? (Was: Re: Stupid NAT tricks and
how to stop them.)


Iljitsch van Beijnum writes:

> So how big would you like addresses to be, then?

It's not how big they are, it's how they are allocated.  And they
are allocated very poorly, even recklessly, which is why they run out
so quickly.  It's true that engineers always underestimate required
capacity, but 128-bit addresses would be enough for anything ... IF
they were fully allocated.  But I know they won't be, and so the
address space will be exhausted soon enough.

> We currently have 1/8th of the IPv6 address space set aside for
> global unicast purposes ...

Do you know how many addresses that is?  One eighth of the 128-bit
space is a 125-bit address space, or

42,535,295,865,117,307,932,921,825,928,971,026,432

addresses.  That's enough to assign 735 IP addresses to every cubic
centimetre in the currently observable universe (yes, I calculated
it).  Am I the only person who sees the absurdity of wasting
addresses this way?
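
(The 125-bit figure is easy to check in Python; the per-person line
assumes a 2006-era population of about 6.5 billion:)

  n = 2 ** 125                          # one eighth of the 2^128 space
  print(n)                              # 42535295865117307932921825928971026432
  print(len(str(n)))                    # 38 digits
  print(n // 6_500_000_000)             # ~6.5e27 addresses per person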

It doesn't matter how many bits you put in an address, if you assign
them this carelessly.

> ... with the idea that ISPs give their customers /48 blocks.

Thank you for illustrating the classic engineer's mistake.  Stop
thinking in terms of _bits_, and think in terms of the _actual number
of addresses_ available.  Or better still, start thinking in terms of
the _number of addresses you throw away_ each time you set aside
entire bit spans in the address for any predetermined purpose.

Remember, trying to encode information in the address (which is what
you are doing when you reserve bit spans) results in exponential
(read: incomprehensibly huge) reductions in the number of available
addresses.  It's trivially easy to exhaust the entire address space
this way.
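
A short Python sketch of that effect (the reserved span widths are
arbitrary examples):

  # Reserving a bit span doesn't cost addresses linearly; it divides
  # the usable space by 2^(reserved bits).
  total_bits = 128
  for reserved in (3, 16, 48):
      usable_fraction = 2 ** -reserved
      print(reserved, usable_fraction)  # 3  -> 0.125
                                        # 16 -> ~1.5e-05
                                        # 48 -> ~3.6e-15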

If you want exponential capacity from an address space, you have to
assign the addresses consecutively and serially out of that address
space.  You cannot encode information in the address.  You cannot
divide the address in a linear way based on the bits it contains and
still claim the benefits of the exponential number of addresses it
supposedly provides.

Why is this so difficult for people to understand?

> That gives us 45 bits worth of address space to use up.

You're doing it again.  It's not 45 bits; it's a factor of
35,184,372,088,832.

But rest assured, they'll be gone in the blink of an eye if the
address space continues to be mismanaged in this way.

> It's generally accepted that an HD ratio of 80% should be reachable
> without trouble, which means we get to waste 20% of those bits in
> aggregation hierarchies.

No.  It's not 20% of the bits; it's 99.8% of your address space that
you are wasting.

Do engineers really study math?
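
A quick check of the arithmetic in Python (HD ratio as defined in
RFC 3194: log of objects allocated over log of objects available):

  bits = 45
  used_bits = int(0.80 * bits)          # HD ratio 80% -> 36 effective bits
  waste = 1 - 2 ** used_bits / 2 ** bits
  print(f"{waste:.4%}")                 # 99.8047% of the space unused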

> This gives us 36 bits = 68 billion /48s.  That's several per person
> inhabiting the earth, and each of those /48s provides 65536 subnets
> that have room to address every MAC address ever assigned without
> breaking a sweat.
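
For what it's worth, the quoted arithmetic checks out (assuming a
2006-era population of ~6.5 billion):

  slash48s = 2 ** 36
  print(slash48s)                       # 68719476736: "68 billion" /48s
  print(slash48s / 6_500_000_000)       # ~10.6 per person
  print(2 ** (64 - 48))                 # 65536 /64 subnets per /48
  print(2 ** 64)                        # interface IDs per subnet, far more
                                        # than all EUI-48 MACs ever assigned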

What happens when MAC addresses go away?  How are you providing for
the future when you allocate address space based on the past?  Why
not just leave the address space alone, and allocate only the minimum
slice required to handle current requirements?

That's another problem with engineers: they think they can predict
the future, and they are almost always wrong.

> What was the problem again?

And that's the third problem.

Remember also: any encoding of information into the address field
(including anything that facilitates routing) exponentially reduces
the total number of available addresses.  So it might look like 2^128
addresses, but in reality it may be 2^40, or some other very small
number, depending on how much information you try to encode into the
address.


