
Re: How many IP addresses?

2000-04-25 06:50:04
In message <4.2.2.20000425090404.00a3c3f0@pop.dial.pipex.com>, Graham Klyne
writes:
> At 11:06 PM 4/23/00 -0500, Richard Shockey wrote:
>> With "always on" IP and IP on anything this is closer to reality than we
>> might think. In order to permit a reasonable allocation of addresses with
>> room for growth, the idea of 25 IP addresses per household and 10 per
>> person actually seems conservative.
>
> Following this line of thought, I'd suggest taking the number of electrical
> outlets and multiplying by some suitable constant (say, 10, or 1000).
>
> And what about all those little wireless-connected gadgets; an IP address
> for each TV remote control (where everyone has their own, of course, for
> personalized access and prioritizing control conflicts...)

I've been in rooms where people have walked through exactly this calculation.
Let's throw a few numbers around.

Assume that the average person in the world has 1000 outlets.  That's 
preposterously large for even Bill Gates' house, I suspect, and it doesn't 
even account for dividing by the number of people per house.  But let's stick 
with 1000.  Assume that there are 25*10^9 people in the world -- 4x the 
current population.  And allocate 10 IP addresses for each of those outlets.
That means that we need a minimum of 25*10^13 IP addresses, plus allowances for 
delegation on TLA boundaries, smaller provider chunks, homes, etc.
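
A quick sanity check of that multiplication, in a few lines of Python (a
sketch only; the figures are nothing more than the assumptions stated above):

    # Back-of-envelope demand estimate, using the assumptions above.
    people = 25 * 10**9        # ~4x the world population circa 2000
    outlets_per_person = 1000  # deliberately preposterous upper bound
    addrs_per_outlet = 10      # addresses allocated to each outlet

    total = people * outlets_per_person * addrs_per_outlet
    print(f"{total:.2e}")      # 2.50e+14, i.e. 25*10^13 addresses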

So -- when I divide 2^128 by 25*10^13, I get ~2^80.  That's right -- 80 bits 
worth of address space for allocation inefficiencies.  If, at each of three
levels, we really use just one address out of every 2^16, we *still* have
32 bits left over.
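
The headroom arithmetic itself, in the same vein (again, just a sketch of the
figures already quoted):

    import math

    demand = 25 * 10**13                  # from the estimate above
    headroom_bits = math.log2(2**128 / demand)
    print(round(headroom_bits))           # ~80 bits of slack

    # Spend 16 bits on allocation inefficiency at each of three
    # hierarchy levels -- one used address out of every 2^16:
    print(round(headroom_bits) - 3 * 16)  # 32 bits still left over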

Conclusion:  if we don't do things differently, we have more than enough 
addresses for any conceivable size of the Internet.  (In fact, if we assume 
that there are 300*10^9 stars in the Milky Way, we're not very far from being 
able to handle interstellar networking, at least in terms of the number of 
hosts.  But we may have to redefine the TTL field to let packets last for 50K 
years or so...)
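
For the curious, the interstellar aside works out roughly as follows (a
sketch; the ~50K-year flight time assumes the standard ~100,000-light-year
figure for the galaxy's diameter, which isn't in the text above):

    import math

    stars = 300 * 10**9               # assumed Milky Way star count
    print(f"{2**128 // stars:.1e}")   # ~1.1e27 addresses per star -- plenty

    # A packet crossing ~50,000 light-years is in flight for ~50K years.
    seconds_in_flight = 50_000 * 365.25 * 24 * 3600
    print(math.ceil(math.log2(seconds_in_flight)))  # ~41 bits, if TTL counted seconds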

OK -- if 2^128 is that large, why did I (among others) conclude that we needed 
some headroom?  There's a lot of history showing that address space *always* 
runs out, and that the consumption often comes from changing your paradigm.  A 
routing/addressing scheme like 8+8 is an example of what I mean.  In another 
vein, I have a project going that requires 2^16 addresses per host, and maybe 
2^32.  There are undoubtedly other new ideas we can play with *if* we have 
enough address space.  But we have to engineer this in some fashion that 
permits efficient use of these addresses by hosts and (especially) routers.  
(An earlier poster wrote that you "just have to write the code".  That's the 
wrong idea -- big iron routers don't use "code" to do forwarding, they use 
silicon, and very fast silicon at that.  There are routers in production use 
today that are handling OC-192 -- 10 Gbps -- links.  You can't do that in 
software.)
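
In prefix terms, 2^16 addresses per host amounts to delegating a /112 to each
host, and 2^32 a /96 (an illustrative translation, not a detail of the project
mentioned above); either way, the remaining bits still number an enormous
population of hosts:

    # Illustrative only: per-host block size vs. remaining host-numbering bits.
    for addrs_per_host in (2**16, 2**32):
        block_bits = addrs_per_host.bit_length() - 1   # 16 or 32
        hosts = 2**(128 - block_bits)
        print(f"/{128 - block_bits} per host -> {hosts:.1e} possible hosts")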

                --Steve Bellovin



