
RE: The internet architecture

2008-12-28 14:42:51
It depends on what level you are looking at the problem from.

In my opinion, application layer systems should not make assumptions about the inner semantics of 
IP addresses that have functional (as opposed to performance) implications. From the functionality 
point of view, an application should treat an IP address as no more than an opaque identifier.

The reason for that is precisely to allow the routing layer to make architectural decisions that 
do apply semantics to the address, and to change them over periods of time that are relevant to 
routing layer deployment cycles (there are plenty of pre-1995 Internet hosts still in service; I 
will wager a rather smaller percentage of backbone routers from 1995 are still in service :-).

That is why I want to see the ad-hoc semantics that applications attempt to apply to IP addresses 
replaced by DNS-level (i.e. reverse DNS) facilities that achieve the same effect, so that 
applications do not break when those assumptions no longer hold.
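
To make that concrete, here is a minimal sketch (Python, with a made-up "is this peer part of 
example.net?" policy): the application keys off the reverse DNS answer rather than the bits of 
the address, so the routing layer stays free to change what those bits mean.

    import socket

    def peer_domain(ip):
        """Treat the address as an opaque key into the reverse DNS."""
        try:
            hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
            return hostname
        except (socket.herror, socket.gaierror):
            return None  # no PTR record; degrade gracefully

    def is_in_domain(ip, suffix=".example.net"):
        # The decision is made on the DNS answer, not on the address bits,
        # so renumbering does not break the application.
        name = peer_domain(ip)
        return name is not None and name.endswith(suffix)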


On the geographic nature of IP addresses, clearly some level of aggregation is essential, but it 
is equally clear that 100% clean aggregation is never going to be achievable either. The longer a 
block is in service, the more it gets 'bashed about'. Entropy increases.

At the moment the Internet architecture has a built-in assumption that the system is going to 
grow, and that keeps the chaos factor in check because the newly issued blocks are a significant 
proportion of the whole and have nice, regular assignments. But what happens when the system 
stops growing? How do we keep the chaos to an acceptable fraction?

This leads me to consider an IP address block assignment as an inherently term-limited affair, 
the sole exception being the DNS root, where perpetual assignments are going to be necessary. The 
terms need to be long: years, probably decades at minimum. But there needs to be a built-in 
assumption that, over time, broken-down, atomized address blocks will be 'recycled' into larger 
clumps that aggregate nicely. That in turn is only possible if nobody (apart from the core DNS) 
cares about their IP address having a specific value.
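
As a toy illustration of what recycling buys (a sketch using Python's ipaddress module and 
documentation prefixes): scattered fragments only become a single aggregate again once the whole 
block can be reclaimed and reassigned as a unit.

    import ipaddress

    # Four /26 fragments of a documentation /24, handed out piecemeal over time.
    fragments = [
        ipaddress.ip_network("192.0.2.0/26"),
        ipaddress.ip_network("192.0.2.64/26"),
        ipaddress.ip_network("192.0.2.128/26"),
        ipaddress.ip_network("192.0.2.192/26"),
    ]

    # collapse_addresses() merges adjacent blocks into the largest clean aggregates.
    print(list(ipaddress.collapse_addresses(fragments)))
    # -> [IPv4Network('192.0.2.0/24')]  (one routing table entry instead of four)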



-----Original Message-----
From: ietf-bounces(_at_)ietf(_dot_)org on behalf of Bryan Ford
Sent: Wed 12/24/2008 1:50 PM
To: macbroadcast
Cc: ietf(_at_)ietf(_dot_)org
Subject: Re: The internet architecture
 
On Dec 22, 2008, at 10:51 PM, macbroadcast wrote:
IP does not presume hierarchical addresses and worked quite well  
without it for nearly 20 years.
IP addresses are topologically independent. Although since CIDR,  
there has been an attempt to make them align with aspects of the  
graph.
Ford's paper does not really get to the fundamentals of the problem.

I would suggest that deeper thought is required.

I would like to know Bryan's opinion.

I think I missed some intermediate messages in this discussion thread,  
but I'll try. :)

IP addresses are just an address format (two, actually, one for IPv4  
and another for IPv6); their usefulness and effectiveness depends on  
exactly how they are assigned and used.  CIDR prescribes a way to  
assign and use IP addresses that in theory facilitates aggregation of  
route table entries to make the network scalable, _IF_ those addresses  
are assigned in a hierarchical fashion that directly corresponds to  
the network's topology, which must also be strictly hierarchical in  
order for that aggregation to be possible.  That is, if an edge  
network has only one upstream provider and uses in its network only IP  
addresses handed out from that provider's block, then nobody else in  
the Internet needs to have a routing table entry for that particular  
edge network; only for the provider.  But that whole model breaks down  
as soon as that edge network wants (god forbid!) a bit of reliability  
by having two redundant links to two different upstream providers -  
i.e., "the multihoming problem", and hence all the concern over the  
fact that BGP routing tables are ballooning out of control because  
_everybody_ wants to be multihomed and thus wants their own public,  
non-aggregable IP address range, thus completely defeating the  
scalability goals of CIDR.
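
To make that concrete, here is a toy longest-prefix-match table (a Python sketch with made-up 
prefixes): the single-homed customers of each provider disappear behind the provider's aggregate, 
but the multihomed edge network's provider-independent /24 needs its own entry in every 
default-free table.

    import ipaddress

    # A default-free-zone table only needs the providers' aggregates...
    table = {
        ipaddress.ip_network("198.51.0.0/16"): "provider A",
        ipaddress.ip_network("203.0.0.0/16"): "provider B",
        # ...until a multihomed edge network shows up with its own PI block:
        ipaddress.ip_network("192.0.2.0/24"): "multihomed edge network (PI)",
    }

    def lookup(address):
        """Longest-prefix match over the table."""
        addr = ipaddress.ip_address(address)
        matches = [net for net in table if addr in net]
        return table[max(matches, key=lambda n: n.prefixlen)] if matches else None

    print(lookup("198.51.100.7"))  # hidden behind provider A's aggregate
    print(lookup("192.0.2.9"))     # requires the extra, non-aggregable entry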

For some nice theoretical and practical analysis indicating that any  
hierarchical CIDR-like addressing scheme is fundamentally a poor match  
to a well-connected network topology like that of the Internet, see  
Krioukov et al., "On Compact Routing for the Internet", CCR '07.  They  
also cast some pretty dark clouds over some alternative schemes, but  
that's another story. :)

But to get back to the original issue, CIDR-based IP addressing isn't  
scalable unless the network topology is hierarchical and address  
assignment is done according to network topology: i.e., IP addresses  
MUST be dependent on topology in order for CIDR-based addressing to  
scale.  But in practice, at least up to this point, other concerns  
have substantially trumped this scalability concern: edge networks  
want fault tolerance via multihoming and administrative independence  
from their upstream ISPs, so they get their own provider-independent  
IP address blocks for their edge networks, which are indeed topology- 
independent (at least in terms of the assignment of the whole block),  
meaning practically every core router in the world will subsequently  
have to have a separate routing table entry for that edge network.   
But this only works for edge networks whose owners have sufficient  
size and clout and financial resources; we're long past the time when  
an individual could easily get his own private Class C address block  
for his own home network, like I remember doing a long time ago. :)   
So small edge networks and individual devices still have to use IP  
addresses assigned to them topologically out of some upstream  
provider's block, which means they have to change whenever the device  
moves to a different attachment point.

So in effect we've gotten ourselves in a situation where IP addresses  
are too topology-independent to provide good scalability, but too  
topology-dependent to provide real location-independence at least for  
individual devices, because of equally strong forces pulling the IP  
assignment process in both directions at once.  Hence the reason we  
desperately need locator/identity separation: so that "locators" can  
be assigned topologically so as to make routing scalable without  
having to cater to conflicting concerns about stability or location- 
independence, and so that "identifiers" can be stable and location- 
independent without having to cater to conflicting concerns about  
routing efficiency.
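
In code terms the split is just a level of indirection (a Python sketch with hypothetical names): 
applications hold stable identifiers, and a mapping layer resolves them to whatever locator the 
routing system currently prefers, so only the mapping changes when a host moves or renumbers.

    # Hypothetical identifier-to-locator mapping; in a real system this would be
    # a distributed lookup service, not an in-memory dict.
    locator_of = {
        "host-id-7f3a": "198.51.100.7",  # locator assigned out of provider A's block
    }

    def connect(identifier):
        # The application never stores the locator; it re-resolves on demand,
        # so mobility and renumbering are invisible above this line.
        return locator_of[identifier]

    # The host moves to provider B: only the mapping entry is updated.
    locator_of["host-id-7f3a"] = "203.0.113.9"
    print(connect("host-id-7f3a"))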

As far as specific forms these "locators" or "identifiers" should  
take, or specific routing protocols for the "locator" layer, or  
specific resolution or overlay routing protocols for the "identity"  
layer, I think there are a lot of pretty reasonable options; my paper  
suggested one, but there are others.

Cheers,
Bryan

merry christmas

Marc


I believe that Kademlia [1], for example, and the technologies
mentioned in the linked paper [2]
would fit the needs and requirements of a future-proof internet.


[1] http://en.wikipedia.org/wiki/Kademlia
[2] http://pdos.csail.mit.edu/papers/uip:hotnets03.pdf
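
(For reference, the heart of Kademlia [1] is its XOR distance metric over node IDs; a small 
sketch with toy IDs follows. The node identifiers carry no topological meaning at all, which is 
the point.)

    import hashlib

    def node_id(name):
        # Toy 160-bit IDs derived from a name; real nodes pick random IDs.
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

    def xor_distance(a, b):
        # Kademlia's notion of "closeness" between identifiers.
        return a ^ b

    target = node_id("some-key")
    nodes = [node_id("alpha"), node_id("beta"), node_id("gamma")]
    # The next hop is simply the known node closest to the target in XOR space.
    print(min(nodes, key=lambda n: xor_distance(n, target)))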
--

_______________________________________________
Ietf mailing list
Ietf(_at_)ietf(_dot_)org
https://www.ietf.org/mailman/listinfo/ietf
