
RE: e2e

2007-08-20 15:17:37
My apologies for not replying earlier.

Yes, Fred, you are entirely correct here, and this was the original point that I 
raised in the plenary. David Clark is not the sort of person to propose a 
dogma. As you point out, the original paper is couched in terms of 'here are 
some issues you might want to consider'.


The post you were responding to was on a secondary point, where people were 
complaining that I mischaracterized 'functionality' as 'complexity'. To be 
honest, it strikes me as a footling distinction without a difference: the type 
of thing that is raised when people have mistaken an engineering principle for 
an article of faith.

A more accurate precis would of course be something along the lines of 'think 
hard about where you put complexity; it has often proved advantageous to keep 
application-specific complexity out of the core'. We can argue over the 
wording, but there is no real substitute for reading the paper.

Which is what prompted the original point I made in the plenary: when someone 
is using the end-to-end principle to slap down some engineering proposal they 
don't like, I would at least like them to appear to have read the paper they 
are quoting as holy scripture.


There are relatively few engineering architectures that are intrinsically good 
or bad. The telephone network managed very well for almost a century with its 
architecture. The Internet has done well for a quarter century with almost the 
opposite architecture. The key is appropriateness. The telephone architecture 
was appropriate in the circumstances for which it was designed, and the same is 
true of the Internet.

The Internet has changed, and we need to recognize that different circumstances 
may make a different approach desirable. My belief is that the most appropriate 
architecture for today's needs is a synthesis which recognizes both the need 
for the network core to be application-neutral and the need for individual 
member networks of the Internet to exercise control over their own networks, 
both to protect the assets they place on their network and to protect the 
Internet from abuse originating from or relayed through their network.


-----Original Message-----
From: Fred Baker [mailto:fred@cisco.com] 
Sent: Tuesday, August 14, 2007 5:22 PM
To: Hallam-Baker, Phillip; John Kristoff
Cc: ietf@ietf.org
Subject: Re: e2e

On Jul 26, 2007, at 8:47 PM, Hallam-Baker, Phillip wrote:
I don't think that I am misrepresenting the paper when I summarize 
it as saying 'keep the complexity out of the network core'

I'm slogging through some old email, and choose to pick up on this.

Following Noel's rant (which is well written and highly 
correct), it is not well summarized that way. For example, 
quoting from the paper, "Performing a function at a low level 
may be more efficient, if the function can be performed with 
a minimum perturbation of the machinery already included in 
the low-level subsystem". So, for example, while we generally 
want retransmissions to run end to end, in an 802.11 network 
there is a clear benefit that can be gained at low cost in 
the RTS/CTS and retransmission behaviors of that system.
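
To make that trade-off concrete, a purely illustrative sketch (not 
from the paper or this mail): the link layer retries a lost frame a 
small, bounded number of times, while end-to-end reliability stays 
with the transport above it. The function names and retry budget 
here are assumptions.

    import random
    import time

    def link_layer_send(frame, max_retries=4):
        # Bounded, local retransmission: cheap recovery for losses on
        # one hop, without taking over the transport's end-to-end job.
        for _ in range(max_retries + 1):
            if transmit_and_wait_for_ack(frame):
                return True
            time.sleep(random.uniform(0.0, 0.01))  # brief random backoff
        return False  # give up locally; the end-to-end transport recovers

    def transmit_and_wait_for_ack(frame):
        # Stand-in for the real RTS/CTS + ACK exchange; succeeds 80% of
        # the time in this toy model.
        return random.random() < 0.8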

My precis would be: "in deciding where functionality should 
be placed, do so in the simplest, cheapest, and most reliable 
manner when considered in the context of the entire network. 
That is usually close to the edge."

Let's take a very specific algorithm. In the IP Internet, we 
do routing - BGP, OSPF, etc., ad nauseam. Routing, as anyone 
who has spent much time with it will confirm, can be complex 
and results in large amounts of state maintained in the core. 
There are alternatives to doing one's routing in the core; 
consider IEEE 802.5 Source Routing for an example that 
occurred (occurs?) in thankfully-limited scopes.  
We could broadcast DNS requests throughout the Internet with 
trace-route-and-record options and have the target system 
reply using the generated source route. Or not... Sometimes, 
there is a clear case for complexity in the network, and state.
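
As a purely illustrative aside (not part of the original mail), the 
question of where the state lives can be sketched in a few lines: 
hop-by-hop forwarding keeps a table in every core node, while source 
routing moves that state into the packet itself. All names below are 
made up.

    # Hop-by-hop forwarding: each core node holds state (a forwarding table).
    forwarding_table = {"10.0.0.0/8": "if0", "192.0.2.0/24": "if1"}

    def forward(dest_prefix):
        return forwarding_table[dest_prefix]  # the lookup happens in the core

    # Source routing: the sender computes the path; the core merely follows it.
    packet = {"payload": b"hello", "route": ["r1", "r7", "r3", "host-b"]}

    def next_hop(pkt):
        return pkt["route"].pop(0)  # the state travels with the packet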

Let me mention also a different consideration, related to 
business and operational impact. Various kinds of malware 
wander around the network. One can often identify them by the 
way that they find new targets to attack - they probe for 
them using ARP scans, address scans, and port scans. We have 
some fairly simple approaches to using this against them, 
such as configuring a tunnel to a honeypot on some subset of 
the addresses on each LAN in our network (a so-called "grey 
net"), or announcing the address or domain name of our 
honeypot in a web page that we expect to be harvested. 
Honeypots, null routes announced in BGP, remediation 
networks, and grey networks are all examples of intelligence 
in the network that is *not* in the laptop it is protecting.
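
A minimal sketch of that kind of in-network intelligence, assuming a 
hypothetical feed of (source, destination) observations from a grey 
net or flow monitor: a source that probes many distinct addresses 
gets handed off for quarantine. The threshold and function names are 
illustrative assumptions.

    from collections import defaultdict

    SCAN_THRESHOLD = 50            # distinct targets before we call it a scan
    targets_seen = defaultdict(set)

    def observe(src, dst):
        # Feed one observed (source, destination) pair from the grey net.
        targets_seen[src].add(dst)
        if len(targets_seen[src]) > SCAN_THRESHOLD:
            quarantine(src)

    def quarantine(src):
        # Placeholder: in practice this might be a null route, an ACL
        # entry, or a redirect into a remediation network.
        print("suspected scanner:", src)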

The end-to-end arguments that I am familiar with argue not 
for knee-jerk design-by-rote, but for the use of the mind. 
One wants to design systems that are relatively simple to 
understand, to maintain, and to isolate for diagnosis. 
The arguments do not say that leaving all intelligence and 
functionality in the end system is the one true religion; 
they observe, however, that the trade-offs in the general 
case do lead one in that direction as a first intuition.


_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf
