
Re: DARPA get's it right this time, takes aim at IT sacred cows

2004-03-22 14:26:51
On Thu, Mar 18, 2004 at 12:40:31PM +0100, jfcm wrote:
I am afraid you are confusing layers. You can understand "firewall" as a
traffic filter (which is obviously what you mean here): that would be
obviously absurd for what I am addressing. You can also consider it as the
appropriate protection for the layers in question, which is what we mean.
If you want an example, look at Intelliwall (http://www.bee-ware.net).
They are addressing the firewalling of applications. Traffic filtering is
really only the third-lowest level (after electrical and frame protection).

I'm not confusing layers at all. I looked at the system view as you
proposed and came to my particular conclusion. Even those national
entities and the control they attempt to assert still have to deal with
some of the fundamental "international" problems like address allocation,
TLD server location and replication, etc. That's why I characterized the
idea as the Internet version of the Maginot Line: it's intimidating and
looks good, but eventually gets overrun by progress and reality.

I would agree with you that cyberspace, for lack of any better term, is
an integral part of national infrastructure and needs protecting. I'm
less inclined to agree with anyone that wars take place in cyberspace,
since the concept of war centers on the taking of territory and national
assets (I'd really like to see a national entity "conquer" Google or
Amazon.com, for example.) Disruption is the most important element of
protecting the cyberspace national asset, and preventing disruption
is indeed a problem seeking a solution. But it seems as if the problem
statement can be taken so far that it divorces itself from reality -- there
are a lot of physical systems that we tend to fall back on when cyberspace
fails. In other words, even if Amazon.com were disrupted and a portion of
US GDP went out of commission, and even granting the network effects of
disrupting Amazon.com and their amplification by the speed at which
information propagates ("good news travels fast, bad news travels FTL"),
it wouldn't crater the US or global economy. It would make the punctuations
in the equilibrium sharper (a rift versus a crack) and lengthen the return
to a new equilibrium state as the system bridges or repairs the rift. The
difference today is the timeline: it's no longer millions of years.

Your quote of Occam's Razor is great. The ENS model is a fully integrated
cybernetics model. Cybernetics actually has two successive, slightly
different understandings (I would say before and after the e-networks).
The first is the way Wiener, Ampère or Watts first thought of it. Today
you would call it organizational "governance" (from Plato's "kubernetes",
the way/art of steering, of governing - as in the Oxford/Cambridge race -
steer and row - the proposition of McLean to the ITU meeting on
governance). A top-down approach where the brain or a team (agora) is the
leader (monarchy/Athenian democracy). Centralized or meshed networks
(ICANN, ISPs, gateway protocols).

I'm not sure I grok this paragraph, although some references would be
useful.

The second understanding, which we could call generalized cybernetics, is
the art of efficiency in using models discovered from feedback
(Couffignal). This understanding is necessary in distributed systems like
the internet demanded by users, where authority is no longer delegated
(monarchy) or shared (democracy) but retained by each participant. Then
you consider granularity, not hierarchy (hierarchy is just the simplest
ordered occurrence of a granularity whose importance decreases with
distance from the source of authority). And then you apply the principle
of subsidiarity (respect the functionalities of the granular organization
- the responsibility of its own governance). This way you can keep
understanding complexity without being embarrassed by it. Life is not
democratic but is often coalescent - so is the human connection,
communication, and relation system. You do not ask your telephone to be
democratic, but to work.

I understand the general idea here, but a few extra references would be
helpful to grok it (e.g., generalized cybernetics theory papers.)

I suppose that FLAPPS is a way to address that kind of need, from what I 
gather?

Actually, I never had such grand visions for FLAPPS. It's just a way of
looking at P2P infrastructures and trying to reduce the amount of
redundant effort that goes into building them. I'm not sure that a
consequence of my work is a larger contribution to addressing
philosophical or epistemological questions, but I'd be rather interested in
understanding the larger issues FLAPPS might address in that context.

So you confirm "something that resembles" (we agree). Please, let us not
confuse IP and packet switching.

I was only considering IP-the-packet-protocol -- the rest of the
program-speak is dressing to market the idea. No reason to throw out
IP-the-packet-protocol when it just works. I do agree that there is a
larger problem of additional layers: which additional layers and models
actually work -- based on research and overall consensus, given the
tractability of the models and the interactions between protocols at the
various layers -- is the question DARPA is asking. "Replacing
IP-the-interacting-protocols (with what?)" is a non-question because it
implies that IP-the-interacting-protocols is inherently defective -- with
which I don't agree at all.

FLAPPS stemmed from the URL-based routing and forwarding work in web 
caching I did a number of years ago with another advisor. It's 70,000 
lines of code and will hopefully be more widely available after I 
graduate. Yes, the code has been demonstrated to do what it claims, but 
the glaring missing piece is DHT emulation -- which I've tried to avoid,
but will have to do in order to satisfy recurring reviewer comments.

- DHT?

Distributed Hash Tables, of which there are many (Chord, CAN, Pastry, 
Plaxton-based systems, etc).
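
To give a concrete feel for what these systems do, here's a minimal,
purely illustrative sketch of the consistent-hashing idea they share (the
node names, ring size, and class below are my own, not Chord's or Pastry's
actual interfaces): nodes and keys are hashed onto the same identifier
ring, and a key is owned by its successor node.

    # Toy consistent-hashing lookup in the spirit of a DHT.
    import bisect
    import hashlib

    RING_BITS = 32

    def ring_hash(value):
        """Map an arbitrary string onto the identifier ring."""
        digest = hashlib.sha1(value.encode()).digest()
        return int.from_bytes(digest, "big") % (2 ** RING_BITS)

    class ToyDHT:
        def __init__(self, node_names):
            # Sorted (node_id, node_name) positions on the ring.
            self.ring = sorted((ring_hash(n), n) for n in node_names)

        def lookup(self, key):
            """Return the node responsible for `key` (its successor)."""
            ids = [node_id for node_id, _ in self.ring]
            index = bisect.bisect_left(ids, ring_hash(key)) % len(self.ring)
            return self.ring[index][1]

    dht = ToyDHT(["node-a", "node-b", "node-c", "node-d"])
    print(dht.lookup("http://example.com/some/object"))

A real DHT distributes the ring state and resolves lookups in O(log n)
hops rather than keeping the whole table locally, but the key-to-node
mapping is the same idea.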

- question: what does FLAPPS provide that a DNS-based system does not?
(We are working on a conceptual, progressive (compatible) evolution of
the DNS towards a generalized/global service.)

DNS resolves/provides a mapping of X to Y (name to IP address, name to
CNAME to IP address, etc.) FLAPPS isn't a resolver service but a P2P
(or tier-to-tier) routing and forwarding infrastructure. FLAPPS uses
composable names as routing identifiers, inasmuch as the URLs and URNs
can be decomposed and used as routing identifiers between cache groups.
Add some flexibility to the routing protocol so that the updates can
carry additional data, and new message forwarding behaviors are the
result (besides the usual shortest distance forwarding behavior.)
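
To make the contrast with DNS a bit more concrete, here's a rough sketch
of routing on decomposed names -- my own illustration of the general idea,
not FLAPPS's actual data structures or wire protocol: a URL is broken into
hierarchical components and matched longest-prefix-first against name
prefixes announced by peer groups, much as IP forwarding matches address
prefixes.

    # Toy longest-prefix forwarding over decomposed URL components.
    from urllib.parse import urlsplit

    def decompose(url):
        """Turn a URL into a hierarchical routing identifier (a tuple)."""
        parts = urlsplit(url)
        host = tuple(reversed(parts.hostname.split(".")))   # org, example, ...
        path = tuple(p for p in parts.path.split("/") if p)
        return host + path

    class NameRouter:
        def __init__(self):
            self.table = {}   # name prefix (tuple) -> next-hop peer group

        def announce(self, prefix, peer):
            self.table[prefix] = peer

        def next_hop(self, url):
            """Longest matching announced prefix wins."""
            name = decompose(url)
            for length in range(len(name), 0, -1):
                peer = self.table.get(name[:length])
                if peer is not None:
                    return peer
            return None   # no route: hand off to a default peer, or drop

    router = NameRouter()
    router.announce(("org", "example"), "cache-group-1")
    router.announce(("org", "example", "mirrors"), "cache-group-2")
    print(router.next_hop("http://mirrors.example.org/debian/README"))

Swap out the "longest prefix wins" rule, or let announcements carry extra
data, and you get the different forwarding behaviors mentioned above.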

Even if I were to propose that FLAPPS were more than an application-level
routing and forwarding infrastructure, I would have to answer the question
of where the inter-application layer is placed. Clearly (or maybe not so
clearly) FLAPPS provides an inter-application function, but does that layer
live at the same level as FLAPPS does or above?

9/11 showed the USA that it could be attacked at home. So you started
developing a national firewall system, named DHS - to protect your
families, cities, etc. - a step further. This is not the Army, but it is
no longer private or community police either. Most countries have had
that for a while. This firewalling, however, must not be of the same
nature (that would be a liberty killer: like fortifications or, as you
say, a Maginot Line).

9/11 showed very plainly that asymmetric warfare works, and protecting
against asymmetric warfare requires a lot more diligence on a larger
number of fronts than previously imagined or considered. The more complex
the network and its interconnections, the harder it is to control or
monitor (take the US borders or ports as two good examples.) But that's
merely stating the patently obvious.

When you look at the threats on/from the network, you see they vary with
your point of view. When you consider your machine, current firewalls are
OK. When you consider applications, the Intelliwall papers explain well
what you must consider (though they do not fully emphasize the threats to
distributed applications): this shows there is a step above traffic
filtering. Threats on groups of users (agora, VPN, Externets) are on
their access gateways, directory structures, etc. These groups can be of
different sizes (family, corporations, universities, cities, regions,
nations, trades, etc.). The global threats are on the directory roots and
on the local views of these roots (local names, anycasts): this is what
we are working on.

The more the system is controlled, the sharper the punctuations in the
equilibrium become. The more the system is broken into smaller
compartments, the easier it becomes to take out any one compartment, no
matter how well designed that compartment's protection is. But that's not
all: compartmentalizing is a constraint relationship; there are so many
more compartments that must be disrupted to have an effect. The ideal
would be to figure out how to remediate the damage as it is perpetrated,
i.e. self-healing systems. In a lot of respects,
IP-the-interacting-protocols is a self-healing system.
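
As a toy illustration of that self-healing property (the topology and
names are invented for the example), the sketch below simply recomputes a
path around a failed node, much as link-state routing reconverges after a
failure:

    # Route around a failed node by searching the surviving topology.
    from collections import deque

    def find_path(links, src, dst, failed=frozenset()):
        """Breadth-first search that ignores failed nodes."""
        queue = deque([[src]])
        seen = {src}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == dst:
                return path
            for neighbor in links.get(node, ()):
                if neighbor not in seen and neighbor not in failed:
                    seen.add(neighbor)
                    queue.append(path + [neighbor])
        return None   # partitioned: no surviving route

    links = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C"],
    }
    print(find_path(links, "A", "D"))                 # ['A', 'B', 'D']
    print(find_path(links, "A", "D", failed={"B"}))   # ['A', 'C', 'D']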

Now, wars have always produced progress. While working on root-view
security for surety reasons, we obviously uncover new possibilities and
possible innovations. This is also what interests us.

No argument there -- after all, the von Neumann architecture was a direct
result of H-bomb research.


-scooter