
Re: DARPA get's it right this time, takes aim at IT sacred cows

2004-03-16 03:42:53
At 21:45 15/03/04, Scott Michel wrote:
jfcm wrote:
Interesting, as this matches the conclusions of our own meetings in Dec/Jan
on national vulnerability to the Internet.

Sounds like the internet is a threat, not a tool. (Ok, I know you're not a native English speaker, but it was hard to resist.)

It sounds right. The threats are the weaknesses of the tool, and their impact on critical resources. Let's assume the DNS "stops" (attacked, polluted, hijacked, disputed): what is the impact on the nation?

We identified five main (immediate/medium-term) threats (and agree with the USG that they may be critical [we say "vital"]):
- DNS centralization
- IPv6 unique numbering plan
- mail usage architecture (not SMTP)
- governance confusion
- non-concerted national R&D efforts (starting with the US one)

This leads to the notion of a "national firewall": not the same architecture and layers as a local firewall, but an equivalent mission. One can consider global, user-group and local firewalling. Global means preventing destructive access to the nets (like sending spammers to jail, making sure the root is stable, secure and safe, and reacting to PathFinder-like attempts). User groups (nations, regions/states, cities, structures, corporations, etc.) mean addressing, directory and access-portal protection. Local is what we are used to (but here port and traffic filtering should be less and less the main issue and call for more sophisticated tools).
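As a toy illustration of that "local" level (classic port and traffic filtering, the part I say should become less and less the main issue), here is a minimal sketch; the rule set, addresses and field names are invented for the example, not taken from any real deployment.

    # Toy sketch of the "local" firewalling level: per-packet port/traffic
    # filtering at a host or site boundary. Rules and addresses are made up.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_port: int
        protocol: str  # "tcp" or "udp"

    ALLOWED_TCP_PORTS = {25, 53, 80, 443}   # SMTP, DNS, HTTP, HTTPS
    BLOCKED_SOURCES = {"192.0.2.13"}        # documentation address, example only

    def local_filter(pkt: Packet) -> bool:
        """Return True if the packet is accepted, False if it is dropped."""
        if pkt.src_ip in BLOCKED_SOURCES:
            return False
        if pkt.protocol == "tcp" and pkt.dst_port in ALLOWED_TCP_PORTS:
            return True
        if pkt.protocol == "udp" and pkt.dst_port == 53:
            return True
        return False

    if __name__ == "__main__":
        print(local_filter(Packet("192.0.2.7", 443, "tcp")))   # True
        print(local_filter(Packet("192.0.2.13", 80, "tcp")))   # False

The global and user-group levels are not per-packet decisions of this kind, which is why the local level alone cannot be the whole answer.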


Agreed. But for a non-US observer this sounds in line with the pro-IPv6
stance of the DoD: obeying the http://whitehouse.gov/pcipb marching orders.

To be fair, it was NATO and the Allies who started in the v6 direction first. The DoD is merely keeping up with its various international partners.

Hmmm. Maybe you did not evaluate what the worldwide control of IPv6.001 gives to whoever allocates the addresses (ICANN) and would build and run an IPv6 "DNS".

Maybe I am candid, but I tend to think that when a military person speaks,
it is with a purpose, and usually saying exactly what you want is the
best way to obtain it. The article does not say they want to kill IP, but
that they want solutions. There are three possibilities to support changes:
- to fix IP
- to change IP
- to replace IP

Generally speaking, military officers do speak with a purpose in mind, but I disagree with the thrust you're enumerating. "Fix IP" is probably true, "Change IP" fits with "Fix IP", but "Replace IP" is patently untrue. After all, the DoD spent a lot of time and money on ATM in the mid and late '90s only to fall back to IP. ATM was an abject failure.

I will not comment on ATM. "Fix IP" means in my mind keeping IPv4/IPv6 and modifying some features. "Change IP" means looking at a new IPvX. "Replace IP" means another kind of protocol/paradigm. I am not talking about what DARPA may do, but about what we have as options.

I read that they have identified a need (the same as NSI said they had
through PathFinder) and that the ball is in the IAB's and IETF's court (for a
short while, if you consider how long NSI waited before suing ICANN).

I'm not sure I agree with this at all -- the research community is much more agile than the IETF and IAB, so it's more likely that the IETF will play catch-up as the DARPA research produces tangible results.

I think we agree there. I say the ball is in the IAB/IETF's court, and that they have to move fast, or DARPA and many others will take the lead.

The problem is that this "agility" has to be housed somewhere. Let's assume DARPA produces tangible results soon (which I find quite credible): we are not in the '80s anymore, on the "ARPA Internet". We are on the global Internet, and a global body has to publish them. This leads to the ITU. And as long as an ITU-I has not been created for that purpose, I am afraid it is acceptable to no one.

Let's see the situation through military eyes. The battlefield is what it is.
For the new Cyber Forces, the Internet battlefield is what is used
today. So they are interested in what is available/under serious
development, or in what worked before/alongside the IP technology and
could be deployed quickly (so, most probably, a total change for
a clean-sheet, low-cost and confidential restart).

The emphasis is and has been "network centric warfare". The current DARPA director is interested in a good mix of solutions that can be deployed in the immediate, near and future terms. "Deployed" as in "deployed out in the field with the warfighter" (on the back of a US Marine.)

We agree. But we are considering today's/tomorrow's warfare. The Iraq action started with the first real "cyberbattle", a two-day saturation spamming preparation. Exactly like the artillery before a Marines landing (which they had on the border too, if I am right?).

Snipers coordinate through cyberspace. The soft destabilization of Europe is hopefully right now a cyber activity, rather than a Marines one :-).

Throwing away the current IP infrastructure or completely redesigning the protocols would be one of those way off in the future projects and has very little chance for success (refer to the DoD and ATM as a good example.)

Throwing away current IP would be stupid, because it is here. But we all know that IP cannot support most of what we miss today (in most cases, that is the very reason why we miss it).

Your army combat engineer example is a fairly decent metaphor for what the proposed programs want, but throwing away IP is not a tenable solution.

May I suggest that these guys' priority is not really to respect RFCs,
but to protect your lives?

Protecting the warfighter's life, actually.

Hmmm. I think you really need to read http://whitehouse.gov/pcipb. The documented priority is not only the warfighter's life, but the nation's life and way of life, in protecting critical installations and systems. SCADAs are a priority. I agree that DARPA also looks at how to quickly deploy responses for urban warfare. But it is "also". All the more so since cyberwarfare is a very important key to urban warfare.

Real life is not monolithic. Fighting for one model against another is
a very strange idea. Would aggregating models, so they may have
some consistency (as the low layers do) and synergy, and picking the
one which works for each task at hand, not be a more pragmatic and
scientific approach?

No, real life is not monolithic, but invariant models like the 4- and 7-layer models describe real systems and relationships. This was true a few minutes ago when last I looked at mathematics and physics, and their success at describing real world phenomena.

I'm not saying the model is complete, just like physics is still looking for its TOE model. But the model isn't completely wrong, either. Augmenting the 4- and 7-layer models with what's been learned is a substantive effort that will produce results, but somewhere, someone has to propose what's missing and what needs to be added.

Full agreement. My model - somewhat validated over the last 20 years - is not "augmented" but aggregating (and augmented through operation and usage layers/parts). If the 4-, 7- or any other layer model works for a specific need, that is just fine, as long as all the models are permitted to interact.
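For reference in this 4-layer/7-layer discussion, the usual rough correspondence between the two models can be written down directly; this is only the standard textbook mapping, nothing specific to either position in this thread.

    # Conventional rough mapping between the OSI 7-layer model and the
    # 4-layer TCP/IP model referred to above.
    OSI_TO_TCPIP = {
        "Physical":     "Link",
        "Data Link":    "Link",
        "Network":      "Internet",
        "Transport":    "Transport",
        "Session":      "Application",
        "Presentation": "Application",
        "Application":  "Application",
    }

    if __name__ == "__main__":
        for osi_layer, tcpip_layer in OSI_TO_TCPIP.items():
            print(f"{osi_layer:<14} -> {tcpip_layer}")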

May I suggest that the example is good as an image.

SMTP's problems are a better example of what's needed, not of what's wrong. I'd vehemently argue that SMTP isn't broken because it works. If one is going to replace SMTP, the replacement had better do what SMTP now does. SMTP does show its age from its origins in batch-mode processing days, but there comes a point at which "plus ca change, plus c'est la meme chose" when applied to new solutions.

Full agreement. You were objecting that SMTP was not a good image; I just responded that it was a good icon of what basically is flawed, i.e. the "open postal" concept which permits spoofing to be a global conceptual component.

Now, I do not only agree about SMTP, but I think it suffered from being too much enhanced. You were objecting about the "batch processing" days. I understand what you mean. OK, the "mail" concept was first used to report on the success of a "batch" (script). And I do not really understand what would be wrong about "batch": it is only a sophisticated repetitive or one-shot command. An SQL request is a batch. Interactive commands only is unrealistic.

To come back to SMTP, you are right. Degrade SMTP to its minimum (full compatibility with existing deployment) and use it only as a ubiquitous signaling transport system in a store-and-retrieve architecture (memory sharing). You then have a good and fully compatible transition to the next network paradigm.
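A minimal sketch of that idea, assuming nothing more than Python's standard smtplib and a locally reachable MTA (the host, addresses and payload format below are placeholders): the mail carries only a small machine-readable "signal" pointing at stored data, which is retrieved by other means.

    # Sketch: SMTP used only as a ubiquitous signaling transport in a
    # store-and-retrieve architecture. The message body is a tiny pointer
    # ("signal") to shared data, not the data itself.
    import json
    import smtplib
    from email.message import EmailMessage

    def send_signal(object_id: str, location: str) -> None:
        signal = {"object": object_id, "stored_at": location, "action": "retrieve"}
        msg = EmailMessage()
        msg["From"] = "signal@example.org"      # placeholder addresses
        msg["To"] = "peer-node@example.net"
        msg["Subject"] = f"SIGNAL {object_id}"
        msg.set_content(json.dumps(signal))
        # Plain SMTP to a local relay; any existing deployment would do,
        # which is the point of keeping full compatibility.
        with smtplib.SMTP("localhost", 25) as s:
            s.send_message(msg)

    if __name__ == "__main__":
        send_signal("doc-1234", "https://store.example.org/doc-1234")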

Why not have a try at:
- analyzing an extended network model where the various datacoms
models and layers are encapsulated into the physical,
operational and usage layers?

So long as it stays in 3 dimensions, which is what most people can tractably handle. If it can't be easily visualized, it's not going to be successful. Also keep in mind that humans aren't particularly good at drawing 3d diagrams on paper (I'd argue that's one of the successes of the 2d layer diagrams.)

Full agreement.

The model I use is only partly 3D (a cylinder figured through half a covering ellipse on top, like an open binder you would look at from the cover side). This is only to show two things:
- the unlimited continuity there should be at the same layers, whatever the layers may be, and a way to sort it (like from individual to groups);
- the fact that there are common spines.

The 2D layer diagrams are good as snapshots. My "binder" parable permits showing that they are only snapshots which have to fit together, and that they have some elements/constraints in common.

- accepting that the datacoms ecosystem needs to support many
different data and object granularities, including the current IP
ones, and having a try at a universal packet protocol, starting
from IP as the prevalent one and progressively extending its
capabilities?

Well, sure, that makes sense, but I doubt that you're going to find a universal packet protocol that's any more universal than IP.

:-) Which IP?

The point is not in defending IP or not, but in taking advantage of accumulated experience, demands, etc., and having a clean-sheet review. If the result is IP, just fine. But I doubt it. Maybe not far from it, but I doubt it, because IP is datagram only.

As one other responder said, there is a need to accommodate different addressing styles that separate identity from location. I agree with the sentiment. So, perhaps it is only necessary and sufficient to extend or redefine IP's addressing?

I fully agree with this. I propose that the IPv6.010 numbering scheme be defined as a universal technology+routing+addressing+sub-addressing scheme, for that reason and to validate IPv6 as a multi-numbering-plan solution. If we do not start with two plans, how will we be sure that the IPv6 protocol, software and equipment are multi-numbering-plan compliant?
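To make the identity/location separation concrete, here is a minimal sketch (the directory, names and addresses are invented for illustration, not any particular proposal): a stable identifier stays constant while the locator it resolves to can change underneath it.

    # Sketch of separating identity from location: a stable identifier maps
    # to one or more current locators (addresses), which can change without
    # touching the identifier. Everything here is illustrative only.
    from typing import Dict, List

    class LocatorDirectory:
        def __init__(self) -> None:
            self._table: Dict[str, List[str]] = {}

        def register(self, identity: str, locator: str) -> None:
            """Bind a (possibly additional) locator to a stable identity."""
            self._table.setdefault(identity, []).append(locator)

        def move(self, identity: str, old: str, new: str) -> None:
            """The endpoint moved: its identity stays, its locator changes."""
            locators = self._table.get(identity, [])
            self._table[identity] = [new if loc == old else loc for loc in locators]

        def resolve(self, identity: str) -> List[str]:
            """Peers address the identity; the directory supplies locations."""
            return list(self._table.get(identity, []))

    if __name__ == "__main__":
        d = LocatorDirectory()
        d.register("host:alpha.example", "2001:db8::1")   # documentation prefix
        d.move("host:alpha.example", "2001:db8::1", "2001:db8:1::7")
        print(d.resolve("host:alpha.example"))            # identity unchanged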

Or perhaps it's only necessary and sufficient to design a universal application-level forwarding layer? (Warning: plug for my own research called FLAPPS, http://flapps.cs.ucla.edu/)

Very interesting. Right in what I currently consider :-)

1. It fits in what I name the inter-application layer. This is also memory sharing, which is (IMHO) one of the ways to the future, coupled with SMTP or SMS, or whatever you like for signal/script command sending. Please note that this is datacoms-model independent. Replace IP by X.25: your model is unaffected, if I read it correctly?

2. This is an attempt at a generalization of the "peer-to-peer" concept: I use "tier and tier" (3&3), like in the street or on the phone. There is no prerequisite other than being layer compatible.

Did this research lead to working solutions? The real point is memory sharing. But universalization cannot come from peer-to-peer and higher, because it calls for real people to group first. It can, however, come from private continuity management: if you use a solution to organize your own virtual system, you in addition become compatible with any tier using the same solution.
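Not FLAPPS itself (see the URL above for that), but a minimal sketch of the general idea under discussion: application-level, name-based forwarding, where each node matches a request name against a prefix table and hands it to the next peer, independently of what sits underneath (IP, X.25 or anything else). The topology, names and prefixes are invented for the example.

    # Sketch of application-level forwarding: requests are routed by name
    # prefix from peer to peer, above whatever transport is in use.
    from typing import Dict, Optional

    class Node:
        def __init__(self, name: str) -> None:
            self.name = name
            self.neighbors: Dict[str, "Node"] = {}   # name prefix -> next hop
            self.local_prefix: Optional[str] = None  # names served locally

        def serve(self, prefix: str) -> None:
            self.local_prefix = prefix

        def link(self, prefix: str, peer: "Node") -> None:
            self.neighbors[prefix] = peer

        def forward(self, request: str) -> str:
            """Deliver locally, or forward on the longest matching prefix."""
            if self.local_prefix and request.startswith(self.local_prefix):
                return f"{self.name}: delivered {request}"
            candidates = [p for p in self.neighbors if request.startswith(p)]
            if not candidates:
                return f"{self.name}: no route for {request}"
            best = max(candidates, key=len)
            return self.neighbors[best].forward(request)

    if __name__ == "__main__":
        a, b, c = Node("A"), Node("B"), Node("C")
        c.serve("svc/archive/")
        a.link("svc/", b)
        b.link("svc/archive/", c)
        print(a.forward("svc/archive/report-2004"))   # delivered at C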

Thank you.
jfc