
Re: DARPA get's it right this time, takes aim at IT sacred cows

2004-03-16 18:03:10
jfcm wrote:

At 21:45 15/03/04, Scott Michel wrote:
We identified five main (immediate/middle-term) threats (and agree with the USG that they may be critical [we say "vital"]):
- DNS centralization
- IPv6 unique numbering plan
- mail usage architecture (not SMTP)
- governance confusion
- non-concerted national R&D programs (starting with the US one)

You've managed to identify operational problems, not protocol problems. The Internet's continued operation may indeed face some serious challenges if certain trends continue, but that's not a compelling reason for a "national firewall" as you described. At the risk of being particularly crass, it sounds a lot like building the Internet equivalent of the Maginot Line.

It's a noteworthy proposal, as you described it, but the management of such an entity would be hideous. Even if the management and policies of Internet operation/management could be compartmentalized as you describe, you'd still have roughly the same problems with domain names, address allocation, etc. I'm not sure I see what the advantages would be.

To be fair, it was NATO and the Allies who started in the v6 direction first. DoD is merely keeping up with its various international partners.


Hmmm. Maybe you did not evaluate what worldwide control of the IPv6.001 plan gives to whoever allocates the addresses (ICANN) and would build and run an IPv6 "DNS".

A few years back, I co-authored a few whitepapers for customers who were wondering whether they should head down the IPv6 road because their European partners were already doing so. I'm familiar with some of the history.

The problem is that this "agility" has to be housed somewhere. Let's assume DARPA produces tangible results soon (which I find quite credible). We are not in the '80s anymore, on the "ARPA Internet"; we are on a global Internet, and a global body has to publish it. This leads to the ITU. And as long as an "ITU-I" has not been created for that purpose, I am afraid it is acceptable to no one.

I was waiting for the ITU to get dragged into this discussion. Yup, the same folks who brought us all of the other unsuccessful networking standards. Sorry if I'm biased here, but as it is said, history books are written by the winners. So, getting back to that discussion about ATM... :-)

We agree. But we are considering today's/tomorrow's warfare. The Iraq action started with the first real "cyberbattle": a two-day saturation spamming campaign as preparation. Exactly like artillery before a Marines landing (which they also had on the border, if I am right?).

Snipers coordinate through cyberspace. The soft destabilization of Europe is, hopefully, right now a cyber activity rather than a Marines one :-).

What you're referring to is not network-centric warfare at all. You're referring to a specific tactic on the psychological warfare side of the military. It's as important as the "kinetic response" (USAF and USN dropping bombs, the USMC landing ashore, etc.). It's part of attacking a national infrastructure, just as reducing electrical power plants to rubble attacks a nation's infrastructure capabilities.

Network-centric warfare has nothing to do with psyops. N-C has more to do with command and control of assets deployed to a theater, assessing and prioritizing threats, etc.

Hmmm. I think you really need to read http://whitehouse.gov/pcipb. The documented priority is not only the warfighter's life, but the nation's life and way of life, through protecting critical installations and systems. SCADAs are a priority. I agree that DARPA also looks at how to quickly deploy responses for urban warfare. But that is only an "also". All the more so since cyberwarfare is a very important key to urban warfare.

DARPA and DHS (and its DHSARPA) are two separate entities with different missions. DARPA's focus is the warfighter, DHSARPA's focus is homeland security. The two missions may be integrated via a White House position paper but it takes the two agencies to execute the vision.

BTW: DARPA doesn't deploy anyone to anywhere... it does research and evaluation. The respective military service branches deploy people to places using technologies that may have been influenced by DARPA research or evaluated by DARPA (e.g., Internet, M-16s, ceramic armor, UAVs, etc.)

The model I use is only partly 3D (a cylinder depicted through a half-covering ellipse on top, like an open binder viewed from the cover side). This is only to show two things:
- the unlimited continuity there should be at the same layers, whatever the layers may be, and to sort it (e.g., from individuals to groups);
- the fact that there are common spines.

Your model still sounds much too complex for ordinary mortals to grasp. While it sounds like it should show the interactions between layers of different types and models cleanly, it would probably be sliced apart by Occam's Razor. This is why the 4- and 7-layer models work so well: they are the simplest models that suffice.

Well, sure, that makes sense, but I doubt you're going to find a universal packet protocol that's any more universal than IP.


:-) Which IP?

Take your pick. Something that resembles the packet-oriented system we all love and enjoy. Addressing schemes allow IPv<whatever> to grow and evolve, but the underlying philosophy behind it remains the same.

I propose that the IPv6.010 numbering scheme be defined as a universal technology+routing+addressing+sub-addressing scheme, for that reason and to validate IPv6 as a multi-numbering-plan solution. If we do not start with two plans, how will we be sure that the IPv6 protocol, software, and equipment are multi-numbering-plan compliant?

At which point we're back to a complete mess again, when all the "stakeholders" get the different addressing schemes that make just about everyone happy. Or at least we fall into the proverbial "while you can please some of the people some of the time, you can always piss off all of the people all of the time."

Of course, multiple addressing schemes tend to raise the question of "why have two if one suffices?" (the usual necessary-and-sufficient argument, Occam's Razor again), which leads full circle back to identifiers vs. locators. Wash. Rinse. Repeat.
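
(An aside for readers following along: here is a minimal Python sketch of what the identifier/locator split means in practice. The class and names are hypothetical, not any particular protocol; the point is just that a stable endpoint identifier maps onto whatever routable locators the endpoint currently has, and the locators can change without the identifier changing.)

# Minimal sketch of an identifier/locator split (hypothetical names,
# not any real protocol): a stable endpoint identifier maps to the
# locators (routable addresses) the endpoint is currently reachable at.

class LocatorMap:
    def __init__(self):
        self._map = {}  # identifier -> list of locators

    def register(self, identifier, locator):
        """An endpoint announces a locator it is currently reachable at."""
        locs = self._map.setdefault(identifier, [])
        if locator not in locs:
            locs.append(locator)

    def move(self, identifier, old_locator, new_locator):
        """Renumbering/mobility: the identifier stays, the locator changes."""
        locs = self._map.setdefault(identifier, [])
        if old_locator in locs:
            locs.remove(old_locator)
        if new_locator not in locs:
            locs.append(new_locator)

    def resolve(self, identifier):
        """Transport binds to the identifier; forwarding uses a locator."""
        return list(self._map.get(identifier, []))


if __name__ == "__main__":
    m = LocatorMap()
    m.register("host-abc123", "2001:db8:1::7")
    m.move("host-abc123", "2001:db8:1::7", "2001:db8:2::9")
    print(m.resolve("host-abc123"))  # ['2001:db8:2::9']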

Or perhaps it's only necessary and sufficient to design a universal application-level forwarding layer? (Warning: plug for my own research called FLAPPS, http://flapps.cs.ucla.edu/)

Did this research lead to working solutions? The real point is memory sharing. But universalization cannot come from peer-to-peer and higher, because it calls for real people to group first. It can, however, come from private continuity management: if you use a solution to organize your own virtual system, you in addition become compatible with any third party using the same solution.

FLAPPS stemmed from the URL-based routing and forwarding work in web caching that I did a number of years ago with another advisor. It's 70,000 lines of code and will hopefully be more widely available after I graduate. Yes, the code has been demonstrated to do what it claims, but the glaring missing piece is DHT emulation -- which I've tried to avoid, but will have to add in order to satisfy recurring reviewer comments.
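
(To give a flavor of what "URL-based routing and forwarding" means, here is a toy Python sketch of longest-prefix matching on URL-like names. This is not FLAPPS's actual code or API -- the table and peer names are made up -- it just illustrates forwarding a request toward the peer whose name prefix gives the longest match.)

# Toy illustration of application-level forwarding on URL-like names
# (not FLAPPS itself): each table entry maps a name prefix to a
# next-hop peer, and a request is forwarded to the peer whose prefix
# is the longest string match against the requested name.

class NamePrefixTable:
    def __init__(self):
        self.routes = {}  # name prefix -> next-hop peer

    def add_route(self, prefix, next_hop):
        self.routes[prefix.rstrip("/")] = next_hop

    def next_hop(self, name):
        best_prefix, best_hop = "", None
        for prefix, hop in self.routes.items():
            if name.startswith(prefix) and len(prefix) > len(best_prefix):
                best_prefix, best_hop = prefix, hop
        return best_hop


if __name__ == "__main__":
    table = NamePrefixTable()
    table.add_route("http://example.org/",        "peer-A")
    table.add_route("http://example.org/mirrors", "peer-B")
    # The more specific prefix wins, as in IP longest-prefix match:
    print(table.next_hop("http://example.org/mirrors/fedora/x86_64"))  # peer-B
    print(table.next_hop("http://example.org/index.html"))             # peer-A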