
Re: DARPA gets it right this time, takes aim at IT sacred cows

2004-03-15 06:45:01
Dear Scott,
Interesting, as this matches the conclusions of our own meetings in Dec/Jan
on national vulnerability to the Internet.

At 03:48 14/03/04, Scott Michel wrote:
DARPA's network research direction has been somewhat anemic over the last
couple of years, given the force protection focus and GWOT mission to
which DARPA has adapted. It's pretty easy to overreact to the DARPAtech
stuff, esp. when a PPT slide or news article says "IP is broken".

Agreed. But for a non-US observer this sounds in line with the pro-IPv6
stance of the DoD: obeying the http://whitehouse.gov/pcipb marching orders.

IPv6, IPsec, DNS and gateway protocol redesign (detailed further, maybe,
in the preparatory 2002/11/15 document)?

IP isn't broken. From a program management perspective, "IP" is merely
referring to a large number of interacting protocols, from the lowest
level physical layer to the application layer. If one reads the article
with a little more care and not as a manifesto, one sees that DARPA is
interested in a protocol suite where static (wired) networks are a special
case. What exists is a network system where dynamic (mobile/unwired)
network management is grafted onto the static network, treating the
dynamic network as a special case. DARPA wants to change the way protocols
are designed, so that the network is designed primarily for dynamic nodes
(and all of the
overhead that entails). I wouldn't read much more into the program
statements than that, despite the fact that controversy makes good press.

Maybe I am naive, but I tend to think that when a military person speaks,
it is with a purpose, and that usually saying exactly what you want is the
best way to obtain it. The article does not say they want to kill IP, but
that they want solutions. There are three possible ways to support change:

- to fix IP
- to change IP
- to replace IP

My reading is that they are interested in all three of them, in parallel.

I read that they have identified a need (the same as NSI said they had a
need through PathFinder) and that the ball is in the IAB's and IETF's court
(for a short while, if you consider how long NSI waited before suing ICANN).

Let us hope the reactions here will be more positive than those to Richard
Clarke, or to my own December queries.

One really good example of what the program is most likely aiming toward
is the MIT RON research. It's not the IP routing protocols or the 2-tier
routing hierarchy that's broken, it's the fact that these protocols
converge so slowly to repair the network. Thus, RON is successful in
that traffic can continue to get to its destination via the RON overlay
despite the routing reconvergence and the time it takes. Currently, RON
claims to improve reliability by orders of magnitude rather than fixing
routing protocol brokenness.
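
To make the overlay idea concrete, here is a minimal sketch of the
fallback logic, with hypothetical hostnames and a made-up probe() helper,
not RON's actual code: try the direct path and, if it fails, relay
through whichever overlay peer still answers.

import socket
import time

# Hypothetical overlay nodes; placeholders, not a real deployment.
PEERS = ["peer-a.example.net", "peer-b.example.net"]
DST = "dst.example.net"   # final destination

def probe(host, port=7, timeout=1.0):
    # Time a TCP connect to the host; return the round-trip estimate,
    # or None if the host is unreachable within the timeout.
    try:
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def choose_path():
    # Prefer the direct Internet path. If it is down, fall back to a
    # one-hop relay through the fastest overlay peer that answers,
    # instead of waiting for routing to reconverge. (A real overlay
    # would also probe the relay-to-destination leg.)
    rtt = probe(DST)
    if rtt is not None:
        return "direct", rtt
    alive = []
    for peer in PEERS:
        rtt = probe(peer)
        if rtt is not None:
            alive.append((peer, rtt))
    if not alive:
        return "unreachable", None
    relay, rtt = min(alive, key=lambda pair: pair[1])
    return "via " + relay, rtt

print(choose_path())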

Let us see the situation through military eyes. The battlefield is what it
is. For the new Cyber Forces, the internet battlefield is what is used
today. So they are interested in what is available or under serious
development, or in what worked before or alongside the IP technology and
could be deployed quickly (so, most probably, a total change: a clean-sheet,
low-cost and confidential restart).

They also want their own controlled solutions. The same as an army uses
existing bridges, even poor ones; mends them if needed; and builds its
own ones when necessary.

May I suggest that these guys' priority is not really to respect RFCs,
but to protect your lives?

The article also mentioned something along the lines of "Redesign The
Seven Layer Model!" Frankly, I've always preferred the four layer IETF
model because it didn't have the extra useless layers; but hindsight is
20/20, after all. I could look at the ceiling and foresee the session
and presentation layers suffering the death they truly deserve, but
the remaining layers staying intact. Layers may need subdividing, or even
outright additions to the current model, so that overlays and the recovery
semantics they provide are more explicit.

Real life is not monolithic. Fighting for one model against another is
a very strange idea. Would it not be a more pragmatic and scientific
approach to aggregate models, so they may have some consistency (as the
low layers do) and synergy, and to pick the one which works for each
task at hand?

SMTP is not a good example of what's wrong with IP, and I'm not even sure
why COL Gibson or the other presenters used what's arguably the most
successful Internet protocol as an example.

May I suggest that the example is good as an image? Datagram, mail and
file concepts are part of the "postal-like" paradigm of the 60s. Under
this paradigm, everyone can send you a text and you sort the pieces when
received. You have to authenticate every piece of data you receive and
can be subject to saturation deliveries (DoS, spam, viruses). Under this
paradigm you defend yourself at your gate, like a fort. You are under
enemy fire.

You may imagine other paradigms where you protect yourself remotely,
put the enemy under your own fire first, and protect your communication
lines.

Of course, if DARPAtech had worded their presentations with less
controversy, most participants would have yawned and said "Oh, yeah,
business as usual -- nothing interesting here." Death and complete
redesign of IP? Not likely in my lifetime 'cos "It just works, mate!"

In my own lifetime I have seen three working technologies (based upon
three different paradigms) used for what users name the "internet" today
(Tymnet, OSI, Internet). Only the first one supported all three (as well
as most of the leading private ones). And what I hear everywhere is "when
you think of it, it just does not work". So I suspect I will see one or
two more.

Why not have a try at:

- analyzing an extended network model where the various datacoms models
and layers are encapsulated into the physical, operational and usage
layers?

- accepting that the datacoms ecosystem needs to support many different
data and object granularities, including the current IP ones, and having
a try at a universal packet protocol, starting from IP as the prevalent
one and progressively extending its capabilities?

- also working on a progressive unification of the same layers across
models, when possible, to maintain the founding unity and the consistency
a distributed system needs.

jfc