
Re: Root Anycast

2004-05-19 02:21:37


--On Wednesday, May 19, 2004 01:38:21 +0200 jfcm <info@utel.net> wrote:

Dear Måns,
your points would be well taken if we were talking about the same thing.

We talk about the scalability and stability of a global name-to-number
system. Between us lies some disagreement on how this should be achieved;
that is all. 

On 21:34 18/05/04, Måns Nilsson said:
--On Tuesday, May 18, 2004 18:01:05 +0200 jfcm <info@utel.net> wrote:
1. First target: distribution of the root machines through a root
server matrix and core network.
Yes, this is already done. It works, even if it is not top-guided as you
envision.

I do not envision it as top-guided. What we plan is a service of a few
separate machines, collecting the national root matrix data (meaning:
everything national and above) and converging towards a unique version,
among themselves and with other national equivalent agencies. (NB: this
concerns the DNS among many other real-time, distributed, mutually managed
directories.)

This can be done with caching. You essentially propose a system that does
two things:

* Remove the centralism needed for speedy updates. Right now, a new root 
  zone including delegation data can be made available within 48hrs, 
  including purge of the old one. 

* Initiate a government-to-government negotiation procedure. I see only 
  delay and intervening politics here. Remember -- this is not the length
  of the Meter or the UTC we are talking about, it is names! Names that do
  to people's irrational brains what flags and marching music used to do in
  the nation-states of the early 20th century. 

Then the agreed file will be made available to 10 to 250 regional master
root servers (one per local business area - probably paid for by cities -
to support the addition of the local data). We are not thinking in terms of
DNS control and machine power; we are thinking of surety, security,
stability, and distributed servicing. Ultimately this probably means a
national coverage of 1000 machines, all with the same global and national
data.
 
How does this differ from the current anycast developments? I see, for the
anycast cases where I have spoken to root admins, that local economies
effectively sponsor an instance of one or more of the root servers, and all
is well. If the prefix is just locally announced, one gets
compartmentalisation, should this be desired. 
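To see which instance of an anycasted root server actually answers you,
you can ask for the CHAOS-class TXT record hostname.bind, a convention
most root operators support. A minimal sketch in Python with dnspython;
the address used for k.root-servers.net is copied from the root hints and
is an assumption you should verify:

  import dns.message
  import dns.query
  import dns.rdataclass
  import dns.rdatatype

  # CHAOS-class query identifying the responding anycast instance
  q = dns.message.make_query("hostname.bind.", dns.rdatatype.TXT,
                             rdclass=dns.rdataclass.CH)
  r = dns.query.udp(q, "193.0.14.129", timeout=5)  # k.root-servers.net (assumed)
  for rrset in r.answer:
      print(rrset)

Run from two different networks, this will typically print two different
instance names, which is exactly the local compartmentalisation described
above.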

Prone to failure? OK, but failure of what? Today the big failure is that
we are not free to choose and to develop. The system is not RFC 883
compliant. I cannot choose my data. The root system does not comply with
IETF core values: decisions are not made at the edges.

You can choose to run another root zone. You may mix this data with the
global version, but doing either is an invitation to pain (and your email
address will end up in /dev/null (a.k.a. the loonybox) in a lot of people's
email systems). The DNS was designed (much like E.164) to be The One
system. Any attempts to break that are painful. 
 
Anyway, you can choose the directory system you want. If you do not like
it, you do not use it. 

Correct. 
 
2. Second target: a user MITM providing "hardware, software and
brainware firewalling". At the root system level this means that the user
is to cache his own root system.

Which part of the present caching resolver does not provide this 
service today? Aren't you reinventing the wheel here?

:-) We do not speak of the same thing here. You only refer to the 15 KB
root file and to the 20-year-old way of using the Internet.

If you want people to change their poor habits, you can only do so by
proposing better ways as part of better services to them. The cache is of
no use to them; it is of use to you. Let us clarify what you want and what
they want. Let us not just continue with an old solution because you did
not think of something new and better.

I, on the contrary, argue that the cache is good. It is a carefully
balanced engineering compromise between distributed and centralised load.
Again, instead of fixing what works (albeit with a bit too much load to be
comfortable), I suggest fixing the cases where caching is not used, i.e.
broken resolvers and clients. 
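To make the compromise concrete: a caching resolver holds each answer only
for its TTL, so upstream servers (ultimately the roots) see one query per
name per TTL window instead of one per client lookup. A minimal
illustrative sketch in Python, not any particular resolver's
implementation:

  import time

  class TTLCache:
      """Serve answers locally until they expire; only misses go upstream."""

      def __init__(self):
          self._store = {}                      # key -> (expires_at, value)

      def get(self, key):
          entry = self._store.get(key)
          if entry and entry[0] > time.time():
              return entry[1]                   # hit: no upstream query
          self._store.pop(key, None)            # expired or never cached
          return None

      def put(self, key, value, ttl):
          self._store[key] = (time.time() + ttl, value)

A broken client that ignores this and queries the roots directly for every
lookup is precisely the load problem worth fixing.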

You want to reduce the calls to your system, right? Let us drop the "cache"
idea, which is a piece of _your_ system in theirs, and instead propose an
update of _their_ system - like anti-virus updates (ever heard that
anti-virus vendors run huge 50x1G systems?). And let us discover what more
a user system can bring to its user-owner. Once the user has started using
and enjoying _his_ system, you will obtain what you want.

Private roots are not subject to DoS. They certainly permit survival for
a few hours, days, and probably even months. Adding all the root
themes we can objectively consider today for ubiquitous new services,
plus a "first necessity" software kit and root, we are probably
talking about an ASN.1 structure of less than 20 compacted KB (comparable
to anti-virus updates).

Private roots are subject to confusion, misdirected micromanagement by 
local admins, oversensitivity to local politics, a split vision of what 
must by design be unified, and endless user frustration. I have tried
this in a large corporate network, and it was, even there with a clear
chain of command, a horrible mess. Never, ever again will I take
anything like it outside a lab (except to kill it).

We are not talking of the same thing. You talk of a system where you have
admins. Bloody centralization :-) 

I have yet to see a computerised information service that does not need
maintenance by competent staff. 

Are you asking GWB every time you want
to call a friend on the phone? 

No, the ITU. Plague or cholera? 

This is what some people do on the DNS, and we want them to stop. Calling
on the root is like calling an operator. Do you do it often? Is it not the
nightmare of creating, training, and managing an Operator Corps with local
branches?

You are utterly confused. One thing here is correct -- the brokenness of
certain clients and their associated malware. Let's fix that. Or have you
given up? The other thing is a synchronisation problem. That is *really*
hard; do believe me. 

How many times a year do
you think you really need to update your root file?

Twice a day, if I remember correctly. The current serial on i (the "real" i
in Stockholm, if you happen to distrust anycast) is 2004051801, suggesting
that it was updated yesterday. 
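The serial follows the conventional YYYYMMDDnn format, so this is easy to
check mechanically. A minimal sketch in Python with dnspython (2.x API);
the address for i.root-servers.net is copied from the public root hints
and is an assumption you should verify:

  import dns.message
  import dns.query

  # Ask i.root-servers.net directly for the root SOA
  q = dns.message.make_query(".", "SOA")
  r = dns.query.udp(q, "192.36.148.17", timeout=5)  # i.root-servers.net (assumed)
  serial = r.answer[0][0].serial

  date, revision = divmod(serial, 100)              # 2004051801 -> (20040518, 1)
  print(f"root zone generated {date}, revision {revision} of that day")

Two revisions per day is consistent with the twice-daily update cycle
mentioned above.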

Even if not every update contains new data, there must be, for business
reasons, a way to propagate data reasonably quickly, at very short notice.

I am sorry, but I have a strong passion for restoring the name space the
way we created it and the way the users need it, as part of the many tools
and services necessary to support networked interoperations. The DNS
concept is good and can partly do it, if it is not centralized. The root
server load is purely an architectural illness. That illness is due to
the fear of pollution. For years the IETF and ICANN saw the question as "do
we want to risk pollution, or do we want to risk criticism from civil-rights
advocates and from open-root opponents?"

The question today is "do we want to risk a total collapse of the network
and a forced, quick, and disorganized innovation, or do we eventually
accept that innovation is necessary and work smoothly towards it?",
according to ICP-3, which calls on the IETF for experimentation. Again, the
IETF does not respond. 

I am still very concerned about pollution. It cannot be avoided if central
control is abandoned. Period. 

Further, I think that the Internet community, with the root server ops
leading the way, has responded with innovation -- with anycast, supported
by traffic data, and with research clearly showing where the problems are.
The problems are identified, but the solution proposed is not palatable to
all -- and cannot be. But I have strong confidence in the system, because I
see it work despite the horrifying load. And it has never, ever, been down. 

-- 
Måns Nilsson                    MN1334-RIPE
http://vvv.besserwisser.org     +46 706 81 72 04

