
Re: Root Anycast

2004-05-18 17:27:20
Dear Måns,
your points would be well taken if we were talking about the same thing (you will note that I was very careful to quote ICP-3). When was the last time the IETF discussed how to respond to ICANN's ICP-3 call? (That is what I am trying to do.) There is a resulting difference of perspective.

- I talk of the real world while you talk of the current (insecure and overloaded?) implementation of the current DNS architecture.
- I do not want to fix the DNS, I want to free myself from it: it was not designed to support the current load, and it is not managed in a way that governments will ultimately accept. So trying to patch the broken current usage to make it work as before is of no real interest, at least IMHO. [I have a user's point of view.]

The problem we face is an old and overly large monolithic system with a robust yet overloaded engine, with all the problems that come with it. We have to split the zones of usage and risk. Either this is planned by the IETF or it is done by the users. I suggest it is done together.

The novelty of the Titanic was its watertight compartments. Yet she sank. The world has a single DNS zone today.

Let me review the points where we fully agree, and how.

On 21:34 18/05/04, Måns Nilsson said:
--On Tuesday, May 18, 2004 18:01:05 +0200 jfcm <info@utel.net> wrote:
> 1. first target: distribution of the root machines through a root server
> matrix and core network -
Yes, this is already done. It works, even if it is not top-guided as you
envision.

I do not envision it as top-guided. What we plan is a service of a few separate machines, collecting the national root matrix data (meaning: everything national and above) and concerting towards a unique version, among themselves and with the equivalent agencies of other nations. (NB: this concerns the DNS among many other real-time, distributed, mutually managed directories.)

Let us take an example: there is a unique time in nature. Yet there are many reference clocks in the world, and that does not work badly.

Then the agreed file will be made available to 10 to 250 regional master root servers (one per local business area, probably paid for by cities, to support the addition of local data). We do not think in terms of DNS control and machine power; we think in terms of surety, security, stability, and distributed servicing. Ultimately this probably means a national coverage of 1000 machines, with the same global and national data.
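The "concerting" step above can be sketched in a few lines. This is a minimal illustration under my own assumptions (agency names, data shapes, and the conflict rule are all hypothetical, not a specification): each agency publishes its view of per-TLD delegations, identical views are merged into the agreed file, and any disagreement is flagged for human concertation instead of silently polluting others' space.

```python
# Sketch: concert per-TLD delegation data from several national agencies
# into one agreed root file. Agency names and data shapes are illustrative
# assumptions, not a spec.

def concert(feeds):
    """feeds: dict of agency -> dict of TLD -> set of NS names.
    Returns (agreed, conflicts)."""
    agreed, conflicts = {}, {}
    tlds = set()
    for data in feeds.values():
        tlds.update(data)
    for tld in sorted(tlds):
        # Collect every distinct version of this TLD's NS set.
        versions = {frozenset(d[tld]) for d in feeds.values() if tld in d}
        if len(versions) == 1:
            agreed[tld] = set(next(iter(versions)))
        else:
            # Disagreement: record who says what, for human resolution.
            conflicts[tld] = {a: d[tld] for a, d in feeds.items() if tld in d}
    return agreed, conflicts

feeds = {
    "agency-fr": {"fr": {"a.nic.fr"}, "com": {"a.gtld-servers.net"}},
    "agency-se": {"se": {"a.ns.se"},
                  "com": {"a.gtld-servers.net", "b.gtld-servers.net"}},
}
agreed, conflicts = concert(feeds)
```

Here "fr" and "se" are agreed unanimously, while the two views of "com" differ and are set aside as a conflict rather than merged.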

> containing local root information to make it a need. Decrease of the pressure, risk containment, new data, new services.
This is unnecessary and prone to failure.

Right. But this is as unnecessary as liberty.
Prone to failure? OK, but failure of what? Today the big failure is that we are not free to choose and to develop. The system is not RFC 883 compliant. I cannot choose my data. The root system does not comply with the IETF's core values: decisions are not made at the edges.

Anyway, you can choose the directory system you want. If you do not like it, you do not use it. The only rule (well worded in ICP-3) is not to pollute others' space. The DNS root is a 15 K (compressed) file you can update every three months without losing access (except for some ccTLDs like mine, because ICANN does not want to enter a third/fourth secondary and RIPE does not want to respond...).
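To make the point concrete: answering "which servers hold this TLD?" from a locally cached root file needs nothing heavy. A minimal sketch, assuming a simplified snippet of root.zone-style NS lines (the TLD names and servers below are illustrative, and real zone-file syntax has more record types and forms than this parser handles):

```python
# Sketch: serve TLD referrals from a locally cached copy of the root file,
# so everyday lookups never need to reach the root servers. The snippet is
# a simplified illustration of root.zone NS lines, not the full format.

def parse_root(text):
    """Build a TLD -> set-of-NS-names map from simple NS lines."""
    root = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 5 and parts[3] == "NS":
            tld = parts[0].rstrip(".").lower()
            root.setdefault(tld, set()).add(parts[4].rstrip("."))
    return root

SNIPPET = """\
fr.  172800  IN  NS  d.nic.fr.
fr.  172800  IN  NS  e.ext.nic.fr.
se.  172800  IN  NS  a.ns.se.
"""
local_root = parse_root(SNIPPET)

def referral(name):
    """Return the NS set for the rightmost label of a domain name."""
    tld = name.rstrip(".").rsplit(".", 1)[-1].lower()
    return local_root.get(tld, set())
```

An unknown TLD simply returns an empty set, which is where a real resolver would fall back to (or refresh) its upstream copy.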

> 2. second target : a user MITM providing "hardware, software and
> brainware firewalling". At root system level it means that the user is to
> cache his root system.

Which part of the present caching resolver does not provide this service today? Aren't you reinventing the wheel here?

:-) We are not speaking of the same thing here. You refer only to the 15 K root file and to the 20-year-old way of using the Internet.

If you want people to change their poor habits, you can only propose them better ways, for you, as part of better services to them. A cache is of no use to them; it is of use to you. Let us clarify what you want and what they want. Let us not just continue an old solution because we did not think of something new and better.

You want to reduce the calls to your system, right? Then let us stop the "cache" idea, which is something of _your_ system in theirs, and propose an update of _their_ system, like anti-virus updates (ever heard that anti-virus vendors run huge 50x1G systems?). And let us discover what a user system can bring to its user-owner. Once the user has started using and enjoying _his_ system, you will obtain what you want.

> Private roots are not subject to DoS. They certainly make it possible to
> survive a few hours, days, and probably even months. Adding all the root
> themes we can objectively consider today for ubiquitous new services, plus
> a "first necessity" software kit and root, we are probably talking of an
> ASN.1 structure of less than 20 compacted K (comparable to anti-virus
> updates).

Private roots are subject to confusion and misdirected micromanagement by local admins, overly sensitive to local politics, a split vision of what must by design be unified, and endless user frustration. I have tried this in a large corporate network, and it was, even there with a clear chain of command, a horrible mess. Never, ever again will I take anything like it outside a lab (except to kill it).

We are not talking of the same thing. You talk of a system where you have admins. Bloody centralization :-) Do you ask GWB every time you want to call a friend on the phone? That is what some people do with the DNS, and we want them to stop. Calling the root is like calling an operator. Do you do it often? It is not worth the nightmare of creating, training, and managing an Operator Corps with local branches.

How many times a year do you call an operator? How many times a year do you think you really need to update your root file?

> The figures I discussed in a previous memo show that we could then come
> back to a "486DX2". However, discussing root servers the way we consider
> them today would be quite meaningless.

You have a strong passion for doing something to fix the DNS system. I
suggest you channel this passion towards trying to fix all the b0rkened
clients (cf. the studies of root server load referred to earlier here)
before you try to impose breakage onto the well-functioning root server
system.

I am sorry, but I have a strong passion for restoring the name space the way we created it and the way the users need it, as part of the many tools and services necessary to support networked interoperation. The DNS concept is good and can partly do it, if it is not centralized. The root server load is purely an architectural illness. That illness is due to the fear of pollution. For years the IETF and ICANN saw the question as: "do we want to risk pollution, or do we want to risk criticism from civil-rights advocates and from open-root opponents?"

The question today is: "do we want to risk a total collapse of the network and a forced, quick, and disorganized innovation, or do we eventually accept that innovation is necessary and smoothly work towards it", according to ICP-3, which calls on the IETF for experimentation? Again, the IETF does not respond. Fixing the broken architecture is the only way to fix the broken clients, unless you put a cop behind every user (and that will not work, because there will be too many cops plus too many clients).

Take care.
jfc


_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


