
Re: Root Anycast

2004-05-18 15:48:10
On 18 May 2004, at 2:16, Paul Vixie wrote:

Unicast: A, E, H, L
Anycast: B, C, D, F, G, I, J, K, M (now or planned)

The thing that worries me is that apparently, there is no policy about
this whatsoever, the root operators each get to decide what they want
to do.

The table is round. Policies are discussed as a group but set individually. The result is a service which has never been "down hard", not ever, not for any millisecond out of the last 15 years. This is "strength by diversity."

I applaud diversity. The point of my message was that it would be harmful if all root servers acted alike and therefore suffered from the same unreachability problems, as seen from some given place in the net, whenever there is an outage.

However, diversity and lack of policy aren't the same thing. If the root operators didn't communicate, it would be possible for all of them to select the same DNS server software. That would be a bad thing. The anycast issue is similar.

The fact that .org is run using only two anycast addresses also indicates
that apparently ICANN doesn't feel the need to throw its weight
around in this area.

Apparently you have your facts wrong about how much sway ICANN had over the anycasting of .ORG, but those details aren't mine to tell; let others speak.

I find this peculiar: ICANN imposes a heavy set of requirements on people who want new TLDs, so why can't it do the same for existing ones? (This is a rhetorical question.)

Now obviously anycasting is a very useful mechanism to reduce latency
and increase capacity and robustness. However, these properties are
best served by careful design rather than organic growth.

Careful design by whom?

Anyone.

Organic compared to what?

Compared to purposeful design.

I assure you that f-root has grown by careful design.

I know the root operators are very responsible people and do a great job.

However, I'm not convinced that 13 great parts automatically make a great whole.

If we consider the number of actual servers/server clusters and the
number of root IP addresses as given, there are still many ways to skin
this cat. One would be to run 12 unicast servers and anycast just one
address.

Who is "we", though?  That's always the excluded middle of this debate.

I didn't have a specific "we" in mind, but let's make it the internet engineering community. I'm not saying we should make these decisions during the IETF plenary (or on this mailing list, for that matter), but I do believe someone has to do something. The strength of the IETF is that its results generally reflect the collective wisdom and support of the participants. (The weakness is that many great things that don't have this support don't get done, but that's another issue.)

It seems to me that any design that makes the root addresses seem as
distributed around the net as possible would be optimal, as in this
case the chances of an outage triggering rerouting of a large number of
root addresses are as small as possible. In order to do this, the number
of root addresses that are available within a geographic region (where
"region" < RIR region) should be limited.

In counterpoint, it seems to me that any unified design will make the system
subject to monoculture attacks or ISO-L9 capture, and that the current
system which you call "unplanned and organic" (but which is actually just
"diversity by credo") yields a stronger system overall.

I was only talking about a design for root anycasting. I'm sorry if this wasn't clear.
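To make the kind of constraint I have in mind concrete, here is a rough sketch in Python. All the placement data and the per-region limit below are invented for illustration; who would actually set such a limit is exactly the policy gap I'm pointing at.

    # Sketch: check that a single regional outage can only withdraw a
    # limited number of the 13 root addresses at once. The instance
    # placements and the limit are hypothetical.
    MAX_LETTERS_PER_REGION = 4

    # hypothetical mapping: region -> root letters with an instance there
    instances = {
        "us-east": {"A", "C", "D", "J"},
        "us-west": {"B", "E", "L"},
        "europe":  {"F", "I", "J", "K", "M"},
        "asia":    {"F", "I", "M"},
    }

    def overloaded_regions(instances, limit):
        """Regions where one outage would reroute more than `limit`
        root addresses."""
        return dict((region, sorted(letters))
                    for region, letters in instances.items()
                    if len(letters) > limit)

    for region, letters in overloaded_regions(instances,
                                              MAX_LETTERS_PER_REGION).items():
        print("%s hosts %d of the 13 root addresses (%s), over the limit of %d"
              % (region, len(letters), ", ".join(letters),
                 MAX_LETTERS_PER_REGION))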

(Just having the roots close is of little value: good recursive servers
home in on the one with the lowest RTT anyway, so having one close by
is enough. However, when this one fails it's important that after the
timeout, the next root address that the recursive server tries is
highly likely to be reachable, in order to avoid stacking timeout upon
timeout.)
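To spell out the stacked-timeout arithmetic, here is a toy model in Python; the server names, RTT figures, and timeout value are all invented:

    # Toy model of RTT-based server selection with fallback. Every
    # unreachable address on the way down the list costs a full
    # timeout before the next one is tried.
    TIMEOUT = 3.0  # seconds spent before giving up on a dead address

    rtts = {"a.root": 0.012, "k.root": 0.025, "m.root": 0.180}
    reachable = {"a.root": False, "k.root": True, "m.root": True}

    def query_roots(rtts, reachable):
        """Try addresses in order of lowest RTT; return the answering
        server and the total time spent, stacked timeouts included."""
        elapsed = 0.0
        for server in sorted(rtts, key=rtts.get):
            if reachable[server]:
                return server, elapsed + rtts[server]
            elapsed += TIMEOUT
        return None, elapsed

    server, cost = query_roots(rtts, reachable)
    print("answered by %s after %.3f seconds" % (server, cost))
    # With a.root down, the query pays one timeout and succeeds at
    # k.root. If k.root had been withdrawn by the same outage, a second
    # timeout would stack on top before m.root answered.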

What would help overall DNS robustness would be if more DNS clients used
recursion,

This is certainly true, but I find it unfortunate that you skip over my point. It's easy to claim 100% uptime when sitting 10 meters from a server, but it's what real users see that counts. From that viewpoint, the root servers have certainly experienced service-level degradations in the past 15 years. Good anycast design, rather than just letting it happen, can minimize this in the future.

and cached what they heard (both positive and negative). A frightfully
large (and growing) segment of the client population always walks from
the top down (I guess these are worms or viruses or whatever) and
another growing/frightful segment asks the same question hundreds of
times a minute and doesn't seem to care whether the response is
positive or negative, only that it has to arrive so that the
(lockstepped) next (same) query can be sent.
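The caching Paul asks for is cheap to get right. As a toy sketch in Python (not modeled on any particular resolver), both positive answers and negative results carry a TTL, so even the same question asked hundreds of times a minute would be absorbed locally:

    import time

    class TinyCache:
        """Toy client-side cache holding both positive answers and
        negative (NXDOMAIN) results until their TTL expires."""
        def __init__(self):
            self.entries = {}  # name -> (answer or None, expiry)

        def lookup(self, name):
            entry = self.entries.get(name)
            if entry is not None and entry[1] > time.time():
                return entry  # (None, expiry) is a cached negative answer
            return None

        def store(self, name, answer, ttl):
            # answer is None for a negative result; cache it all the same
            self.entries[name] = (answer, time.time() + ttl)

    cache = TinyCache()
    cache.store("no-such-host.example", None, 900)  # negative, 15 minutes
    for _ in range(3):
        # all three repeats are answered from the cache; nothing is sent
        print(cache.lookup("no-such-host.example"))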

I feel your pain, but unfortunately the IETF isn't in a position to do anything about bad protocol implementations, even the ones that aren't part of software that's illegal in most jurisdictions in the first place.

Still, this problem once again proves my point that we really need a mechanism that allows receivers to stop malicious traffic from reaching them.



