On 8/7/02, JFC (Jefsey) Morfin wrote:
This is a very interesting comment. Actually, what you call
the "root" here is the master file.
1. the data of this master file must be collected
2. that master file must be generated
3. it must be loaded into the alpha server
4. it must stay uncorrupted in the alpha server
5. the alpha server must stay in operations
6. it must be disseminated to the other root servers
7. it must stay uncorrupted in each server
8. the servers must stay in operation in large enough
numbers (nine right now?)
9. it must be served in responses to resolvers
a. connectivity and delays to the resolvers must be kept
acceptable
b. the global demand load must be matched by the root servers
c. all this under any circumstances: incidents, war,
catastrophe, development, new technologies
d. in ways matching 189 local national laws and governmental
requirements
e. through the evolution I suggested towards DNS2 and DNS+
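The pipeline in steps 1-8 can be sketched as a toy model (all names
and data here are illustrative, not real DNS software or real
delegations): collect the TLD data, generate the master file, load it
on the alpha server, and disseminate verified copies to the secondary
root servers.

```python
# Toy sketch of the master-file pipeline from steps 1-8.
# Hypothetical names; entries are illustrative only.
import hashlib

def generate_master_file(tld_data: dict) -> str:
    # Step 2: render collected TLD delegations as zone-file lines.
    return "\n".join(f"{tld}. IN NS {ns}." for tld, ns in sorted(tld_data.items()))

def checksum(zone: str) -> str:
    # Steps 4 and 7: a digest lets each server verify it holds an
    # uncorrupted copy of what the alpha server published.
    return hashlib.sha256(zone.encode()).hexdigest()

def disseminate(zone: str, servers: list) -> None:
    # Step 6: push the master file to every secondary root server,
    # verifying each copy against the alpha server's digest (step 7).
    digest = checksum(zone)
    for server in servers:
        server["zone"] = zone
        assert checksum(server["zone"]) == digest

tld_data = {"com": "a.gtld-servers.net", "org": "tld1.ultradns.net"}  # step 1
master = generate_master_file(tld_data)          # step 2
alpha = {"zone": master}                         # steps 3-5
secondaries = [{"zone": ""} for _ in range(8)]   # step 8: nine servers in all
disseminate(alpha["zone"], secondaries)          # step 6
```

The point of the digest is that corruption anywhere downstream (steps
4 and 7) is detectable without re-fetching the whole file.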
The mechanisms for distributing the information can, and
should, be distributed. In fact, since virtually all IP
hosts direct their DNS queries to a local DNS server, this
is already the case.
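A minimal sketch of why that is so (names are illustrative): nearly
every host asks a local caching resolver, and the root is consulted
only when the resolver's cache is cold, so load on the root is already
spread across the resolver population.

```python
# Toy caching resolver: only a cache miss ever reaches the root.
class CachingResolver:
    def __init__(self, root_zone: dict):
        self.root_zone = root_zone   # stands in for a root server
        self.cache = {}
        self.root_queries = 0        # how often the root is actually hit

    def lookup_tld(self, tld: str) -> str:
        if tld not in self.cache:
            self.root_queries += 1   # cache miss: ask the root once
            self.cache[tld] = self.root_zone[tld]
        return self.cache[tld]

resolver = CachingResolver({"com": "a.gtld-servers.net"})
for _ in range(1000):
    resolver.lookup_tld("com")
# 1000 client lookups, but the root answered only the first one.
```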
I do not believe there is any need to achieve five nines
availability on the capacity to add new TLDs.
High-availability, fault-tolerant updating is required of
the more popular TLDs themselves, not of the root. Unless I
am overlooking something, that solution is already feasible
without any protocol modifications.
Even for the TLDs, the availability requirements are
relatively low. The Internet could easily survive without
the ability to create new .coms for a few minutes a year.
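For scale, "a few minutes a year" is roughly what five-nines
availability permits; the arithmetic is simple:

```python
# Five-nines availability (99.999% uptime) allows about 5.26
# minutes of downtime per year.
minutes_per_year = 365.25 * 24 * 60          # 525,960 minutes
downtime = (1 - 0.99999) * minutes_per_year
print(f"{downtime:.2f} minutes/year")        # prints: 5.26 minutes/year
```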