At 05:02 24/11/02, Michael Froomkin - U.Miami School of Law wrote:
> The issue is less the size of the file than the problem of updating many
> copies of it reliably. The root server operators find it a challenge to
> assure that even the modestly sized root zone file is correctly distributed
> to all root servers accurately and in a timely fashion.
Are there statistics on this? Certainly the published information I've seen
is more of the self-congratulatory variety.
This is why the only long-term viable solution is to have the root file used
by a root server - or by a small group of root servers - rebuilt
asynchronously by its operator from the authoritative data of the TLD
managers, and to have those copies mutually cross-checked for consistency
across root server systems. Obviously this means treating the Internet as a
distributed network of cooperating - or even concerting - (rather than
coordinated) systems, which is probably not the network subsidiarity culture
that the IETF and ICANN share as yet. Maybe an appropriate analysis of the
requirements for real and stable global security will change that?
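The cross-checking step described above can be sketched in a few lines. This is
a minimal illustration, not any operator's actual procedure: it assumes each
root server's copy of the zone file is available as text, canonicalizes it, and
flags the servers whose copy diverges from the majority digest. All names here
(`zone_digest`, `cross_check`, the server labels) are hypothetical.

```python
import hashlib
from collections import Counter

def zone_digest(zone_text: str) -> str:
    """Canonicalize a zone-file copy (drop comments and blank lines,
    sort records) and return a SHA-256 digest for cheap comparison."""
    records = sorted(
        line.strip()
        for line in zone_text.splitlines()
        if line.strip() and not line.lstrip().startswith(";")
    )
    return hashlib.sha256("\n".join(records).encode()).hexdigest()

def cross_check(copies):
    """Compare the copies held by each server (name -> zone text) and
    report which servers diverge from the majority digest."""
    digests = {server: zone_digest(text) for server, text in copies.items()}
    majority, _ = Counter(digests.values()).most_common(1)[0]
    return {
        "majority_digest": majority,
        "divergent": sorted(s for s, d in digests.items() if d != majority),
    }

copies = {
    "a.example": ". 86400 IN NS a.root-servers.net.",
    "b.example": ". 86400 IN NS a.root-servers.net.",
    "c.example": ". 86400 IN NS stale.example.",  # lagging or corrupted copy
}
print(cross_check(copies)["divergent"])  # → ['c.example']
```

A majority vote is the simplest consistency rule; a real scheme among
mutually cross-checking operators would presumably also verify each copy
against the TLD managers' authoritative data rather than trust the majority
alone.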