It's interesting to me that the largest PKI in the world does not take
the approach of using large-transaction servers. We should go into
this with our eyes open because what we expect to be true in some cases
may in fact be beside the point. Revocation schemes that scale to
the numbers we are discussing typically neither mirror the DNS nor
require large server farms - I'm speaking of the subset difference
algorithm and others like it. In many ways, the PKI we're discussing
is more like the DTLA than an X.509 CA.
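To make the contrast concrete, here is a toy sketch of the idea behind tree-based revocation. Subset difference itself is more involved; this shows the simpler complete-subtree variant, where the non-revoked leaves of a binary tree are covered by a small set of clean subtrees, so revocation is a short broadcast rather than a per-query server transaction. All names here are illustrative, not any deployed system's code.

```python
# Toy complete-subtree cover: find the maximal subtrees of a binary
# tree of 2^depth leaves that contain no revoked leaf. Subset
# difference is a refinement of this with smaller covers.

def cover(node_depth, node_index, depth, revoked):
    """Return maximal clean subtrees as (depth, index) pairs."""
    span = 1 << (depth - node_depth)   # number of leaves under this node
    lo = node_index * span
    if not any(lo <= r < lo + span for r in revoked):
        return [(node_depth, node_index)]   # whole subtree is clean
    if node_depth == depth:
        return []                           # a revoked leaf itself
    return (cover(node_depth + 1, 2 * node_index, depth, revoked)
            + cover(node_depth + 1, 2 * node_index + 1, depth, revoked))

# 8 leaves (depth 3); revoke leaves 2 and 5.
print(cover(0, 0, 3, {2, 5}))
```

With two leaves revoked out of eight, the cover is four subtrees, matching the O(r log(n/r)) bound that makes the broadcast small.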
Mark
On Mar 4, 2005, at 5:05 AM, Hallam-Baker, Phillip wrote:
On Fri, 25 Feb 2005, Hallam-Baker, Phillip wrote:
The OCSP infrastructure being deployed already injects over half a
million OCSP status values into ATLAS.
I'd be interested to know how well that would scale up by a
factor of about 1000, to half a billion.
ATLAS already handles the records for 50 million DNS names; it was
designed to scale to at least 10 billion.
And there is absolutely no reason why everyone would have to use the
same server. The OCSP problem has a trivial parallel decomposition.
Google and VeriSign prove every day that scale is not a barrier. The
main difference between the systems is the priority given to
reliability. Google have a higher volume requirement; ATLAS has a
higher reliability and robustness requirement.
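The "trivial parallel decomposition" claimed above can be sketched in a few lines: OCSP status lookups are independent per certificate, so they shard cleanly by serial number with no coordination between responders. The server names below are invented for illustration.

```python
# Hypothetical sharding of OCSP lookups across responders by hashing
# the certificate serial number; any stable hash gives a deterministic
# assignment, so responders scale horizontally.
import hashlib

SERVERS = ["ocsp-0.example.net", "ocsp-1.example.net",
           "ocsp-2.example.net", "ocsp-3.example.net"]  # invented names

def responder_for(serial: int) -> str:
    """Pick a responder deterministically from the serial number."""
    h = hashlib.sha256(str(serial).encode()).digest()
    return SERVERS[h[0] % len(SERVERS)]

print(responder_for(0x12345678))
```

Each serial always maps to the same responder, so adding capacity is just a matter of growing the server list and redistributing the key space.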