--- George Gross <gmgross(_at_)nac(_dot_)net> wrote:
> I think this deserves a bit more explanation, since PK crypto signing and
> validation are so CPU intensive.
> YMMV. Consider that the e-mail signature's "lifetime" is on the order of
Quite so. The variables are just too great to extrapolate or generalize. The
bulk senders of the world that want to send customized emails are probably the
most CPU-squeezed of all, while the average SME that processes fewer than 20K
emails per day probably has CPU to burn. My experience with basic MTA
installations is that simple, straightforward mail installations are more
I/O-intensive than CPU-intensive, and it is usually I/O or memory constraints,
rather than CPU constraints, that cause systems to expand.
Having said that, in recent years a number of anti-spam technologies have
become more CPU-intensive than a typical RSA operation. Consider the cost of
statistical filtering, SpamAssassin, or DCC, for example.
The reason for mentioning this is that a lot of people will want to know the
incremental cost, and that is going to vary hugely relative to the other costs
they currently incur for each message.
As if that's not enough variability, there is the likely rate of adoption
measured against Moore's law. What costs $50K in today's money, when adoption
is zero, may well be only $50/4K by the time adoption is high.
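To make the Moore's-law point concrete, here is a rough back-of-envelope
calculation. The 18-month performance-per-dollar doubling period and the $50K
starting figure are purely illustrative assumptions, not numbers from any
measurement:

```python
# Back-of-envelope: if crypto throughput per dollar doubles every ~18 months
# (a loose Moore's-law assumption), the hardware cost of a fixed signing
# workload shrinks quickly over a multi-year adoption ramp.
def projected_cost(cost_today, years, doubling_period=1.5):
    """Cost of the same crypto throughput after `years`, assuming
    performance per dollar doubles every `doubling_period` years."""
    return cost_today * 0.5 ** (years / doubling_period)

for years in (0, 3, 6):
    print(f"year {years}: ${projected_cost(50_000, years):,.0f}")
# year 0: $50,000
# year 3: $12,500
# year 6: $3,125
```

The point is only that the incremental cost quoted today overstates the cost
at the adoption levels where it would actually matter.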
As was pointed out in another post, an installation that is amenable to
specialized h/w can probably achieve much greater savings per RSA than an
installation that is not so amenable.
I suspect that a table of raw CPU costs for given message sizes is probably the
best empirical data we can ever provide with some random data points of varying
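As a sketch of how such raw CPU-cost data points might be gathered, here is a
rough benchmark. It stands in for an RSA private-key operation with a bare
modular exponentiation (the dominant cost of signing); the 1024-bit size,
hash choice, and message sizes are illustrative assumptions, not figures from
this thread:

```python
# Rough sketch: estimate per-message signing cost as hash-the-body plus one
# RSA-style private-key modular exponentiation. Key size and message sizes
# are illustrative assumptions only.
import hashlib
import secrets
import time

BITS = 1024
# Stand-ins for an RSA private key: one modular exponentiation with a
# full-size exponent dominates the cost of producing a signature.
n = secrets.randbits(BITS) | (1 << (BITS - 1)) | 1  # odd, full-length modulus
d = secrets.randbits(BITS) | 1                      # pretend private exponent

def sign_cost(msg_size, reps=20):
    """Average seconds to hash a msg_size-byte body and do one modexp."""
    body = b"x" * msg_size
    start = time.perf_counter()
    for _ in range(reps):
        digest = int.from_bytes(hashlib.sha256(body).digest(), "big")
        pow(digest, d, n)  # the expensive private-key step
    return (time.perf_counter() - start) / reps

for size in (1_000, 10_000, 100_000):
    print(f"{size:>7} bytes: {sign_cost(size) * 1e3:6.2f} ms/message")
```

On most hardware the modexp swamps the hashing until messages get large,
which is consistent with the observation above that message size is only one
of the variables.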
Out of curiosity, was any of this ever done in the S/MIME world, such that we
can re-use their data or methodology?