At 06:27 PM 10/24/2004 -0700, Cullen Jennings wrote:
> Hmm, it is interesting whether caches would be implemented or not and, more
> specifically, how big they would need to be, but just for a second, let's
> assume they would be implemented. I'm thinking about the following attack on
> any sort of server where a MASS-like scheme checks if signatures are valid.
> I'm trying to think about the caching properties of the signing credentials
> - obviously the selection of the namespace of the things that sign (as
> opposed to the namespace of the identities in the email system) has a big
> impact on how they cache.
> Consider an attacker that harvested, say, 10^4 email addresses from the
> cisco.com domain. The attacker then generated 10^4 fake emails from each of
> these users and sent them to 10^3 email lists. This attack could be done in
> a few minutes from a compromised typical web server. Each of these lists
> multiplies the message to 10^3 different people, and I make the big
> assumption that these lists were well chosen on different topics such that
> only 10% of the people were on more than one of the lists.
Just to clarify, you mean "10^4 fake emails, 1 from each of these users",
right? Otherwise the math doesn't sync up for me.
If the attacker wants only to load down the key verification infrastructure,
they don't even need to harvest email addresses. They only need to send 10^4
fake emails, each with an individual signing key. Depending on the
implementation of the verifier, it may not even be necessary to generate a
valid signature at all.
> This leads to 10^4*10^3*10^3*0.1 = 10^9 hits on the server over a few
> minutes. This may be no big deal; it is only a magnification of 100 over the
> attacker's requests. However, any magnification at all is concerning.
Not sure about that last factor of 0.1. Based on your assumptions, I think it
may be 0.9 (if 10% were on more than one list, then 90% were on only one list).
But that's not right either, since we don't know how many lists that 10% were on.
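Either way, the arithmetic is easy to spell out (a back-of-the-envelope sketch using the figures from the scenario above; both readings of the overlap factor are shown):

```python
fake_emails = 10**4   # forged messages injected (one per harvested address)
lists = 10**3         # mailing lists each message is sent to
subscribers = 10**3   # recipients per list

attacker_sends = fake_emails * lists        # 10^7 messages the attacker emits
deliveries = attacker_sends * subscribers   # 10^10 before any de-duplication

for factor in (0.1, 0.9):
    hits = deliveries * factor
    print(f"factor {factor}: {hits:.0e} hits, "
          f"magnification {hits / attacker_sends:.0f}x")
```

With the 0.1 factor this reproduces the 10^9 hits and 100x magnification above; the 0.9 reading gives 9x10^9 and 900x.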
But what's important isn't how many individual addresses receive the emails,
but how many verifiers there are. I would expect that for each instance of a
message sent to a mailing list, all the messages for a given recipient domain
would be sent once with multiple envelope-to addresses (doesn't even require
caching). The 10^3 mailing lists probably have addresses which overlap domains
as well, which is where caching is a benefit assuming that the messages hit the
same verifier in the receiving domain (sometimes they will, sometimes they won't).
We don't have enough data to figure out whether the result is still an attack
magnification or not.
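The verifier-count point can be sketched quickly (the recipient addresses here are hypothetical; the assumption is one signature check per receiving domain, not per mailbox, when envelope-to addresses are batched):

```python
from collections import Counter

# Hypothetical expansion of one forged message's recipient lists.
recipients = [
    "alice@example.com", "bob@example.com",    # same domain: one verifier hit
    "carol@example.net",
    "dave@example.org", "erin@example.org",
]

# Group by recipient domain: one delivery (and one signature check)
# per receiving domain when envelope-to addresses are batched.
domains = Counter(addr.rsplit("@", 1)[1] for addr in recipients)
print(f"{len(recipients)} addresses -> {len(domains)} verifier hits")
# -> 5 addresses -> 3 verifier hits
```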
> Does anyone have an idea what sort of rate of hits happens when a server
> gets "slashdotted"? Clearly many servers can't survive that, yet many can.
An important distinction is that it should be OK to delay messages by a matter
of minutes in such cases, either queuing the message internally or temp-failing
if signatures are checked during message receipt.
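A minimal sketch of that "defer rather than drop" behavior, assuming a token-bucket budget on signature verification (the class and numbers are invented for illustration; a real MTA would express the deferral as an SMTP 4xx temporary failure or internal queuing):

```python
import time

class VerifyBudget:
    """Toy token bucket: verify while the budget lasts, defer otherwise."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.burst = float(burst)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def admit(self):
        now = time.monotonic()
        # Refill tokens at the configured rate, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True    # verify the signature during receipt
        return False       # queue internally or temp-fail; sender retries later

budget = VerifyBudget(rate_per_sec=100, burst=10)
results = [budget.admit() for _ in range(1000)]
print(sum(results), "verified now,", results.count(False), "deferred")
```

Under a burst of forged mail, most messages are merely delayed a few minutes rather than overwhelming the verifier.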
> This attack makes me wonder if we want to consider schemes where one of two
> things is done:
> 1) signing keys are per domain instead of per user, so they cache more easily
IIM keys can be per domain, but what matters is the number of keys the attacker
generates for us to verify, not the number of actual keys that are authorized.
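That distinction shows up directly in cache behavior. A toy sketch (the LRU cache, key names, and counts are all invented for illustration): a per-domain key gives a near-perfect hit rate for legitimate mail, but an attacker who mints a fresh key name per message misses on every lookup no matter how the legitimate namespace is chosen.

```python
from collections import OrderedDict

class KeyCache:
    """Toy LRU cache of fetched signing keys, keyed by (key_name, domain)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = 0
        self.misses = 0

    def lookup(self, key_name, domain):
        k = (key_name, domain)
        if k in self.entries:
            self.entries.move_to_end(k)   # refresh LRU position
            self.hits += 1
            return
        self.misses += 1                  # stands in for an expensive fetch
        self.entries[k] = "key-record"
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)

cache = KeyCache(capacity=10_000)

# Legitimate mail: 10^4 messages, one per-domain key -> one miss, then hits.
for _ in range(10_000):
    cache.lookup("mail", "cisco.com")

# Attack mail: 10^4 messages, a fresh key name each -> a miss every time.
for i in range(10_000):
    cache.lookup(f"fake-{i}", "attacker.example")

print(f"hits={cache.hits} misses={cache.misses}")  # hits=9999 misses=10001
```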
> 2) the signing public keys, and some way to trace back the chain of trust,
> are carried with the message
The chain of trust in IIM is DNS, with obvious trust limitations. Absent
DNSSEC, I'm not sure what you would sign it with.
> Perhaps there are other ways to deal with this issue or perhaps it is not a
> problem at all.