From: Stephen Farrell [mailto:stephen(_dot_)farrell(_at_)cs(_dot_)tcd(_dot_)ie]
Hi Phill,
Hallam-Baker, Phillip wrote:
The sequence of events hypothesized is:
1) Sender determines that the existing algorithm is deprecated
2) In response to (1) sender prepares to support an additional
signature algorithm
3) In order to support (2) sender publishes an additional
key record
for the new algorithm
4) Mallet starts sending bogus messages with forged signatures
purporting to be under the new key
5) Receivers that have not yet upgraded to support the new
algorithm are unable to determine that the messages with
forged signatures are inconsistent with the signer's policy.
I think that that is clear. However, what do you say to the
fact that anyone can produce a message with a bad signature
that does adhere to such a policy? (By looking up the policy
or copying a real example.)
The two examples you give are entirely different.
Copying a real example of a message is a replay attack. That is not a major
concern to me since it is hard to see how a replay attack provides real
leverage for Internet crime. A replay attack does not allow the sender to
propagate a virus, a phishing lure or peddle pills.
Further, the argument you make with respect to replay attacks applies equally to policy, and so is irrelevant in the context of 1368. The point at issue here is HOW to
implement policy. Raising cases that are not addressed by signatures only
confuses people when we are discussing policy.
The second issue is more complex and it is the issue that we are trying to
eliminate. The attacker can look up the policy and create a message with forged
signatures that appears to comply with the policy. The point is that the
attacker can only get away with it if the receiver is unable to verify any of
the signatures that are necessary to comply with the policy.
The attacker only needs to create one forged signature to comply with 'I sign
everything', and can choose an algorithm that the receiver is known to be
unlikely to support. So advertising a key for any algorithm that the receiver
does not support renders the policy useless.
If the policy states 'I always sign with a key in group X', where all the keys
in group X use widely supported algorithms, then the attacker can still create a
message with a fake signature header, but the recipient is always able to check
the signature, determine that it is fake, and treat the message as unsigned,
which in this case means not compliant with policy.
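As a sketch of that check (Python, with hypothetical names; each signature is modeled as an (algorithm, verifies_ok) pair rather than a real DKIM header), the group-X policy always yields a definite verdict when every algorithm in X is one the receiver supports:

```python
# Hypothetical sketch: policy is "I always sign with a key in group X",
# and every algorithm in group_x is mandatory-to-implement, so the
# receiver can always run the cryptographic check and never gets stuck
# on an unknown algorithm.

def complies_with_policy(signatures, group_x):
    """signatures: list of (algorithm, verifies_ok) pairs, where
    verifies_ok is the outcome of the cryptographic verification.
    Returns True only if some signature from group X actually verifies;
    a forged signature fails verification, so the message is treated
    as unsigned and hence non-compliant."""
    return any(ok for alg, ok in signatures if alg in group_x)
```

A forged header claiming an algorithm in X still fails verification, so `complies_with_policy([("rsa-sha256", False)], {"rsa-sha256"})` is False and the message is handled as unsigned.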
Given that problem I really don't see what the receiver can
confidently do with such a message that differs from handling
a totally unsigned message. Can you provide some examples?
If I am a receiver and I get a message 'from' Paypal:
Case 1: policy at Paypal is 'I always sign'
1) If any signature under the paypal.com domain is valid the message is
accepted.
2) If there is no signature whatsoever the message goes in the bit bucket
3) If there is any signature where the corresponding key record is for an
algorithm that is not understood then process the message using content
filtering
Case 2: Policy at Paypal is 'I always sign with at least A'
1) If any signature under the paypal.com domain is valid the message is
accepted.
2) If there is no signature whatsoever the message goes in the bit bucket
3) If there is no signature with A the message goes in the bit bucket
4) If there was a signature with a key record in A for an algorithm that is not
understood then process the message using content filtering.
Since Paypal will choose A from the set of 'must support' algorithms, condition
4 is never met. So in Case 2 the outcome is always ACCEPT or REJECT; it is
impossible to arrive at MAYBE.
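The two case analyses above can be sketched as decision functions (a Python sketch with hypothetical names, again modeling each signature as an (algorithm, verifies_ok) pair; not real DKIM processing):

```python
def case1(signatures, supported):
    """Policy: 'I always sign'."""
    if not signatures:
        return "REJECT"            # (2) no signature whatsoever
    if any(ok for alg, ok in signatures if alg in supported):
        return "ACCEPT"            # (1) some signature verifies
    if any(alg not in supported for alg, ok in signatures):
        return "MAYBE"             # (3) unknown algorithm: content filtering
    return "REJECT"                # every signature checked and found fake

def case2(signatures, supported, group_a):
    """Policy: 'I always sign with at least A', where A is drawn from
    the must-support set, so the MAYBE branch can never be reached."""
    in_a = [(alg, ok) for alg, ok in signatures if alg in group_a]
    if not in_a:
        return "REJECT"            # (2)/(3) no signature with A
    if any(ok for alg, ok in in_a if alg in supported):
        return "ACCEPT"            # (1) a signature in A verifies
    if any(alg not in supported for alg, ok in in_a):
        return "MAYBE"             # (4) unreachable when A is must-support
    return "REJECT"                # all signatures in A are fake
```

Under case 1, an attacker forging a signature with an exotic algorithm forces the MAYBE (content-filtering) outcome; under case 2 the same forgery is simply rejected.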
The point is that for this particular sender, which is the target of a
significant proportion of phishing attacks, case 1 results in a policy where an
attacker can force recourse to content filtering, while in case 2 THE ATTACKER
CANNOT force this recourse.
Paypal is a sufficiently significant example to justify special coding on its
own but there are plenty of banks etc where it is possible to determine from
reputation data that strict policy processing is appropriate.
We simply don't care about the remailer issues in these cases. If Paypal
employees want to send messages to such lists they can work out how to do so.
It is a non-problem. The problem here is phishing and the huge costs of
phishing spam.
I have deliberately chosen the all or nothing case here but policy still allows
bright lines to be applied. If a message is inconsistent with policy the number
of possibilities is limited to:
1) The signer has a misconfiguration
2) The receiver has a misconfiguration
3) The message is bogus
4) The message is legitimate but was modified by a remailer of some form
Now as far as I am concerned, cases 1, 2 and 3 mean send it to the bit bucket.
If someone has misconfigured their mail service then their mail gets lost; they
need to fix it. You lose mail if your SMTP service is misconfigured, and the
same goes for DKIM. If you don't want that processing then you don't advertise
'always signs with DKIM' as policy.
Case 4 is a significant loophole, but certainly not equivalent to 'anything
goes' as some claim. If someone is remailing messages to me, that should only be
because I have asked them to. Otherwise it's spam and should go in the bit
bucket.
_______________________________________________
NOTE WELL: This list operates according to
http://mipassoc.org/dkim/ietf-list-rules.html