ietf-dkim

Re: [ietf-dkim] Proposal for specifying syntax and semantics for multiple signatures

2006-04-04 19:08:04

On Apr 4, 2006, at 10:14 AM, Stephen Farrell wrote:
Douglas Otis wrote:
On Apr 4, 2006, at 8:44 AM, Dave Crocker wrote:
Douglas Otis wrote:
Sorry, I still don't understand what the purpose or impact of this attack is. Can you explain?

An attack may be enabled by replaying a message compromised due to a weak hash, key, or canonicalization algorithm.

You didn't answer his question (or, by derivation, mine.)

DKIM can establish a trust relationship between the signing-domain and the recipient. Exploiting that trust relationship can both defraud the recipient and damage the trust the signing-domain has built up. If an exploit becomes a problem, both parties should be able to upgrade quickly and regain protection.

The message may have been one from a financial institution asking the recipient to check their account and offering a helpful login link. The recipient might trust this link when led to understand that this domain signs its messages and that their MDA/MUA places non-compliant messages into the spam folder.

Nor can I see what this has to do with removing one of a bunch of signatures.

Conventional wisdom and practical experience suggest it is not readily practical to exploit the SHA-1 hash, at least not across major portions of a message. Whether the known weakness permits the alteration of a few characters within a message is unknown. Even a small modification could permit exploits that take advantage of the trust established. With email, bad actors may also have access to a corpus of validly signed messages, which might further weaken the protection achieved. Protocol semantics should assume failure scenarios. There are possible failure modes within the key algorithm and the canonicalization as well, although this is not to suggest these algorithms are readily exploitable today. Simply imagine that a future exploit is within the realm of possibility.

While DKIM's great strength is the scalability of cryptographic algorithms requiring little interaction, its protection does not extend to the message routing information. Of course, the ability to replay messages is a major risk factor. Replay allows a single exploit to be magnified. It is not practical to filter on the signature, as messages sent to a list may generate tens of thousands of messages with the same signature and From header. :(

Also imagine that in a few years messages will not be accepted by major ISPs without a DKIM signature, and that by then every MUA and MTA handles messages according to their signing-domain. At some point, people will once again consider links in their financial institution's messages safe to click. While an expectation of trust is the goal of DKIM, trust also creates a target.

Imagine that hidden domain names in links can be altered using some new technique that exploits a lossy property of the hash. (This is only speculation.)

As reports of this exploit become known, financial institutions wishing to retain trust adopt a stronger hash function, but must place two signatures on their messages to ensure acceptance. Alas, their customers will not benefit from this improvement until all messages using the new algorithm are accepted and all messages using the older algorithm are refused.
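For illustration, such a transition message would carry two signatures over the same content, roughly along these lines (the domain, selector names, and elided values are placeholders only, not a proposal):

DKIM-Signature: v=1; a=rsa-sha256; d=bank.example; s=sel-new;
    c=relaxed/simple; h=from:to:subject:date; bh=...; b=...
DKIM-Signature: v=1; a=rsa-sha1; d=bank.example; s=sel-old;
    c=relaxed/simple; h=from:to:subject:date; bh=...; b=...

A verifier that only implements rsa-sha1 still finds a signature it can check, while one that implements rsa-sha256 can ignore the weaker signature entirely.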

---
To bridge the gap between the exploit and a universal transition to the new algorithm, there must be a means to determine whether a bad actor has issued a message sans the stronger algorithm. Mark keys primary/secondary. Accept no message carrying only the secondary signature/key. If only the secondary algorithm is known by the verifier, security can be increased by also checking that the sender offers keys for the unknown primary algorithm. (A rough sketch of this check follows below.)
---
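Here is that acceptance rule sketched in Python. The "role" flag, the record layout, and the verify() callback are illustrative only; they are not proposed DKIM tag names:

def accept(signatures, published_key_roles, known_algorithms, verify):
    """signatures: dicts with 'algorithm' and 'role' ("primary"/"secondary")
    published_key_roles: roles of the key records the signing-domain publishes
    known_algorithms: algorithms this verifier implements
    verify: callback that cryptographically checks one signature"""
    primaries = [s for s in signatures if s["role"] == "primary"]
    secondaries = [s for s in signatures if s["role"] == "secondary"]

    # Accept no message that carries only the secondary signature.
    if not primaries:
        return False

    # If this verifier implements the primary algorithm, check it directly.
    if any(s["algorithm"] in known_algorithms for s in primaries):
        return any(verify(s) for s in primaries
                   if s["algorithm"] in known_algorithms)

    # Primary algorithm unknown here: fall back to the secondary signature,
    # but also confirm the domain publishes a key for the primary algorithm.
    return (any(verify(s) for s in secondaries
                if s["algorithm"] in known_algorithms)
            and "primary" in published_key_roles)

The last branch lets a verifier that only implements the older algorithm at least confirm that the domain really does publish a key for the stronger algorithm it cannot check.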

Paul's idea seems more complex and arguably fragile. The weaker algorithm must encompass the stronger, unknown algorithm; it makes no sense to have the stronger, unknown algorithm encompass the weaker signature. This strategy makes an assumption about the nature of the exploit. Using primary/secondary flags and consistent algorithm designations ensures a quick and orderly repair always remains practical. That property does not seem assured by the scheme Paul suggested.
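To make clear what I mean by "encompass": as I read the suggestion, the older signature would have to cover the newer DKIM-Signature header, roughly like this (purely illustrative placeholders):

DKIM-Signature: v=1; a=rsa-sha256; d=bank.example; s=sel-new;
    h=from:to:subject:date; bh=...; b=...
DKIM-Signature: v=1; a=rsa-sha1; d=bank.example; s=sel-old;
    h=from:to:subject:date:dkim-signature; bh=...; b=...

The rsa-sha1 signature lists dkim-signature in its h= tag, so the weaker signature covers the stronger one rather than the other way around.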

-Doug
_______________________________________________
NOTE WELL: This list operates according to http://mipassoc.org/dkim/ietf-list-rules.html
