ietf-dkim

Re: Additional lookups (was Re: [ietf-dkim] Re: 1368 straw-poll)

2007-02-28 06:27:02
Jim Fenton wrote:

> The related attack would be as follows: A signer might deploy a brand-new signature algorithm (I'll call it N) that very few verifiers can yet handle. The attacker sees this, and starts forging messages with invalid signatures referencing the N selector, in hopes that some verifiers will accept the message on the premise that they're just not "smart enough" to verify using N yet. If this were to happen, it would introduce a problem getting signers to start using new algorithms.

> Of course a solution to this is to make sure all verifiers treat signatures they can't verify (as well as those that fail verification) as if the signatures aren't there. But perhaps some of the verifiers won't do this, because they don't want to risk adverse consequences from taking action against such messages.

Don't you see why?

Failed Signature Unsigned Status Promotion (FSUSP) is the easy way out for any failure reason, and it also makes it easy to exploit. FSUSP also tells the bad guys:

    "Huh? There isn't a need to send signed messages, fake or otherwise.
    The verifier still needs to operate in 'legacy mode,' and if it
    supports FSUSP to the letter, it's going to ignore the invalidation
    anyway."

The general concern is related to FSUSP abuse.
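For concreteness, the FSUSP rule under discussion (a signature the verifier cannot verify, e.g. an unknown algorithm N, gets demoted to "no signature") might look like this hypothetical sketch. The tag parsing, status names, and supported-algorithm set are illustrative, not from any real DKIM library:

```python
# Hypothetical sketch of FSUSP: unverifiable and failed signatures are
# both demoted to "no signature." Names here are illustrative only.
SUPPORTED_ALGS = {"rsa-sha1", "rsa-sha256"}

def effective_status(sig_header, crypto_ok=False):
    if sig_header is None:
        return "NO_SIG"                        # truly unsigned
    # crude tag=value parse of a DKIM-Signature header field body
    tags = dict(t.strip().split("=", 1)
                for t in sig_header.split(";") if "=" in t)
    if tags.get("a") not in SUPPORTED_ALGS:
        return "NO_SIG"                        # unknown algorithm N: demoted
    return "VALID" if crypto_ok else "NO_SIG"  # failed check: also demoted
```

Note that a forged message carrying `a=new-algo-N` lands in exactly the same bucket as an honestly unsigned one, which is the abuse being described.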

> would require that SSP be consulted in all cases, because the attacker is able to forge a valid signature. However, I think this is adequately dealt with by the signing domain removing all selectors that reference the broken algorithm.

Again, this comes back to the idea that you wish to trigger FSUSP. It really doesn't matter how it is done; FSUSP is still the problem in my view.

> I am also concerned that this is a multi-dimensional problem which may lead to some very complex policy records. In addition to the signing and hash algorithms, the body and header canonicalization algorithms are likely to invite this sort of policy treatment. This would be easier for the attacker, since unlike the hash and signature algorithms, the acceptable canonicalization algorithms aren't called out in the key record (although an extension could be defined to do that).

> My overall inclination right now is that trying to distinguish a message that definitely fails SSP from a message that has an unknown SSP (because the verifier doesn't know how to verify it) is putting too fine a point on SSP. The "unknown SSP" case should just be treated as "fails SSP" and the signer should be cajoled into providing useful signatures.

I think the issue is overblown.

First, there needs to be a sense of "purpose" to even consider DKIM processing. So when we offer the switch in our mail processor setup:

   [_] Enable DKIM Verification

The F1 HELP has to provide the good reasons for this.

There are three basic outcomes with a message:

   VALID SIGNATURE
   INVALID SIGNATURE
   NO SIGNATURE

Of course, how this is handled is local policy. But INVALID status is not the same as NO SIGNATURE status. We can make it behave as DKIM-BASE recommends, but in practice it most likely will not be implemented 100% according to the spec's recommendation. There will be some "status" given to it.
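The point about local policy can be sketched as a simple disposition table. The status and disposition names are hypothetical; the point is only that INVALID keeps its own entry rather than being collapsed into NO SIGNATURE:

```python
# Illustrative local-policy table: INVALID stays distinct from NO_SIG
# instead of being merged the way FSUSP would merge them.
LOCAL_POLICY = {
    "VALID":   "accept",
    "INVALID": "quarantine",  # some status IS given to a failed signature
    "NO_SIG":  "continue",    # fall through to legacy (non-DKIM) filtering
}

def dispose(status):
    # unknown statuses get the conservative legacy treatment
    return LOCAL_POLICY.get(status, "continue")
```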

Now, we can augment it with a DOMAIN POLICY layer; for the sake of simplicity, let's call it SSP, but it can be anything, including White/Black/Grey Listing or even some "REPUTATION" system. In any case, it is a "HELPER."

Without the helper, we are left with the DKIM-BASE "good needle in the haystack" mode of operation, which will create a very high DKIM processing burden (based on the fundamental premise that there is always more bad mail than good coming in, a.k.a. "The Spam Problem - it's why we are here").

With an SSP helper, we can eliminate the OBVIOUS problems in the unknown wide spectrum of possible reasons for failure:

  - No Mail expected from this domain
  - Signature expected, but not provided
  - Signature not expected, but it was provided

These are the OBVIOUS ones and the ones I believe will be highly beneficial and most used.
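The three obvious checks above can be screened before any expensive crypto. A minimal sketch, assuming hypothetical SSP record values (the policy names are mine, not from any draft):

```python
# Sketch of the SSP "HELPER": screen the obvious mismatches between a
# hypothetical published policy and the presence of a signature.
def ssp_check(policy, has_signature):
    """policy: hypothetical values NO_MAIL, MUST_SIGN, NEVER_SIGNS, NEUTRAL."""
    if policy == "NO_MAIL":
        return "reject: no mail expected from this domain"
    if policy == "MUST_SIGN" and not has_signature:
        return "reject: signature expected, but not provided"
    if policy == "NEVER_SIGNS" and has_signature:
        return "suspicious: signature not expected, but it was provided"
    return "continue"  # nothing obvious; proceed to verification
```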

The others that we still haven't resolved:

  - 3rd party signed, but it wasn't allowed or expected

The issue related to the HASH method can only be resolved in two ways:

  1) The signer uses the lowest common denominator in all signings,
     including the new unknown hash method.

  2) The signer predetermines the verifier's capabilities rather than
     taking its chances with FSUSP issues.

Without something like this, it is a failure, and as you say, it can fall into the FSUSP mode, and that just breeds even more abuse and uncertainty.
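Option 1 above can be sketched as dual-signing: the signer emits one DKIM-Signature per hash method, so a legacy verifier always finds a signature it can handle. The selector names, the "rsa-newhash" algorithm tag, and the truncated `b=` value are all illustrative, not real registered values:

```python
# Lowest-common-denominator signing, sketched: one signature per
# algorithm, so legacy verifiers are never forced into FSUSP territory.
def dual_sign(selectors):
    """selectors: mapping of selector name -> algorithm tag (illustrative)."""
    headers = []
    for selector, alg in selectors.items():
        headers.append(
            "DKIM-Signature: v=1; a=%s; s=%s; d=example.com; b=..."
            % (alg, selector))
    return headers

sigs = dual_sign({"legacy": "rsa-sha256", "next": "rsa-newhash"})
```

A verifier that only knows rsa-sha256 simply verifies the "legacy" signature and ignores the other, which is the whole point of the lowest common denominator.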

Just consider: if the signer has a "MUST SIGN" SSP, then FSUSP might even promote a rejection because of the new "unsigned" status. I don't think this is what the authors wanted, but that is exactly what can happen.
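That interaction composes mechanically. A hypothetical sketch (status and disposition names are mine) showing a merely failed signature ending up rejected outright:

```python
# FSUSP demotes a failed signature to "unsigned"; a MUST-SIGN policy
# then rejects the now-"unsigned" message. Names are illustrative.
def fsusp(status):
    # the promotion rule: INVALID is treated as if no signature exists
    return "NO_SIG" if status == "INVALID" else status

def apply_must_sign(status):
    # a MUST-SIGN domain policy: unsigned mail is rejected
    return "reject" if status == "NO_SIG" else "accept"

outcome = apply_must_sign(fsusp("INVALID"))  # the unintended rejection
```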

--
HLS


_______________________________________________
NOTE WELL: This list operates according to http://mipassoc.org/dkim/ietf-list-rules.html