ietf-dkim

Re: [ietf-dkim] New canonicalizations

2011-05-19 12:37:43
Ian Eiloart wrote:

Levine was making a point:

The point I was making was that ever more complex ways to decide that
two mutated versions of a message are "the same" aren't likely to help
much, certainly not compared to the large cost of implementing code
even more complex than what relaxed does now.  

To determine that, we'd need a pareto analysis of breakage modes. 

In a way, he was making a general appeal to Pareto optimality. An 
outcome is Pareto efficient when no change can make one party better 
off without making another worse off. Demanding ever more from one 
component for the benefit of the others is not Pareto efficient if it 
degrades the overall system :)

Presumably lists that aren't re-signing are responsible for some of this, 
as are broken signing mechanisms. The questions remaining are, "is there 
anything left after excluding those two cases?", and "how much of that 
could be fixed easily?".

Sadly, for many, unless one sees the "oops" on their own system, it 
doesn't mean much for the rest - Pareto, the Prisoner's Dilemma, chaos 
theory, game theory, unit operations, etc.

Back to practical reality, here is my "basic" problem with DKIM:

Why are we doing it?  Why is the same message in two different 
streams, ok in one and not the other?  How should the user be 
"trained" to see author "PDQ Public" signed here but not signed there?

I started to use isdg.net to sign my mail for the IETF-related lists, 
and I use the l= tag:

  - PASS, IETF-DKIM, DKIM-aware MLM, 3rd party resigned,
          ATPS/ACL authorized

  - FAIL, IETF-SMTP, NON-DKIM-aware MLM, 1st party signed, no body change
          except extra top <CRLF>

  - PASS, IETF DISCUSS, NON-DKIM-aware MLM, 1st party signed, adds footer.

The WG consensus says the IETF-SMTP stream is unimportant and prefers 
to pass the buck to the software on that stream.  It could be an easy 
fix in the MLM, but is the developer listening? If it is open source, 
who is making the change?

It is a form of Pareto analysis when one deems that DKIM efficiency 
has been reached and that it includes an acceptable margin of error.  
However, when that acceptable margin of error becomes an opening for 
exploitation through overwhelming receiver abuse, it weakens the 
overall efficacy of DKIM to live in this growing indeterminate DKIM 
world.

DKIM will need to naturally evolve to target signing to reduce the 
inconsistent results in a 1 to Many environment.

Consider this:

Based on the way DKIM has been modeled in RFC4871bis, the most 
efficient optimization (lowest overhead) would be:

   - Before any HASH computation is done,

   - Extract all the DKIM-Signature headers and collect each d= signer
     domain.

   - Check whether the signer domain is in your local WHITELIST table, or
     use some protocol shim, callout, "DNS whitelisting", whatever.

   - If no trusted signer is found, then there is no need to do anything
     else.

   - If one trusted signer is found, then do the validity check.

   - If more than one trusted signer is found, well, I guess only one is
     needed.
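The steps above can be sketched as a small pre-filter. This is a hypothetical illustration, not a real verifier: the whitelist contents and header string are made-up example data, and it ignores header folding across lines, which real DKIM-Signature fields often use.

```python
import re

# Local whitelist of trusted signer domains (example data).
TRUSTED_SIGNERS = {"isdg.net", "ietf.org"}

def signer_domains(raw_headers: str) -> list[str]:
    """Collect the d= tag from every DKIM-Signature header field."""
    domains = []
    for line in raw_headers.splitlines():
        if line.lower().startswith("dkim-signature:"):
            m = re.search(r"\bd=([^;\s]+)", line)
            if m:
                domains.append(m.group(1).lower())
    return domains

def worth_verifying(raw_headers: str) -> bool:
    """True only if at least one signer is whitelisted -- the hash
    computation and DNS public-key lookup happen only in that case."""
    return any(d in TRUSTED_SIGNERS for d in signer_domains(raw_headers))
```

The point of the ordering is that the string scan and set lookup are nearly free, while the hash and DNS work they gate are the expensive part.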

So in the same vein that we interpret a calculated invalid signature as:

    "No Valid Signature Exists!"

the trust-failure interpretation is to say:

    "No Trusted Signature Exists!"

Why bother with all the redundant overhead of hash calculations and 
DNS public-key lookups when the RFC4871 end goal is to get 
third-party certified trust?  If the signer is unknown, DKIM 
authenticity has no value.

-- 
Hector Santos, CTO
http://www.santronics.com
http://santronics.blogspot.com


_______________________________________________
NOTE WELL: This list operates according to 
http://mipassoc.org/dkim/ietf-list-rules.html