
Re: [dkim-ops] hammering with a soldering iron, was subdomain vs. cousin domain

2010-09-13 17:00:27
This message is hard to answer because my reply might be read the
wrong way, but I will try.

Murray S. Kucherawy wrote:

>> If anything Murray, traceability - verifiers and assessors would know
>> who is the responsible signer and it isn't the principal author domain.
>
> Has that been shown yet to be important at receivers?  Are
> there any current implementations with data to show that this
> is a useful thing to track?
>
> I'm happy to believe that it is important, but so far all I've
> seen is a lot of argument over theory and not much real data.

What more data do you need about who is the signer? In this particular
thread, you described a DNS provisioning method for authorizing a 3rd
party signer to create a 1st party signature (5322.From == DKIM.d).
According to RFC 4871, the Signer is the "responsible" domain, yet
4871bis is trying very hard to break the author domain association.
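
Here is a minimal sketch (mine, purely illustrative, not from any
implementation in this thread) of the check a verifier would make to
classify a signature as 1st party or 3rd party, assuming the 5322.From
domain and the DKIM d= tag have already been extracted:

    # Hypothetical sketch: classify a DKIM signature as 1st or 3rd party.
    # from_domain comes from the RFC 5322 From: header; d_tag comes from
    # the DKIM-Signature d= tag. Names are illustrative, not from RFC 4871.
    def classify_signature(from_domain: str, d_tag: str) -> str:
        from_domain = from_domain.lower().rstrip(".")
        d_tag = d_tag.lower().rstrip(".")
        if d_tag == from_domain:
            return "1st party"            # 5322.From == DKIM.d
        if from_domain.endswith("." + d_tag):
            return "1st party (parent)"   # d= signs for its own subdomain
        return "3rd party"                # responsible signer != author domain

    print(classify_signature("example.com", "esp-signer.net"))  # 3rd party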


>> With the advent of this anticipated new reputation scoring market, it
>> would be the primary domain at risk - not the passive 3PS service.
>> The 3PS domain is protected from harm while collecting the bucks. :)
>
> Is that theory, or is that data?

Actually, engineering intuition was all that was needed. But if data
or more engineering intuition helps, Fenton has this to say:

   http://blogs.cisco.com/news/comments/growth_in_dkim_signing_continues

    As DKIM deployment grows, so does the ability to use signatures as
    the basis for domain-based reputation.  This is a double-edged
    sword: a good reputation can enhance deliverability, but domains
    that send (and sign) messages that are considered undesirable by
    recipients can quickly tarnish that reputation and have much the
    opposite effect.

Dkim-reputation.org has extensive writing on the negatives as well.

    http://www.dkim-reputation.org/mission/

    With DKIM reputations I can stop known DKIM spammers,
    what's the advantage for good DKIM senders?

    There is only an advantage for good senders if their mail
    gets higher scores (i.e. more reliable delivery rates),
    precondition is (a) a valid DKIM signature (b) no hit in
    the DKIM reputation database. We think you don't do
    anything wrong if you give a slightly better score to
    good DKIM senders, but we warn to increase this value too
    much. We think about adding positive reputation to some
    senders that we detected to be good senders. The
    important point is: although there are some whitelisted
    senders we must check their mail traffic to react on
    dynamical bad behaviour.
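
A minimal sketch (mine, purely illustrative) of the scoring rule they
describe - a small bonus only when (a) the signature verifies and (b)
there is no hit in the reputation database. The numeric values are my
assumptions, not theirs:

    # Illustrative sketch of the dkim-reputation.org scoring rule quoted
    # above. All numeric values are assumed, not from their system.
    GOOD_SENDER_BONUS = 0.5    # deliberately small, per their warning
    SPAMMER_PENALTY = -5.0     # assumed penalty for a reputation hit

    def dkim_score(signature_valid: bool, reputation_hit: bool) -> float:
        if not signature_valid:
            return 0.0                  # no valid signature, no effect
        if reputation_hit:
            return SPAMMER_PENALTY      # known DKIM spammer
        return GOOD_SENDER_BONUS        # good DKIM sender: slight boost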


    False-Positives hurt in the DKIM-Reputation Project, what can
    be done against?

    The fact that DKIM identities have a longer lifetime
    across several messages leads to more negative impact of
    erroneously black listed but actually not spamming
    senders. In our system the reputation curve of a spamming
    address is linear and continuously goes back to 'neutral'
    after 100 days. For this period of time we keep the
    referred spam mail as a proof in our database. If you
    contact us for a detailed examination of a spam hit we
    can review and re-rate a spam hit. At time our traffic is
    low so we don't need an automated process for this.

    Important: the bad thing about blacklists is copying one
    to another. We emphasize that revocation of reputation
    can only work properly if the data is synchronized
    frequently with our data source.
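
Their linear "back to neutral after 100 days" curve is simple enough
to sketch. Only the 100-day linear recovery is theirs; the penalty
magnitude below is my assumption:

    # Illustrative sketch of the linear reputation curve quoted above:
    # a spam hit drives the score negative, then it recovers linearly
    # to neutral (0.0) after 100 days. The -10.0 penalty is assumed.
    DECAY_DAYS = 100

    def reputation(days_since_hit: float, hit_penalty: float = -10.0) -> float:
        if days_since_hit >= DECAY_DAYS:
            return 0.0                              # back to neutral
        return hit_penalty * (1.0 - days_since_hit / DECAY_DAYS)

    print(reputation(0))     # -10.0  fresh hit
    print(reputation(50))    #  -5.0  halfway recovered
    print(reputation(100))   #   0.0  neutral again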

Do you think this is indeterminate enough?

> I don't think the goal is to de-emphasize policy.  I suspect lots
> of us would like to see that capability.  But the systems we're
> using today have limitations and entrenched history we can't
> simply ignore for the sake of convenience.  Moreover, it would
> be completely silly to roll out a standards track policy
> specification before there's experimental data to back up the
> notion that it will work.

RFC 4871 was considered by many to have been fast-tracked and rubber-
stamped without any real supporting data.  We all know why, too, but
it certainly wasn't for the betterment of network-wide adoption.  It
had a very limited scope that morphed into an out-of-scope, unproven,
wide reputation model, while allowing list operations to diminish the
proof of concept for Policy.
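
For readers who haven't followed the Policy work, here is a minimal
sketch of what an ADSP-style (RFC 5617) author domain policy lookup
looks like. It assumes the dnspython package and is illustrative
only, not any WG implementation:

    # Illustrative ADSP (RFC 5617) policy lookup using dnspython.
    import dns.resolver

    def lookup_adsp(author_domain: str) -> str:
        try:
            answers = dns.resolver.resolve(
                "_adsp._domainkey." + author_domain, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return "unknown"            # no policy published
        for rdata in answers:
            txt = b"".join(rdata.strings).decode("ascii", "replace")
            if txt.startswith("dkim="):
                return txt[5:]          # "all", "discardable", or "unknown"
        return "unknown"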

> In essence we're practically busting at the seams to deliver
> information to verifiers when we don't know if that information
> is even a little bit interesting to them.

Well, that's because we began to break some engineering principles.
You have to work out the deterministic aspects of the model first,
the basic framework, before you go into the unknowns. You can't have
conflicts in the semantics. You can't have an author take over a WG
item that he doesn't believe in.  You can't begin with iffy, unknown
guidelines that promote "SIGN, SIGN, SIGN and figure out the rest
tomorrow."

The way I see it, back in 1982 we knew that SMTP had security issues,
namely with the return path, and we promised a solution. But it never
came, because the mindset was that it wasn't a feasible enough
problem. That mindset was written in stone and carried over into RFC
2821:

    This specification does not further address the authentication issues
    associated with SMTP other than to advocate that useful functionality
    not be disabled in the hope of providing some small margin of
    protection against an ignorant user who is trying to fake mail.

I asked Klensin to change this for 2821bis:

    This specification does not further address the authentication issues
    associated with SMTP other than to advocate that useful functionality
    not be disabled in the hope of providing some small margin of
    protection against a user who is trying to fake mail.

The word IGNORANT was removed! Wonderful! :) But it still carries a
naive mindset about the reality.  It isn't a small margin - there is
a big margin to do something about it.

So we allowed the relaxed provisions to endure, and we ended up with
a mess.  I can understand it back then. But after 25+ years and
millions of man-years, we have enough engineering intuition and
understanding to see that DKIM started out good but took a bad turn
into yet another relaxed protocol.

The question is: are we going to wait 2, 5, or even 10 more years
before the next generation realizes "Those old geezers in the 2000s
created a DKIM FOOBAR!"?

We don't need more DATA to see there is a problem and a wide spread
of indeterminate situations.  Engineering intuition tells us we need
a solid deterministic backbone design and a protocol protection layer
for DKIM, and to let the rest be based on that.

-- 
Hector Santos, CTO
http://www.santronics.com
http://santronics.blogspot.com

