Barry Leiba wrote:
But I have to consider customer-site patterns with heavy Facebook
users seeing tons of FB notifications, and see whether a simple check
can add to the optimization.
Mike has a point, but I agree that this would be a problem for large
ISPs, where adding 10% more overhead for all Facebook messages would
be something they'd want to avoid. But...
Why is computing the hash a problem? Surely, you'd only compute the
hash once, regardless of how many signatures (dups or not) the message
has.
Good point, the body hash is calculated once for the message. No
overhead there.
Now the only time there's more overhead is if the hash DOES match, but
the signature still fails to validate.
Right, in general, that is what came to mind. bh= and b= would have to
be the same.
That ought never to be true for real Facebook mail.
Right, that would be the expectation, because the domain is
recognized, well-known, and 1st-party only. There is a sense of
(unverified) "higher" trust (or lesser suspicion) with 1st-party
signatures from well-known domains.
But you do realize FB has a *major* social network virus/hacking
problem? Once a user is infected, all their friends get comments on
their pages and/or emails with "click this" phishing attacks:
"look at what they are saying about" click this
"Oh, this is funny!" click this
"OMG! WTF!" Click this.
"Oh, so cute!" click this
etc. The last one I got yesterday from my cousin
XXXXX sent you a message.
Subject: hi
"! C00l Vide0 !
http://www.facebook.com/xxxxxxxxxxxxxxxxxxxxx/"
That prompted me to call him and tell him about it, and he said,
"I know, everyone is bitching at me." So I advised him to delete the
yahoo account he used for Facebook and get another one from yahoo,
hotmail, or whatever. I had every ability to give him one of ours, but
I avoided bringing up that suggestion. :)
I joined FB for a July family reunion thing and wow, nearly once a
week a lay person gets hit and has to delete their account because all
their friends (or maybe a hijacked address book on their PC) were
getting slammed with these social-network email, feed, or wall
comments carrying attack propagations.
Back in the day, many people blocked other well-known domains when
their easily exploited accounts and abusive traffic became a major
burden for users and their hosting systems. That included msn.com,
yahoo.com, juno.com, hotmail.com, compuserve.com, aol.com, and all the
other early well-known domains. So just as they were eventually
scrutinized more than others, obviously FB is not going to get a free
pass.
All I am saying here is that I prefer to first design the software to
treat all domains the same. It is a powerful idea to know that
FacebookMail.com is signing with valid 1st-party signatures. That is a
great first step. But with it comes the danger of giving users a false
impression that "everything is OK" because it was signed by
Facebookmail.com, and probably even worse if coupled with a Certified
Reputation Service vendor lookup that stamps it with a Gold
Certification Star.
(This is why I think what Facebook should be more worried about is
making sure failures are kicked out. The best way right now to do that
is ADSP. You can't do much about the good stuff, even when it's
stamped with a gold star.)
(Of course, attackers could put in fake sigs
with valid hash values, as a form of DoS. But we've discussed that
before.)
Shouldn't that work for everything, and be very easy?
At this point, Barry, it was more an observation made while working on
a DKIM log-stats analyzer, finding subtle patterns and considering
whether it's worth further analysis. Optimizations that might be
useful for coding and operations are always on my mind.
The software is on auto-pilot. The original version followed concepts
in the drafts, as well as concepts talked about here (or on the
original list), especially when Eric was more involved, regarding
multiple signatures and how best to handle them, with specific
concerns about mixed results. The most common issue was:
Why should 1 valid signature trump all other
signature failures?
Hence the ideas about resigners who modify content or specific hashed
headers to strip older signatures.
I don't recall any consideration or mention of what to do when
duplicates were found, and if I recall correctly, it might have been
before we added the body-hash concept in an attempt to determine which
part (header or body) broke the integrity.
In any case, with the software on auto-pilot, you are right: there is
no redundancy in body-hash calculation with multi-signature messages.
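As a rough illustration of that single-pass behavior, here is a
hypothetical sketch (not the actual verifier code): the body hash is
computed once per (canonicalization, algorithm, length-limit)
combination and reused across all the signatures on a message. Real
DKIM would canonicalize the body first per RFC 6376; that step is
omitted here, and the tag defaults are assumptions.

```python
import hashlib

def body_hashes(body, signatures):
    """Return one bh digest per signature, hashing the body only once
    for each distinct (c=, a=, l=) combination."""
    cache = {}
    hashes = []
    for sig in signatures:  # each sig: a dict of parsed DKIM-Signature tags
        key = (sig.get("c", "simple/simple"),   # canonicalization
               sig.get("a", "rsa-sha256"),      # signing algorithm
               sig.get("l"))                    # optional body length limit
        if key not in cache:
            algo = "sha1" if key[1].endswith("sha1") else "sha256"
            data = body if key[2] is None else body[:int(key[2])]
            cache[key] = hashlib.new(algo, data).digest()
        hashes.append(cache[key])
    return hashes
```

With ten Facebook signatures sharing the same c=/a= tags, the body is
read and hashed exactly once.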
So what have I learned here?
1) Support Issue
This was an FB bug, so we could pass the buck and avoid the
issue. However, vendors in the market can't always afford to
pass the buck and need to deal with support issues. Smaller
vendors are more nimble and can help customers faster (and
more directly) than larger ones. That level of support is why
I am still in business. I am presuming there is an FB employee
lurking here who will have the problem fixed, so I am not
going to worry about this.
Our goal is to minimize support cost. I'm not saying this was
one that raised it, but it's all part of the process of
minimizing it.
2) New Dupe Checking rule for code:
If the bh= and b= (which implies the same h=) are the same,
then it's a dupe and there is no need to re-verify b=. This
means new coding to check for signature dupes and avoid
redundant verification loops. However, it's probably not worth
the extra coding; just let it re-verify b= and reach the same
end result.
I can see a useful warning or note in the logs, "Duplicate
Signatures." People like to see such things.
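That dupe check can be sketched as a simple pre-pass (hypothetical
code, assuming each signature is already parsed into a dict of tags):
signatures whose bh=, b=, and h= values all match are byte-identical
duplicates, so only the first needs cryptographic verification.

```python
def dedupe_signatures(sigs):
    """Split a list of parsed DKIM-Signature tag dicts into unique
    signatures and duplicates (same bh=, b=, and h= values)."""
    seen = {}
    unique, dupes = [], []
    for sig in sigs:
        key = (sig.get("bh"), sig.get("b"), sig.get("h"))
        if key in seen:
            dupes.append(sig)   # worth a "Duplicate Signatures" log note
        else:
            seen[key] = sig
            unique.append(sig)
    return unique, dupes
```

The verifier would then run only over `unique` and reuse each result
for its duplicates.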
3) New AVS Tracking rule:
If a particular FB email notification is marked as having an
SNPA (Social Network Phishing Attack), then its bh= can be
recorded to short-circuit future notifications.
This is generally efficient only over short periods, as new,
similar attacks are slightly modified.
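A minimal sketch of that tracking idea (hypothetical, with an assumed
one-hour TTL): known-bad bh= values are kept in a short-lived table,
and entries expire because the attack content mutates quickly.

```python
import time

class BodyHashBlocklist:
    """Short-lived table of bh= values seen in known phishing
    notifications; lookups short-circuit full verification."""

    def __init__(self, ttl_seconds=3600):  # assumed TTL, not from the source
        self.ttl = ttl_seconds
        self.entries = {}  # bh value -> time recorded

    def record(self, bh):
        self.entries[bh] = time.time()

    def is_blocked(self, bh):
        ts = self.entries.get(bh)
        if ts is None:
            return False
        if time.time() - ts > self.ttl:
            del self.entries[bh]  # expired: new attacks won't match anyway
            return False
        return True
```

A verifier would check `is_blocked(sig["bh"])` before doing any
cryptographic work on a suspect notification.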
Not saying it will be done, but I try to look at all things. :)
--
_______________________________________________
NOTE WELL: This list operates according to
http://mipassoc.org/dkim/ietf-list-rules.html