Re: [ietf-dkim] Issue 1386 and downgrade attacks
2007-02-28 11:24:40
--On February 26, 2007 4:23:47 PM -0800 Douglas Otis
<dotis(_at_)mail-abuse(_dot_)org> wrote:
On Feb 26, 2007, at 2:31 PM, Eric Allman wrote:
Folks, I've been trying to understand the issues here, and I just
can't seem to wrap my head around it, which means that either (a)
there isn't actually an issue, or (b) there is and I just don't
get it. Let me try to argue for why (a) looks to be true to me.
There are three algorithms that might transition: the Signature
Algorithm, the Hash Algorithm, and the Canonicalization Algorithm.
There are more aspects related to DKIM than just signature, hash,
and canonicalization algorithms. At this point, it would be
difficult to predict which area will prove most problematic.
Assumptions about header ordering together with weak associations
may prove to be a problematic area.
Yes, it may be a problematic area, but it is completely irrelevant to
this discussion.
I'll take those one at a time. But first, let me rephrase EKR's
model slightly, since it should apply to all the cases. I'm
adding a case SN here to mean "sender does not sign at all".
I'm also ordering things a bit differently so that the expected
transition moves from the top left to the bottom right:
        RA    RAB   RB
SN      N*    N*    N*
SA      A     A     X*
SAB     A     AB    B
SB      X*    B     B
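To make the chart concrete, here is one way to read it as a lookup
table (a sketch of my own reading; I take the starred entries to be
the ones where the message ends up handled as unsigned):

    # One reading of the chart as a lookup table: (signer, verifier)
    # -> what the verifier ends up with.  "A"/"B"/"AB" = which
    # signature(s) it can verify; "N" = no signature at all; "X" =
    # a signature is present but uses an algorithm this verifier
    # does not support, so the message is handled as unsigned.
    outcome = {
        ("SN",  "RA"): "N", ("SN",  "RAB"): "N",  ("SN",  "RB"): "N",
        ("SA",  "RA"): "A", ("SA",  "RAB"): "A",  ("SA",  "RB"): "X",
        ("SAB", "RA"): "A", ("SAB", "RAB"): "AB", ("SAB", "RB"): "B",
        ("SB",  "RA"): "X", ("SB",  "RAB"): "B",  ("SB",  "RB"): "B",
    }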
ASSUMPTION 1: A < B strength-wise.
ASSUMPTION 2: No old algorithm A becomes "too weak" overnight,
where "too weak" means that there is a feasible exploit before a
transition can complete (that is, until no "interesting" R uses
algorithm A). If, for example, someone figures out a way to crack
RSA in O(N) time, then we (and the entire rest of the net) need to
move off of it at once, and we are all hosed, and frankly DKIM
will not be the biggest problem on the net. The exception to
this is Canonicalization, since that applies only to DKIM (but
see below).
DKIM also has other unique features.
Yes, it does. But that's also irrelevant to this discussion.
ASSUMPTION 3: Attackers cannot change or insert selector records
for S. If they could, this wouldn't be a downgrade attack, and
there would be lots of simpler ways to forge messages.
However it might be possible to incorporate deprecation information
into various third-party services to bolster possible server
weaknesses.
I have no idea what you are talking about here. Third parties such
as reputation servers?
CASE I: SIGNATURE ALGORITHM
Scenario: S (the sender) starts signing using both A and B, and
publishes selectors for each (i.e., we move down in the chart from
SA to SAB). As the time goes on we move from left to right in
the chart: R first checks using A, then can try either (and
presumably prefers B), then refuses to use A any more. If S
still does not implement B then we've dropped back to the SN
case. This is the O (years) transition. If an attacker can
successfully mount an attack using A during this transition time
(i.e., before S pulls all selectors using A), then Assumption 2
(that A is not "too weak") is not true, and we have bigger
problems as described above.
However, during an N year transition, a verifier is still prone
to a downgrade attack. Such an attack might only be seen by high
value targets.
No, that's just wrong. That's the point of my posting. If you have
something valuable to say about my logic (I accept that it could be
wrong) then please point it out, but don't just keep waving the
"downgrade attack" flag. And the point about high value targets is
irrelevant. The spec is designed to work for everyone.
If R upgrades first, then we move right and then down rather than
down and then right, but the arguments remain the same.
If S only uses B, an attacker trying to use A would have to
somehow be able to create a selector for an A key, but that
violates Assumption 3.
During a long transition, the B-only option will be highly
disruptive. That disruption prevents prompt mitigation of
downgrade attacks, even when both signer and verifier have
upgraded, because there is no means to assert that an algorithm
has been deprecated.
I'm tempted to say "well, duh." That's the reason why senders will
probably want to support both A and B for a fairly long period. But
there will always be some verifiers that do not upgrade, and at some
point the signers are going to drop support for A, and that will
create problems for verifiers who haven't upgraded.
If the transition is short, then I'll be concerned about it. But if
the transition is multiple years and the verifiers still haven't
upgraded I'm not going to lose any sleep over it. By the way, this
is Assumption 2.
If a signer does go from SA to SB without passing through SAB (or not
maintaining it long enough) then they will be hurting themselves,
since (Assumption 5, from my previous mail to Charles, saying that
verifiers actually implement the spec as written) their mail will be
treated like any unsigned message --- which in the long run means
"poorly".
ASSUMPTION 4: Keys for algorithms A and B are not compatible.
This is ensured by the k= tag in the selector.
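For example, the two keys live under different selectors, and each
selector record names its own key type via k= (records are
illustrative only; "newalg" is just a stand-in for whatever B's key
type ends up being called, and the key data is elided):

    sel-a._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=..."
    sel-b._domainkey.example.com.  IN TXT  "v=DKIM1; k=newalg; p=..."

A verifier that fetches sel-a gets a key usable only with A-type
signatures, and vice versa.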
There might be a case where S supports both A and B but chooses
which algorithm it uses based on knowledge of R (for example, S
always signs with B but includes or excludes A based on some
exception list). In this case an attacker might sign a message
using A and a valid selector. But this violates Assumption 2.
The level of exposure may vary depending upon target value. A high
value target may see expensive exploits at a rate higher than
generally seen elsewhere.
No, the level of exposure does not vary depending on target value,
although the likelihood of an attack might. I think that's what you
are trying to say, but it's not what you did say.
The signer may need to wait a long time
before being able to drop a problematic algorithm.
Yes, that's expected, and not a bad thing. See Assumption 2.
Selectively
using an algorithm based upon the recipient would be fairly
onerous.
For the general case yes, but not for special cases.
The actual endpoint of the message is not really known, so
verifier compliance cannot be easily determined.
I assume this sentence is intended to emphasize how hard it is. Yes,
for the general case it is hard --- very, very hard.
Once a
majority of users are protected by adoption of a newer algorithm
in conjunction with a deprecation assertion, this will greatly
reduce the opportunity for a successful exploit.
You've said this many times without providing evidence. That's not
helpful.
CASE II: HASH ALGORITHM
The transition is similar. If A is shown to have a feasible
preimage attack, then Assumption 2 is violated and S has to stop
publishing selectors using A --- and by the way every protocol on
the net using A is also vulnerable. For example, if A == SHA-1
then DKIM isn't the biggest target out there.
S can indicate which hash algorithms it uses by using h= in the
selector record, so this case pretty much reduces to the previous
one. This implies that although h= is optional, it should always
be used once there are any deprecated hash algorithms --- not an
onerous requirement.
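For instance, once the older hash is deprecated the selector record
can simply stop listing it (again illustrative; key data elided):

    sel-b._domainkey.example.com.  IN TXT
            "v=DKIM1; k=rsa; h=sha256; p=..."

A verifier honoring h= will then refuse to accept a sha1 signature
made with that key, even if an attacker manages to produce one.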
The level of expense needed to stage an attack may limit the number
of sites exploited. The exploit overhead might change some
assumptions when compromised systems are focused into attacking
specific targets, for example. This might have a significant
payoff when these messages are otherwise trustworthy.
I have no idea what you are trying to say.
CASE III: CANONICALIZATION ALGORITHM
This one is somewhat different from the others, since there is no
way for S to communicate to R which canonicalization algorithms it
uses. However, since the DKIM-Signature header field is included
in the signature and that field includes the canonicalization
algorithm, then for an attacker to change a message would be
equivalent to either I or II.
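To spell that out: the c= tag rides inside the very header field
that gets signed, so it cannot be swapped without breaking b=. An
illustrative field (data elided):

    DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
            d=example.com; s=sel-b; h=from:to:subject:date;
            bh=...; b=...

The DKIM-Signature field itself (with an empty b= value) is fed into
the hash, so an attacker who alters c= invalidates the very
signature being replayed.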
There are already changes coming that might wreak havoc with DKIM
canonicalization. What changes might be required to thwart
weaknesses created when EAI headers are adopted? There are also
known weaknesses with respect to signature/header association.
These weaknesses may be an inability to ensure a verifier is not
also exposed to DDoS exploit without also foregoing DKIM
protections.
Not relevant to this discussion.
CASE IV: SIGNATURE AND HASH ALGORITHMS SIMULTANEOUSLY
There may be a case that would be problematic. Suppose both the
signature and hash algorithms were changing at once. We have
signature algorithms A and B, and hash algorithms H and J. Let us
further suppose that for some reason
AH << AJ < BH < BJ
In particular, AH is an unacceptable ("too weak") algorithm
combination, but any of the other combinations are strong enough
(the ordering between them is irrelevant). In this case there is
no way for S to tell R not to use AH, and S just has to rely on
all the Rs out there to be smart about it. I suspect this isn't a
terrible burden on receivers if this is understood when the
software is upgraded (that is, when the RABHJ code is installed).
If it's
not understood then there is a problem. To fix this rather
unlikely case would require a policy lookup on every message and a
policy language rich enough to express the combinations. My take
is that it is unlikely enough that not handling it is a valid
engineering tradeoff.
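What "being smart about it" amounts to on the receiving side is
little more than a table of acceptable combinations. A minimal
sketch, with rsa-sha1 standing in for the hypothetical too-weak AH
pairing:

    # Sketch of the receiver-side policy: verify only signature/hash
    # combinations still considered strong enough, preferring the
    # strongest one offered on the message.
    PREFERRED = ["rsa-sha256"]       # acceptable a= values, best first

    def pick_signature(a_tags):
        for alg in PREFERRED:
            if alg in a_tags:
                return alg           # verify using this signature
        return None                  # nothing acceptable; treat as unsigned

No policy lookup or protocol change is needed for this; it only has
to be in the RABHJ code when it ships.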
By adding a "Please use X" to the AH algorithm within the key or
signature would not require a rich language or additional
transactions. If X is not available, then AH would be invalid.
This simple statement prevents a downgrade attack, which is
especially important when verifiers understand X.
It's not at all clear to me what "X" is. Assuming it is the name of
another algorithm (AJ, BH, or BJ) then your proposal doesn't work.
An attacker isn't going to include a signature that says "by the way,
I'm a bad algorithm", and the selector corresponding to A can't say
that it should not be used because it can be used fine with J.
EPILOGUE
Much of this relies on Assumption 2, and perhaps that's what is
actually being discussed in this straw-poll thread. But I think
I've argued for why that's a good assumption. In summary, this is
why I think we should just close 1386.
Fire away.
Rather than assuming nothing unexpected will be discovered in
coming years, plan for a graceful transition now. This may
greatly increase the protection DKIM offers in the face of
adversity. This involves rather simple definitions that can
remain unused until a problem becomes evident.
When did I assume that nothing unexpected will be discovered? On the
contrary. If I believed that I would have argued against specifying
the algorithms at all. I *do* make Assumption 2 that there won't be
a major breakthrough that makes an existing, believed-strong
algorithm into something that is trivial to attack overnight, and I
state that very clearly. As I argued before, if Assumption 2 is
wrong then the DKIM problem will be the least of our worries. If
SHA-1 turns out to have a trivial preimage attack then the banking
industry is toast. If RSA turns out to be a P rather than an NP
problem, then security as we know it on the net is gone. Hence, in
the context of DKIM, Assumption 2 is valid.
eric
_______________________________________________
NOTE WELL: This list operates according to
http://mipassoc.org/dkim/ietf-list-rules.html