ietf-dkim

Re: [ietf-dkim] New Issue: review of threats-01

2006-03-19 23:24:13
Eric Rescorla wrote:
S 1.1:
Please expand the term "AU" before use in this figure.
  
Will do.
S 2.3.2:

   Bad actors in the form of rogue or unauthorized users or malware-
   infected computers can exist within the administrative unit
   corresponding to a message's origin address.  Since the submission of
   messages in this area generally occurs prior to the application of a
   message signature, DKIM is not directly effective against these bad
   actors.  Defense against these bad actors is dependent upon other
   means, such as proper use of firewalls, and mail submission agents
   that are configured to authenticate the sender.

I think this understates the risk. Because the attacker directly
controls the compromised computer, he has access to all the user's
authentication credentials, and therefore firewalls and authenticated
mail submission are ineffective against this attack. The attacker
can simply emulate the user's MUA and send through the MTA.
  
This is a bad act we're not trying to address, because it does not
involve address spoofing (unless you are arguing that the attacker, by
proxying through a compromised computer, is in fact spoofing that user's
address).  In fact, I would argue that DKIM is quite effective, because
it limits the attacker to the address(es) corresponding to credentials
on that computer.  Limiting the range of signed addresses the attacker
has available is good, especially when the credentials help identify the
compromised machine.
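To make that point concrete, here is a minimal sketch of a submission-time check that limits a compromised machine to the From addresses its stolen credentials actually cover. The table and function names are hypothetical illustrations, not anything from the draft or from DKIM itself:

```python
# Hypothetical sketch: a mail submission agent signs a message only when
# the authenticated submission identity is authorized for the From
# address.  An attacker on a compromised machine is thus limited to the
# addresses tied to the credentials found there.

# Which From addresses each authenticated user may use (illustrative).
AUTHORIZED_FROM = {
    "alice": {"alice@example.com"},
    "bob": {"bob@example.com", "support@example.com"},
}

def may_sign(auth_user: str, from_addr: str) -> bool:
    """Return True if this submission should receive a DKIM signature."""
    return from_addr in AUTHORIZED_FROM.get(auth_user, set())
```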
S 3.2.

   differences in popular typefaces.  Similarly, if example2.com was
   controlled by a bad actor, the bad actor could sign messages from
   bigbank.example2.com which might also mislead some recipients.  To
   the extent that these domains are controlled by bad actors, DKIM is
   not effective against these attacks, although it could support the
   ability of reputation and/or accreditation systems to aid the user in
   identifying them.

Actually, the attacker doesn't need to control the domain he's
forging mail from, as long as the domain owner doesn't represent
that he signs his messages. Yes, there's an argument to be made
that the messages in question will not otherwise pass through
your content filters, but that needs to be explicitly stated
if that's your theory.
  
That of course is the major argument for SSP, which I should perhaps
mention here.
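For illustration, the SSP argument reduces to a simple decision rule at the receiver. This sketch uses a hypothetical in-memory policy table in place of a real published record, and elides how such a policy would actually be retrieved:

```python
# Illustrative sketch of the SSP-style decision being discussed: if a
# domain asserts that all of its mail is signed, then an unsigned
# message claiming that domain is suspect.  The POLICY table is a
# stand-in for a real published record; names here are hypothetical.

POLICY = {"example.com": "all"}  # "all" = domain asserts it signs everything

def suspicious(from_domain: str, has_valid_signature: bool) -> bool:
    """True if the message contradicts the purported domain's policy."""
    return POLICY.get(from_domain) == "all" and not has_valid_signature
```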
S 3.2.3.

   Another motivation for using a specific origin address in a message
   is to harm the reputation of another, commonly referred to as a "joe-
   job".  For example, a commercial entity might wish to harm the
   reputation of a competitor, perhaps by sending unsolicited bulk email
   on behalf of that competitor.  It is for this reason that reputation
   systems must be based on an identity that is, in practice, fairly
   reliable.

I don't buy this argument. Say that I were the owner of "example.com"
which has a strict (always sign) policy and I decide I want to spam. I
simply use some other domain (e.g., example.net) which doesn't
have that policy and send spam pointing to example.com. When people
claim that I'm the spammer I say "hey, it's not signed. must be a
Joe Job". So, given this obvious mechanism for preserving plausible
deniability, it seems to me that an attacker who wants to damage
my reputation would do exactly the same thing. So, it's not clear
how signing helps here.
  
I'm not clear on what you mean by "pointing to example.com".  If you're
saying that a human will probably confuse example.com with example.net,
that is true, but automated reputation management will not confuse the
two.
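That distinction can be sketched in a few lines: an automated reputation system keys on the exact signing domain, so no lookalike confusion arises. The table and scores below are made up for illustration:

```python
# Sketch: automated reputation keyed on the exact signing domain.
# example.com and example.net are distinct entries, however similar
# they look to a human reader.  Scores are invented for illustration.

REPUTATION = {"example.com": 0.9, "example.net": 0.1}

def score(signing_domain: str) -> float:
    # Exact string match -- lookalike domains cannot inherit a score.
    return REPUTATION.get(signing_domain, 0.5)
```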
S 4.1.
It's not clear to me where the likelihoods for these attacks come
from. They seem quite speculative (and in my opinion wrong in
many cases). Rather than argue about the details, I would simply
remove them.
  
The likelihoods are speculative but actually have prompted less
disagreement than I expected. Stephen Farrell suggested that taxonomy
and I think it's useful as a broad categorization of the threats.
S 4.1.2.

   are not practical to use.  Other mechanisms, such as the use of
   dedicated hardware devices which contain the private key and perform
   the cryptographic signature operation, would be very effective in
   denying access to the private key to those without physical access to
   the device.  Such devices would almost certainly make the theft of
   the key visible, so that appropriate action (revocation of the
   corresponding public key) can be taken should that happen.

You need to be concrete about what you mean by "access" here. Yes, you
can't steal the key, but you can certainly use it until the machine is
re-secured.
  
Fair enough.  Perhaps "denying export of the private key" is more precise?

 
S 4.1.3.

   An MTA probably has enough variables (system load, clock
   resolution, queuing delays, co-location with other equipment, etc.)
   to prevent observable factors from being measured accurately enough
   to be useful for a side-channel attack.  Furthermore, while
   some domains, e.g., consumer ISPs, would allow an attacker to submit
   messages for signature, with many other domains this is difficult.
   Other mechanisms, such as mailing lists hosted by the domain, might
   be paths by which an attacker might submit messages for signature,
   and should also be considered as possible vectors for side-channel

I'm not convinced by this argument. The appropriate reference here is
"Remote Timing Attacks are Practical" from USENIX Security 04.  One of
the key things to remember about side channel attacks is that enough
samples let you factor out the noise. I'm not sure why you raise the
issue this way, because there are known countermeasures to timing
attacks on RSA. Rather than claim that the attack doesn't apply,
why not simply recommend using them?
  
Probably because I don't know what the countermeasures are.  Are they
also described in the USENIX paper you cite?
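For reference, the countermeasure usually cited alongside that paper is RSA blinding: multiply the input by r^e before the private-key operation and divide r back out afterward, so the operation's timing no longer correlates with the attacker-chosen input. A toy sketch (the key below is a textbook toy value, never usable in practice):

```python
# Sketch of RSA blinding, the standard countermeasure to the
# Brumley/Boneh remote timing attack.  Toy key (p=61, q=53) for
# illustration only -- real keys are thousands of bits.
import secrets

n, e, d = 3233, 17, 2753  # toy RSA modulus, public and private exponents

def blinded_sign(m: int) -> int:
    """Compute m^d mod n without exposing input-dependent timing."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        try:
            r_inv = pow(r, -1, n)   # r must be invertible mod n
            break
        except ValueError:
            continue                # rare for a toy n; retry
    blinded = (m * pow(r, e, n)) % n   # blind the input: m * r^e mod n
    s_blind = pow(blinded, d, n)       # private-key op sees only blinded data
    return (s_blind * r_inv) % n       # (m^d * r) * r^-1 = m^d mod n
```

Because (m * r^e)^d = m^d * r mod n, removing the random factor r yields exactly the ordinary signature, while the exponentiation's timing is decorrelated from m.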

S 4.1.4.
   If the verifier observes body length limits when present, there is
   the potential that an attacker can make undesired content visible to
   the recipient.  The size of the appended content makes little
   difference, because it can simply be a URL reference pointing to the
   actual content.  Recipients need, at a minimum, some means of
   identifying the unsigned content in the message.

Please explain how identifying the unsigned content differently helps.
  
There are a lot of things that could be done, including requiring that
the user push a button to see the unsigned content (Thunderbird does
this sort of thing a lot), rendering the unsigned content on a colored
background (probably only very marginally helpful), rewriting embedded
URLs in the unsigned content to render them unclickable, and simply not
displaying it (invisible is different, I suppose).
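All of those options share the same first step: splitting the body at the signed length (DKIM's l= tag). A minimal sketch of just that step; how the unsigned tail is then rendered is a UI decision:

```python
# Sketch: when a signature covers only the first l bytes of the body
# (DKIM's l= tag), everything past that offset is unsigned and a
# verifier/MUA should treat it differently (hide it, highlight it,
# defang its URLs, etc.).  This only performs the split.

def split_signed(body: bytes, l: int):
    """Return (signed, unsigned) portions of the message body."""
    return body[:l], body[l:]

# Example: a 15-byte signed prefix followed by appended spam content.
signed, unsigned = split_signed(
    b"Hello, world.\r\nBUY NOW http://evil.example/\r\n", 15)
```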


4.1.12.  Falsification of Key Service Replies

   Replies from the key service may also be spoofed by a suitably
   positioned attacker.  For DNS, one such way to do this is "cache
   poisoning", in which the attacker provides unnecessary (and
   incorrect) additional information in DNS replies, which is cached.

   DNSSEC [RFC4033] is the preferred means of mitigating this threat,
   but the current uptake rate for DNSSEC is slow enough that one would
   not like to create a dependency on its deployment.  Fortunately, the
   vulnerabilities created by this attack are both localized and of
   limited duration, although records with relatively long TTL may be
   created with cache poisoning.

I'm not sure why you say that the vulnerabilities are "both localized
and of limited duration." This may be true for cache poisoning,
but it's less true for name server hijacking or impersonation.
Given that we know that spammers hijack BGP blocks, this seems like
a substantially more serious threat than you imply.
  
You're right, I over-focused on the cache poisoning threat (as many seem
to be doing lately).  I'll try to reword it so that the comment as a
whole applies to cache poisoning only.
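For context, the record all of these attacks target is the key record the verifier fetches from DNS at <selector>._domainkey.<domain>. A sketch of the query-name construction; the actual TXT fetch and any DNSSEC validation are omitted:

```python
# Sketch: a DKIM verifier looks up the signer's public key as a TXT
# record at <selector>._domainkey.<domain>.  Spoofing the reply to this
# one query (cache poisoning, name server hijacking, impersonation) is
# what the attacks above have in common.

def dkim_key_name(selector: str, domain: str) -> str:
    """Build the DNS name at which the DKIM public key is published."""
    return f"{selector}._domainkey.{domain}"

# e.g. dkim_key_name("brisbane", "example.com")
#   -> "brisbane._domainkey.example.com"
```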

Thanks for your comments.

-Jim
_______________________________________________
NOTE WELL: This list operates according to 
http://mipassoc.org/dkim/ietf-list-rules.html