Blake Ramsdell wrote:
We should cut and paste this whole debate from the IMC Resolving
Security mailing list last year :).
Ouch! I feel like we've been through this before, many times before.
Nonetheless, I still want to respond to a number of statements here
which I feel to be inaccurate.
I'll start with Laurence Lundblade:
The US export law prevents vendors (freeware, shareware or commercial) from
meeting the user's need of strong, globally interoperable crypto.
This really needs to be clarified. The US export law prevents _US_
vendors from meeting the user's need of strong, globally interoperable
crypto. At present, there is nothing stopping a mixture of US and non-US
vendors from meeting these needs. In this scenario, the US vendors would
be barred from exporting their product. Non-US vendors, on the other
hand, would be able to supply the entire world.
This would be the case if there were any algorithm other than the 40-bit
ones listed as MUST.
I should clarify one other point: even listing at least one exportable
algorithm as MUST won't guarantee full interoperability. The remaining
issue is RSA key length. Unless something happened to the export regs
that I'm not aware of, US-export software is still restricted to 512-bit
RSA keys. Therefore, this software won't recognize signatures made with
unrestricted software, and can't be used to send encrypted messages to
unrestricted agents. I'm not sure whether this is still interoperability
or not (I'm not trying to be funny here - I think the underlying problem
is that we haven't agreed on exactly what "interoperability" means).
Paul Hoffman writes:
It's clear to me that some people who are replying on this list haven't
read the draft since they didn't know that the draft has two profiles, a
"restricted" one that has 40-bit only and an "unrestricted" one that has
both 40-bit and tripleDES. Section 2.6 of the draft clearly says when the
restricted profile should be used, and that's pretty damn rarely.
I have a few problems with this. 2.6.3 appears to give users the choice
between incurring the "risk of failed decryption" and defaulting to
RC2/40. Again, I'm not sure we can claim "interoperability" in this case.
I don't mean to whine, but I am quite curious why my earlier proposal
(in which the default algorithm was dependent on the length of the
recipient's RSA key) was abandoned. This proposal maximized
interoperability in the case where one of the agents was restricted, yet
prevented RC2/40 from being the default choice in the case where neither
agent was restricted.
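To make the abandoned proposal concrete, here is a minimal sketch of the idea: pick the default content-encryption algorithm from the bit length of the recipient's RSA key. All names here (EXPORT_RSA_MAX_BITS, the algorithm labels, the function name) are illustrative, not from the draft.

```python
# Sketch of the key-length-dependent default. The assumption is that
# export-restricted agents are limited to 512-bit RSA keys, so a short
# key signals an agent that may only be able to decrypt RC2/40.
EXPORT_RSA_MAX_BITS = 512

def default_algorithm(recipient_rsa_bits: int) -> str:
    """Choose a default cipher when the recipient's capabilities are unknown."""
    if recipient_rsa_bits <= EXPORT_RSA_MAX_BITS:
        # Likely an export-restricted agent: RC2/40 is the safe default.
        return "RC2/40"
    # A longer key implies an unrestricted agent, so default to a
    # strong algorithm instead of the weak one.
    return "3DES"
```

The point is that the key length is already visible to the sender (it is in the recipient's certificate), so this rule maximizes interoperability with restricted agents without ever weakening traffic between two unrestricted ones.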
Section 184.108.40.206 also invites protocol attacks:
If a sending agent using the unrestricted profile has not received a
set of capabilities from the intended recipient for the message, but
the sending agent has received at least one message from the
recipient, and the last message received used RC2 in CBC mode at a key
size of 40 bits (indicating that the recipient only uses the
restricted profile), the outgoing message SHOULD use RC2 in CBC mode
at a key size of 40 bits if the sending agent reasonably expects the
recipient to be able to decrypt the message.
It is not stated here how the sending agent should determine whether it
has received "at least one message from the recipient." Does it simply
use the e-mail address in the "From:" header field to identify the
sender? Or does it require a trusted signature as well?
Let's say the sending agent is Alice, the recipient is Bob, and the
attacker is Mallet. In the first case, all Mallet would need to do is
send mail to Alice saying "From: Bob" and encrypt it using RC2/40. Then,
when Alice sent mail to Bob, rule 220.127.116.11 would kick in, and the
message would also be encrypted with RC2/40. In the second case, there's
no language in this section that would prevent replay attacks. If Bob
had ever generated a signed RC2/40 message (say, by mistake), then
Mallet could forward that message to Alice, which would cause her to use
RC2/40 in her messages to Bob until she received the next signed
non-RC2/40 message from Bob.
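The first attack is easy to demonstrate. Assuming a naive implementation that identifies the sender solely by the "From:" header (all names and data structures below are illustrative, not from the draft):

```python
# Illustrative sketch of the downgrade attack against a naive reading
# of the quoted rule, where incoming messages are keyed only on the
# unauthenticated "From:" header.
last_cipher_seen = {}  # "From:" address -> cipher of last message received

def record_incoming(from_addr, cipher):
    # No signature check: anyone who can forge "From:" can write here.
    last_cipher_seen[from_addr] = cipher

def outgoing_cipher(recipient):
    # The quoted SHOULD: mirror RC2/40 if that is what we last received.
    if last_cipher_seen.get(recipient) == "RC2/40":
        return "RC2/40"
    return "3DES"

# Mallet forges one weakly encrypted message claiming to be from Bob...
record_incoming("bob@example.com", "RC2/40")

# ...and Alice's next message to Bob is silently downgraded to RC2/40.
assert outgoing_cipher("bob@example.com") == "RC2/40"
```

A single forged message is enough to poison the state, and the downgrade persists until a genuine non-RC2/40 message from Bob overwrites it.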
But the second case seems to me to be moot, because if we're requiring
trusted signatures, then Alice should be able to expect a signed (and
timestamped) symmetric algorithm capabilities field in the message,
which would defeat this sort of replay.
Finally, Steve Dusse writes:
I agree with your characterization of the issues and I will attempt to
initiate the separation of protocol from profiling if that seems like an
acceptable path to IETF involvement in S/MIME.
Pardon me for being blunt, but from a user's perspective, having a
single protocol with two different profiles is no different from having
two different protocols. Going this route means trading interoperability
for the privilege of US companies to export software and still have the
S/MIME label. I'm not arguing against this choice, just asking that it
be made clear exactly what's being proposed.
To sum up, I see two choices:
1) Include a strong, non-proprietary algorithm as MUST, and specify that
sending agents MUST use this algorithm when nothing further is known
about the recipient's capabilities.
2) Define two profiles. Only guarantee full interoperability when the
two agents are in the same profile.
Choice (1) would be a strong, globally interoperable solution, but would
make it impossible for US vendors to export a product with the S/MIME
label. As such, it is an invitation to non-US vendors to walk away with
a significant market share. I can see why the US vendors currently in
control of S/MIME would prefer not to see this happen. However, the IETF
has a rich tradition of making design decisions based on the technical
superiority of the resulting protocols, rather than on the business
interests of individual companies.
It will be interesting to see how this gets resolved (this time around).