ietf-smime

Re: Weakening the rigid hierarchical trust model

1998-01-01 15:46:33
On 31 Dec 1997, EKR wrote:

-> Ed Gerck <egerck@laser.cps.softex.br> writes:
-> > The bottom line is that S/MIME aims at a security level which is not
-> > critical, while it can be perfectly operational for the bulk use of the
-> > Internet today. A honest and correct appraisal of this fact must be more
-> > useful than snake-oil, here too. 
-> I absolutely agree. I was simply observing that it's quite possible
-> to create a profile of S/MIME that is usable in a high security
-> environment.
-> 

First of all, a Happy New Year!

Ok, agreed. But that environment (and protocol) would no longer be S/MIME
-- otherwise, the specter of non-interoperability would appear again.

-> >[snip, already requoted] 
-> > My point was that a protocol weakness cannot be solved by hardware, no
-> > matter how controlled.
-> Yes, but a lot of the things you were complaining about (e.g.
-> bad randomness for key generation,  programs which are willing
-> to give up the key, etc.) are (1) not protocol bugs but
-> implementation bugs and (2) can be fixed by trusted hardware.
-> 

We seem to be moving in circles here, because this is NOT what I had in
mind. The difference in understanding may have been caused by my own
poor explanation of the objection (out of a desire to be concise). My
objection was NOT to point out that S/MIME has protocol bugs; rather, I
sought to define its security limits so that we may consider S/MIME as
working within its design assumptions (for example, the fact that you
can't carry an elephant in your Porsche doesn't make the Porsche a bad
transportation vehicle, it just shows its design limits).

Going back to the original thread, this consideration led me to justify
the use of self-signed certs in S/MIME as consistent both with the need to
use self-signed CA root certs and with the desired security level in S/MIME.
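
Just to make "self-signed" concrete: such a cert names itself as its own
issuer and is verified with the very key it carries -- exactly as a CA
root cert is. A minimal sketch in Python (the "cryptography" package and
all the names below are my own illustration, nothing mandated by S/MIME):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example root")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)           # issuer == subject: self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow()
                         + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())  # signed with its own private key
    )

    # The only check available without outside trust: the signature is
    # consistent with the public key carried inside the cert itself.
    key.public_key().verify(
        cert.signature,
        cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        cert.signature_hash_algorithm,
    )
    print(cert.subject == cert.issuer)   # True

Nothing more than internal consistency can be checked without outside
trust -- which is precisely why the acceptable security level, and not
the cert format, is the real question.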

Thus, what I also observed was simply that you can't push S/MIME beyond
its specs, and that there would be no gain in pushing for stricter
procedures than needed. To exemplify, I provided examples of application
behaviors which are NOT defined in the S/MIME specs and which, depending
on the implementation, may cause problems in a higher-security
environment, even though the application is 100% S/MIME compliant and
perfectly acceptable in a lower-security environment.

Therefore, these security concerns cannot be considered implementation
bugs of a spec -- because the specs are silent on those points. Rather,
they reflect implementation freedom allowed by the specs. Of course, if
you get an out-of-the-box S/MIME application you may rightly expect it to
conform to the S/MIME standard -- targeting a pre-defined security level
for which the concerns I exemplified simply do not exist.

Now, if we agree that the problems I exemplified (out of a larger list)
are not on the S/MIME wish-list, then we must also agree that they are
neither protocol bugs nor implementation bugs. They are just features that
someone may need but will NOT find in S/MIME -- by design.

To further clarify the issue (and following your numbers above):

(1) No, the examples I supplied reflect neither protocol bugs nor
implementation bugs in S/MIME, because they belong to a security class
beyond the S/MIME specs. In such a higher-security reference frame, they
may be called protocol weaknesses in S/MIME and just reflect the fact that
you are using a tool beyond its design assumptions. This does not impair
the use of S/MIME for the bulk of Internet e-mail messages, for which it
is obviously designed. 

(2) Such protocol weaknesses cannot be solved by a "band-aid" type of
solution -- some piece of hardware that carries some amount of trust --
because a system is usually only as strong as its weakest part. In the
open environment of the Internet, it is very risky to assume that a
hardware limitation will make up for a protocol weakness.
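
A toy calculation may help to see the shape of this argument (the
numbers are invented and the failures are assumed independent -- this is
only an illustration in Python, not real figures):

    def p_system_compromise(p_components):
        """P(at least one component fails), assuming independence."""
        p_all_hold = 1.0
        for p in p_components:
            p_all_hold *= (1.0 - p)
        return 1.0 - p_all_hold

    software_key_storage = 1e-3   # hypothetical
    hardware_key_storage = 1e-6   # hypothetical "trusted hardware"
    protocol_weakness    = 1e-1   # hypothetical gap left open by the spec

    print(p_system_compromise([software_key_storage, protocol_weakness]))
    # ~0.101
    print(p_system_compromise([hardware_key_storage, protocol_weakness]))
    # ~0.100

Improving the key storage by three orders of magnitude hardly changes
the total, because the protocol-level gap still dominates.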

-> > -> That said, the issue of trapdoors is a red herring. Ultimately
-> > -> you have to trust your vendor not to intentionally compromise your
-> > -> security--or do it all yourself. Read "Reflections On Trusting Trust"
-> > -> for an example that makes this point quite clearly.
-> > 
-> > First, "not intentionally compromised" is difficult to disprove,
-> > especially when you bear the burden of proof in a tamper-proof device, no?
-> > (Further, as you may remember from the case of Stac vs Microsoft, Stac
-> > proved by reverse assembly of MS's code that DoubleSpace used substantial
-> > code from Stac, but Stac was also found guilty of "invading" MS's code.)
-> >
-> > Second, as above, no vendors or third-parties can be trusted in today's
-> > competitive environment-- not even Volkswagen vis-a-vis General Motors. 
-> > What the international Internet community needs is not some immaterial
-> > trust on a foreign government or company or, even, on a domestic
-> > government or company -- but open knowledge and fail-safe procedures. 
-> > After all, if trust would be such a safe "blank check" then the whole
-> > issue of key-escrow, TTP legislation and CAs would be moot. 
-> I think you're missing the point here. The vast majority of users
-> of security critical software are using software supplied by
-> others, even if those others are in-house. If those suppliers can't
-> be trusted not to intentionally compromise your security, there
-> are very few measures that you can take to really protect yourself.
-> 
-> > BTW, "Trusting Trust" reflects the wrong assumption that trust is
-> > transitive -- which it is not.
->
-> From this, I infer that you haven't actually read "Reflections on
-> Trusting Trust", but are just guessing what it's about based
-> on the title. Am I right?
->

No ;-)

I read the article some time ago and enjoyed it, because even though I
entirely agree with its line of thought, I entirely disagree with its main
conclusion: "You can't trust code that you did not totally create
yourself." (Discussing this here would certainly be off-topic, but I would
gladly explain my reasoning privately.)
 
-> Certainly your comments don't speak at all to the point that the
-> paper is trying to make. (Hint: It's not about cryptography at
-> all.)
-> 

One of the reasons I enjoyed the paper is its title, because the title is
an anti-climax to the paper! 

However, when you referred to the paper (for the curious:
http://www1.acm.org:81/classics/sep95/) you apparently did so to support a
take-it-or-leave-it attitude: either you trust the vendor or you do it all
by yourself. The first being undesirable and risky (e.g., MS vs. Stac, GM
vs. VW) and the second obviously impossible, we would have to accept the
first option. Such was your implicit conclusion.

My comment was that indeed the title is wrong (as Ken himself notes in
the paper), but that trust can also be modelled as "soft-trust" -- which
offers a way out of this paradox when one allows for multiple independent
trust channels.
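
To sketch why independent channels help (again with invented numbers, in
Python, and with an independence assumption -- which is exactly what
"independent" must buy you):

    def p_deception(p_corrupt_channel, k_channels):
        """P(all k channels are simultaneously corrupt), independence assumed."""
        return p_corrupt_channel ** k_channels

    for k in (1, 2, 3):
        print(k, p_deception(0.05, k))   # 0.05, 0.0025, 0.000125

With one channel you must trust it outright; with three, a deception
requires all three channels to be corrupt at once, and the residual risk
falls geometrically.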

Cheers,

Ed

______________________________________________________________________
Dr.rer.nat. E. Gerck                     
egerck@novaware.cps.softex.br
http://novaware.cps.softex.br

