
Re: IESG meeting thoughts

2016-05-17 08:44:47
On Tue, May 17, 2016 at 5:57 AM, Stephen Farrell 
<stephen(_dot_)farrell(_at_)cs(_dot_)tcd(_dot_)ie>
wrote:


Hi Bert,

On 17/05/16 10:41, Bert Wijnen (IETF) wrote:
Jari, would you have some more info (maybe pointers) about

    The IESG also got an update on “Cryptowars” situation from Jeff
Schiller.

When I look up "cryptowars" on google, I get many many hits on a wide
variety of "cryptowars". So I'd like to know which specific ones
were discussed or cause us specific worries.

No pointers to the talk, sorry - Jeff was kind enough to speak without
notes or slides (which was great :-). He recounted the 1990s-era
history of crypto export controls, the issues covered in their "keys
under doormats" report [1], and some consideration of more recent
events such as the USG/Apple case and how similar issues might play
out in future. (My own not at all startling prediction: ongoing
tensions between law enforcement and all of us working to improve
security and privacy are, it seems, more or less inevitable.)

From my POV this was a great way to bring ADs who don't follow
the minutiae up to speed on these topics, but didn't set out any
particularly new direction for work in the IETF.

(Thanks go to Jeff for the talk and to Kathleen who organised it.)

Cheers,
S.

[1] http://115.69.40.134/files/editorials/files/MIT-CSAIL-TR-2015-026.pdf


One thing we did in the 1990s which I now regret as a mistake was that we
ended up developing security protocols where the first priority was to
defeat government intercept rather than to meet users' needs.

Unlawful intercept is of course a very serious problem, but it is not the
only problem our users face, nor is it necessarily the most serious
concern. Nor, for that matter, is targeted unlawful intercept the same
problem as mass surveillance.

One consequence of treating Louis Freeh's FBI as the principal threat was
that we ended up with a lot of security protocols that were perfectly
secure but that almost nobody deployed and even fewer used. End-to-end
security is only good security when people use it.

Right now I am working on technology that makes end-to-end security
practical and usable. Using off-the-shelf mail applications with the
Mathematical Mesh is actually easier than using them without it. But there are
some features I have added to meet real end-user needs that we would never
have considered in the 1990s. In particular, there is a key backup and
recovery option that is turned on by default.
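
To make the idea concrete, here is a minimal sketch of one way key backup
can work without a single point of trust, using Shamir secret sharing over
a prime field. This is an illustration only, not the Mesh's actual recovery
mechanism; the field prime, share count, and threshold are arbitrary
choices for the example.

# Key backup sketch: split a private key into shares so that any
# `threshold` of them can reconstruct it.  Illustration only.
import secrets

PRIME = 2**521 - 1  # a prime comfortably larger than a 256-bit key

def split_key(secret, shares, threshold):
    """Split `secret` into `shares` points; any `threshold` recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, shares + 1)]

def recover_key(points):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbits(256)            # stand-in for a real private key
shares = split_key(key, shares=5, threshold=3)
assert recover_key(shares[:3]) == key  # any three shares suffice

The point is simply that "recoverable" need not mean "held in full by any
one escrow party".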

Why do real users need key recovery? Well, without the ability to recover a
lost key, a protocol that encrypts stored data becomes worse than
ransomware. There isn't even the option of paying a criminal to get your
data back.

Another mistake we made was to consider end-to-end security to be the
be-all and end-all of security. I like end-to-end security. The Mesh is
designed as an untrusted cloud precisely because I consider end-to-end
security important. The Mesh cannot be breached because none of the data
stored in the Mesh is confidential. The only places where confidential data
is not encrypted are the endpoints.

But end-to-end security has serious costs. You can't have end-to-end
security without a key infrastructure, and until the Mesh there hasn't
really been a key infrastructure that was designed for use by non-expert
users. And end-to-end security isn't the 'best' security; it is only one
form of security. End-to-end doesn't provide protection against metadata or
traffic analysis. The Mesh has both end-to-end and transport layer security.
The most critical operations even have a third layer of encryption. And I
can give a security rationale for each.
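
As a toy illustration of why the layers are complementary rather than
redundant, here is a sketch of nesting an end-to-end layer inside a
transport layer. It assumes the Python `cryptography` package and uses
Fernet symmetric encryption purely for brevity; the key names and layer
split are invented for the example and are not the Mesh's actual protocol.

# Layered encryption sketch (pip install cryptography).
from cryptography.fernet import Fernet

e2e_key = Fernet.generate_key()        # shared only by the two endpoints
transport_key = Fernet.generate_key()  # shared with the transport peer (think TLS)

plaintext = b"confidential message body"

# Inner layer: end-to-end.  The cloud/service never sees this key.
inner = Fernet(e2e_key).encrypt(plaintext)

# Outer layer: transport.  A compromised relay holding the transport key
# still only ever sees the inner ciphertext, never the plaintext.
wire = Fernet(transport_key).encrypt(inner)

# The receiving endpoint peels the layers in reverse order.
assert Fernet(e2e_key).decrypt(Fernet(transport_key).decrypt(wire)) == plaintext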

Another critical security technology, one that we allowed ourselves to be
persuaded was 'evil', is trustworthy computing. As a result, the WebPKI
and code signing infrastructures use private keys that are stored on the
machine itself, in many cases in plaintext, or at best with security
through obscurity. But we have the technology that would allow us to bind
those private keys to servers in such a way that they can be used but not
extracted without physical access to the machine itself and a significant
degree of technical effort.
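
The interface that hardware key storage (a TPM, HSM, or secure element)
gives you is roughly the following: callers can ask for signatures and for
the public key, but there is no path that exports the private key. The
sketch below only models that API shape in software, using the Python
`cryptography` package; the class name is invented and the key here still
lives in ordinary memory, so it does not provide the actual hardware
binding.

# "Usable but not extractable" signing interface, modelled in software.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes, serialization

class BoundSigningKey:
    """Expose signing and the public key, never the private scalar."""
    def __init__(self):
        self._key = ec.generate_private_key(ec.SECP256R1())  # stays inside

    def public_key_pem(self) -> bytes:
        return self._key.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo)

    def sign(self, message: bytes) -> bytes:
        return self._key.sign(message, ec.ECDSA(hashes.SHA256()))
    # Deliberately no export method: callers can use the key, not read it.

signer = BoundSigningKey()
signature = signer.sign(b"TLS handshake transcript or code image")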

So let's make sure we don't make the same mistake this time round. Transport
layer security and key escrow have security liabilities as well as
benefits. Trustworthy computing is a tool that we can and should be using
to make our users secure.


What is popular and commonly agreed in computing isn't always the right
thing. Security is allowing our users to control risk, not defeating the
political objectives of Louis Freeh or the RIAA.