On 18/03/2015 23:38, Christoph Anton Mitterer wrote:
On Fri, 2015-03-13 at 17:41 -0700, David Leon Gil wrote:
A0. Be as secure as possible by default. Do not offer options to
fallback to doing unsafe things. "Experienced" users often think they
want them; there are usually better solutions for their use-cases.
Yes and no.
Looking at the context you come from (Yahoo) I must note that the big
players seem to have discovered security only recently ... o.O
Hmmm... so our technique is to punish people for wanting to improve.
And part of their marketing strategy seems to be "security must be
easy" (i.e. a totally unaware person should be able to be "secure").
This is basically the same thing that some people around the heise Verlag
seem to have been campaigning for recently.
New to some, but it has a long history: Kerckhoffs' 6th principle:
"6. Finally, it is necessary, given the circumstances that command its
application, that the system be easy to use, requiring neither mental
strain nor the knowledge of a long series of rules to observe."
He was writing about cryptographic communications systems in 1883.
While this sounds like a great goal it's completely unrealistic.
Someone who doesn't at least know some basic concepts will never be
secure, because he'll fall for any social engineering, and he'll follow
the first mail telling him to just fetch the "unknown key" from website X
or keyserver Y.
Many of my less crypto-aware friends nowadays use things like
TextSecure/etc. on their mobiles, believing they're secure.
Well, first it's Android (so, failed), and second, none of them knows the
basic principle that one *is not* secure unless some form of mutual
authentication has taken place via some secure path (i.e. not
first-come-is-trusted, like key pinning).
I think this is simply wrong. This is no principle. TOFU has proved
itself. If you don't want to use it, that's fine, but the notion that
it's "not secure" is simply rubbish, because security is a matter of
relative risk, not an absolute thing, and in almost all cases it is
extremely unlikely that anyone is going to watch and dive in like that.
Security is statistical; it is risk.
Empirically, I just today got my first 419 hit on Skype, which was set
up to let others see my name - their mistake on default install. After
about a decade of usage?
Even including the notion that Microsoft now copies it all to the NSA,
that's still only 4 parties with capability to connect to me: Me, my
friends, Microsoft and NSA.
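The trust-on-first-use model defended above can be sketched in a few
lines. This is a toy illustration only, not any particular
implementation; the class and method names are made up:

```python
import hashlib

class TofuStore:
    """Toy trust-on-first-use store: pin a peer's key fingerprint on
    first contact, and flag any later change as a potential MITM."""

    def __init__(self):
        self.pins = {}  # peer id -> pinned key fingerprint

    def fingerprint(self, public_key_bytes):
        return hashlib.sha256(public_key_bytes).hexdigest()

    def check(self, peer, public_key_bytes):
        fp = self.fingerprint(public_key_bytes)
        pinned = self.pins.get(peer)
        if pinned is None:
            # First contact: trust and pin, as TOFU does.
            self.pins[peer] = fp
            return "pinned"
        return "ok" if pinned == fp else "KEY CHANGED - possible MITM"
```

The point of the statistical-risk argument is visible in the last line:
the attacker must be present on the very first contact *and* on every
later one, or the key change is flagged.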
To be honest, Yahoo doesn't have the best security record, and in
general I wouldn't trust any web-based crypto app regardless of who it
comes from.
That being said, I agree that it shouldn't be easy to make a
well-designed crypto system insecure via settings - but not if this means
taking away valid functionality from the more experienced users.
Why do you prioritise "experienced users" above the lesser experienced?
Do they pay more money? Is this the Church Of BOFHs? Do they pay
their dues by helping others? From your writings it seems unlikely...
And definitely not for marketing reasons.
No such group exists in most places. In SSL there is a myth that
sysadmins can understand how to configure the Apache config files, so
consequently choice is good. None do, to a very high statistical
confidence - most copy files back and forth, and a few read guides like
bettercrypto.org. And I can tell you that the BetterCrypto guys are
always having arguments about what is right, best, etc. If the <1%
can't agree, the 99% is screwed.
A1. Only specify a single asymmetric encryption scheme and a single
asymmetric signature scheme. This is critical: This is the second most
dangerous and buggy code in any crypto scheme.
a) What's the problem with the asymmetric scheme as we have it now?
It's all old stuff. We've moved on.
b) Why should there be only one?
I think it's a wrong assumption that code will become more secure by
supporting fewer algos/systems.
If a project cannot develop/maintain more of them securely, that's rather
a sign of a lack of funding/manpower.
Ah. So you see that handling more algorithms is a cost for big
companies to meet, and therefore a barrier to entry which makes for less
choice and eventual balkanisation. What about the opportunity cost?
Time spent managing and debugging algorithms that are rarely if ever
used could be better spent on improving usability.
The past has shown that sooner or later every algorithm (except for OTP,
of course ;) ) has its flaws and needs to be replaced.
Quite often recently, people have waited far too long before that
replacement started (just look at the issues in SSL/TLS).
Fully agreed. Now, here?
Since the experience has shown that standardisation of something new
takes quite a while (see the discussions about Curve25519 at the CFRG) I
would feel much better if a cryptosystem has several algorithms
(ciphers, signature schemes, hash algos, etc.) ready in place.
Implementations can still strongly suggest only a small subset to be
actually used - but *if* some problem is found in these, it wouldn't
take ages to get rid of them (like RC4, basically all the old CBC and
non-EtM algos in SSH, TLS,... hell we still have systems in the wild
which use MD5 for security purposes)
That argument didn't work. Basically, whenever a problem was found,
nine times out of ten it couldn't be solved by an algorithm switch. The
one time a problem *was* solved that way, it was solved by a switch
*backwards* to RC4. Not exactly a happy result.
Secondly, *there is no plan to switch*. There is a switch, but no plan,
no methodology, no siren, no alert.
Algorithm agility is all standards body sophistry, not real life. It's
another paper invention thrown over the wall, and when it's actually
needed, it doesn't deliver.
A2. Clearly separate handling of various message and key metadata from
the underlying encryption scheme. This is critical: Parsing code is
the most dangerous and buggy code in any crypto scheme.
It's a bit unclear to me, what exactly you mean here.
A3. Do not specify things which are infeasible to thoroughly test.
This implies avoiding combinatorial complexity, which the OpenPGPv4
As above, I doubt you can really check every combination, and I wouldn't
want to sacrifice too much, just for being able to do so - especially
not diversity of algorithms.
When has diversity of algorithms ever bought user advantage?
I can think of (been told of) one case: the "Russian GOST" requirements.
Frankly, I'm not that keen to let them do that. If *every* government
did it, then we'd be in a pickle, and we'd not be talking to any of them
at all. So why do we care about the Russians?
Why are we actually preparing a perfect argument for the USA Congress to
turn around and mandate a weak USA key suite?
A4. All messages, including signed but unencrypted messages, should be
indistinguishable from random to an adversary who does not know the
public key of the signer. (This is, essentially, a Tor-style security
By "messages" you mean "OpenPGP Messages", i.e. also the keys?
B1. Only modern hash functions that provide a significant security
margin against cryptanalysis. Let's not repeat the MD-5/SHA-1
disaster. (We only need two hash functions at most.)
Disagreeing with the "We only need two hash functions at most.".
Diversity is good (of course we should only include secure algos),
especially when one expects that each algo sooner or later sees
Well, sure, on paper. But if you had a process to switch, then you could
also ... use that process to switch! Why not just roll out v+1?
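The "roll out v+1" alternative to in-protocol agility can be sketched
like this: a single version number selects one complete, fixed suite,
and an unknown version is an error rather than an invitation to
negotiate. The version numbers and suite contents below are
illustrative only, not taken from any spec:

```python
# Hypothetical versioned-suite table: each protocol version fixes the
# complete suite, so "switching algorithms" means shipping version n+1,
# not negotiating options inside version n.
SUITES = {
    4: {"cipher": "AES-128-CFB", "hash": "SHA-1"},        # legacy
    5: {"cipher": "ChaCha20-Poly1305", "hash": "SHA-256"},
}

def suite_for(version):
    try:
        return SUITES[version]
    except KeyError:
        # No fallback, no downgrade path: unknown version = hard error.
        raise ValueError("unsupported version %d" % version)
```

The design choice is that the only "knob" an attacker (or a confused
implementer) can turn is the version, and turning it down simply fails.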
B2. Only block and stream ciphers that offer a significant margin of
safety against cryptanalysis. (Or that, when composed, offer a
significant margin of safety against cryptanalysis.)
B3. A single AEAD mode that is maximally misuse-resistant, in the
sense of https://eprint.iacr.org/2014/793. (But probably not AEZ, or
any other CAESAR competitor for that matter. I would prefer a scheme
that uses generic composition of well-studied primitives.)
Here's my take on this. Pick the most experienced guy in the room, tell
him to come back next month with a recommendation. Done.
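For what "generic composition of well-studied primitives" means in
shape, here is a deliberately toy encrypt-then-MAC sketch built only
from stdlib HMAC-SHA256. The keystream construction is *not* a real
cipher and none of this is a usable AEAD - it only shows the
composition pattern (encrypt, then MAC over nonce, associated data and
ciphertext; verify the tag in constant time before decrypting):

```python
import hmac, hashlib

def _keystream(key, nonce, length):
    # Toy keystream: HMAC-SHA256 in counter mode. Illustrative only --
    # a real design would use an established stream cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, plaintext, ad=b""):
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # Encrypt-then-MAC: tag covers nonce, associated data, ciphertext.
    tag = hmac.new(mac_key, nonce + ad + ct, hashlib.sha256).digest()
    return ct + tag

def open_(enc_key, mac_key, nonce, sealed, ad=b""):
    ct, tag = sealed[:-32], sealed[-32:]
    expect = hmac.new(mac_key, nonce + ad + ct, hashlib.sha256).digest()
    # Constant-time comparison; reject before touching the plaintext.
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in
                 zip(ct, _keystream(enc_key, nonce, len(ct))))
```

The appeal of this pattern over a novel CAESAR-style mode is exactly
the point made above: each piece is separately well studied, and the
composition proof is standard.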
openpgp mailing list