Brian Peterson writes:
The paper says:
"We assume in this work that each user legitimately has exactly one,
unique true (or valid) identity. An identity which does not belong to a
real user is a false identity. We further assume each user can have, or be
associated with, one or more public keys."
This is all fine. I think that the logical fallacy in the paper is
assuming that signing the same public key with two of your signing keys is
a malicious act, and should lower the trustworthiness of all your
I don't think they assume this, at least that is not how I understood
their model. You can have multiple public keys but you are supposed
to make it clear that they all belong to the same person. The main new
result in the paper is how to analyze the web of trust in that situation.
Here is an example to show why this can be complicated. Suppose Charlie
has two keys, an RSA and a DSA key. Alice certifies Charlie's RSA key
and Bob certifies Charlie's DSA key. Now Charlie signs Doris's key with
his RSA key and Ellen's key with his DSA key. In some respects the sigs
on Doris and Ellen have a common failure mode, namely that Charlie is
bad; but in other respects they have independent failure modes, namely
that either Alice or Bob was bad and made up a fake Charlie key.
Dealing with these complexities in general is, according to the paper,
NP-complete. They provide some heuristics. However, I think a better
solution is to require users with multiple keys to fully cross certify
them, each key directly or indirectly signing all the others. Then you
can assume that either all the keys belong to that user, or none of
them do. This allows you to collapse the WoT graph by treating all the
keys belonging to a user as just one "virtual" key.
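To make the collapse concrete, here is a small sketch. It is not code from the paper; the keys, owners, and certifications are invented, and I assume the ownership map is trustworthy because each user's keys fully cross-certify each other as proposed above:

```python
def collapse_keys(certs, key_owner):
    """Collapse a web-of-trust graph so all keys belonging to one user
    become a single "virtual" node.

    certs:     set of (signer_key, signed_key) pairs.
    key_owner: maps each key to its owner (assumed reliable because the
               owner's keys fully cross-certify each other).
    Returns certification edges between per-user virtual nodes."""
    collapsed = set()
    for signer, signed in certs:
        a, b = key_owner[signer], key_owner[signed]
        if a != b:  # drop a user's own cross-certifications
            collapsed.add((a, b))
    return collapsed

# The Charlie example from above: two Charlie keys collapse to one node.
key_owner = {"alice_key": "alice", "bob_key": "bob",
             "charlie_rsa": "charlie", "charlie_dsa": "charlie",
             "doris_key": "doris", "ellen_key": "ellen"}
certs = {("alice_key", "charlie_rsa"), ("bob_key", "charlie_dsa"),
         ("charlie_rsa", "doris_key"), ("charlie_dsa", "ellen_key"),
         ("charlie_rsa", "charlie_dsa"), ("charlie_dsa", "charlie_rsa")}
print(sorted(collapse_keys(certs, key_owner)))
# → [('alice', 'charlie'), ('bob', 'charlie'),
#    ('charlie', 'doris'), ('charlie', 'ellen')]
```

After the collapse, Alice's and Bob's certifications both point at the single node "charlie", so the common failure mode is explicit in the graph.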
The main problem I see with their model is the assumption that you can
put a limit on the number of colluding bad users. If you assume that
there are only n bad users, then if you can find n+1 separate validation
paths to a given key, you can conclude that it is good (factoring in
the multiple-keys-per-user issue).
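To illustrate the acceptance rule (this is my own sketch, not the paper's algorithm): find node-disjoint certification paths from your own key to the target, and accept if there are more than n of them, since n colluding bad users cannot sit on n+1 disjoint paths. The greedy path removal below only gives a lower bound on the true number of disjoint paths, which is fine for a conservative check; the graph is invented:

```python
from collections import deque

def disjoint_paths(graph, src, dst):
    """Greedily count node-disjoint paths from src to dst.
    graph: dict mapping a key to the keys it has certified.
    This is a lower bound, not an exact maximum."""
    graph = {u: set(vs) for u, vs in graph.items()}  # private copy
    count = 0
    while True:
        # BFS for any remaining path from src to dst
        parent = {src: None}
        queue = deque([src])
        while queue and dst not in parent:
            u = queue.popleft()
            for v in graph.get(u, ()):
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        if dst not in parent:
            return count
        count += 1
        # delete the path's interior nodes so later paths are disjoint
        v = parent[dst]
        while v != src:
            graph.pop(v, None)
            v = parent[v]

def accept(graph, me, target, n):
    """Accept target as valid if even n colluding bad users could not
    account for all the disjoint certification paths to it."""
    return disjoint_paths(graph, me, target) >= n + 1

# Two disjoint paths to charlie, via alice and via bob:
graph = {"me": ["alice", "bob"], "alice": ["charlie"], "bob": ["charlie"]}
print(disjoint_paths(graph, "me", "charlie"))  # → 2
print(accept(graph, "me", "charlie", 1))       # → True  (survives n=1)
print(accept(graph, "me", "charlie", 2))       # → False (not for n=2)
```

The last line is exactly the effect in their table: raising n from 1 to 2 disqualifies every key that lacks a third independent path.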
The two problems with this requirement are, first, that it is hard to
come up with a reasonable n value without making it very large; and
second, that their algorithm's performance appears to deteriorate quickly
with increasing n. Their Table 1 was based on a PGP keyring, and as
n went from 1 to 2, the number of valid keys fell from 9992 to 77!
The lesson is that in practice there just aren't that many parallel,
independent certification paths.
Now, having said that, I must admit that all other trust models that I
am aware of have weaknesses of their own. It seems that you can't have
both security against targeted attacks, and also a trust model which
makes a large percentage of valid keys be known to be valid.
As for the implications for the OpenPGP trust specification, I think
it would be reasonable to specify that multiple signatures from
different keys by the same partially trusted individual (identified by
email address or possibly name) would be counted as only one valid
partially trusted signature for the purposes of calculating trust.
Yes, I think that is a reasonable heuristic as part of the trust model.
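For what it's worth, the heuristic is trivial to state in code. A made-up sketch, assuming signatures carry the signer's email address as the identifying attribute:

```python
def count_partial_sigs(signatures):
    """Count partially trusted signatures, at most one per individual.
    signatures: iterable of (signer_email, signer_key) pairs.
    Two keys with the same email count once."""
    return len({email for email, _key in signatures})

# Hypothetical example: carol signs with two different keys.
sigs = [("carol@example.org", "keyA"),
        ("carol@example.org", "keyB"),  # second key, same individual
        ("dave@example.org", "keyC")]
print(count_partial_sigs(sigs))  # → 2, not 3
```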