ietf-openpgp

Re: Question and note

1998-06-29 02:09:27
On Mon, 29 Jun 1998, Uri Blumenthal wrote:

dontspam-tzeruch(_at_)ceddec(_dot_)com says:
But my self-sig can have X in it, and might be a self-sig using X.  So if
I create an El-Gamal signature using MD2, and send it to the public
keyservers, what happens?  I don't know if every keyserver will take 5.0
keys yet, nor whether every implementation will fail gracefully.

So...? If you want your sig to be universally accepted, you'll not
use MD2, and probably not PGP-5 yet. On the other hand, if you
need or want the features of PGP-5 (or MD2), you'll knowingly
sacrifice some accessibility...

Why are you wasting time proving that water is wet?

Because all these terms have no meaning to the end user, and probably not
to many of the implementors.  It bothers me that the specification is
cluttered with lots of algorithms that don't actually offer any benefit
perceivable to anyone but a cryptographer, and then only at the margin.
Bloat in the implementation is bad.  Bloat in the specification is worse.

I would rather delete every assigned algorithm number that isn't likely to
be implemented for public consumption than have them show up one per
implementation, so that no two implementations can talk except with
RSA/IDEA/MD5 or DH/DSA/SHA1/3DES.  My code will be much smaller if I
implement just those two subsets exactly, and then there will be no reason
to have any other algorithm, so why bother putting them into the OpenPGP
spec?  If we keep them as official, they should be limited in scope so
that it is possible to produce a full implementation.

This is happening with TLS: no one does TLS.  Everyone does SSL2 or SSL3,
complete with implementation bugs that need to be compensated for, so they
are rarely true SSL2 or SSL3.  So there are three "definitions" of the
protocol and lots of small implementation nits that must be coded around.

If there is an algorithm number assigned, it would be useful if there were
a specific reason to use it and a likelihood it would appear in many
implementations.  In this, I agree with PRZ that having extra algorithms
simply for the sake of having extra algorithms is not a good idea.

Going through this process should create more interoperability and fewer
implementation problems, not more.

The OpenPGP spec is still ambiguous on some points and contradictory in
others.  I think the trouble is minimal if, in packets specifically
designed to interoperate with foreign implementations and keyservers,
people avoid MAYs, since this is where most of the problems occur.

So I am given a choice between suggesting we hold off until the
ambiguities and contradictions are recursively resolved, or suggesting we
post a sign akin to "Here there be Tigers", i.e. don't go there without
good reason.

As is, it appears that what is still ambiguous will be excised.  But
"can't use" becomes necessary in the absence of strong language saying
"don't use".

SHOULD NOT implement is not the same as SHOULD NOT be used for
publication.  

Look: this "should not" idea of yours is crap. Accept it and get
back to doing something useful.

I was correcting a misunderstanding in the original reply.  I was not
saying that MAY algorithms SHOULD NOT be implemented (which is how it was
taken).  I was saying that MAY algorithms (as they currently exist) are
likely to create interoperability problems, and since the PKI isn't
addressed directly in the spec I wanted to head off some problems (or we
could put the thing on hold for another 3 months while we define exactly
what will go on with the keyservers).

Just as there is a distinction between a residential street and
a freeway,

The car I drive does not know that distinction. Neither should PGP.

So why not just forget PGP altogether and go with S/MIME?  Maybe you take
your Corvette on logging trails, and try to hit 1G cornering in your Jeep.
Cars are nonsentient, but they still obey the laws of physics.

If you admit that different vehicles might be designed for different local
conditions, why is it hard to believe that different algorithms or
implementation levels might be designed for different usages? 

I don't want to see keys with preferred algorithms of 100 on keyservers,
nor do I want to see any with MAY algorithms on the preferred list of any 
key submitted to a server.

.....there should be a distinction between algorithms used to
interoperate with non-PGP things (like Fortezza or X.509) or for an
experimental or specific local purpose, and those used to publish things
like keys, or for interoperability without foreknowledge.

MUST defines the minimum necessary for interoperability. SHOULD defines
"you might not have it if you really have a bloody good reason..."  MAY
defines things that you may have if you choose, but you're blameless if
you don't.

But are you culpable if you do [implement a MAY] and get it wrong?  It is
hard enough getting the MUST and SHOULDs right.  How many implementations
have you tested your version of OpenPGP against?  Did they all get
everything right?  How do you know yours is correct?  If it is difficult
to get the details right now, adding more of them won't make it easier.

I'm against it. As a matter of fact, I'd like to propose to ADD some
more, ones that the AES competition will "unearth". Surely we'll want
to AT THE VERY LEAST support the AES winner itself? And for the sake
of interoperability with Fortezza, might we not include SKIPJACK?
Just two examples...

Except you haven't really given any examples. 

Does "AES" mean anything to you? Some candidates (quite good, BTW)
were already published, and the rest will be published shortly. There
will be a winner, as I hope you understand. So, RC6 is not an
example worth your attention? Twofish? LOKI97? Or do I have
to waste my time spelling out every candidate, one of which
will become "the" AES? And of course you don't think any of
the above should be in the MAY category, let alone SHOULD?

So should we delete DES/SK and reassign it as "whatever AES is", or
reserve the next number as "whatever AES is"?

There is already a category for undefined algorithms.  I don't have my
copy of AC here, but I think some of these have multiple variants, or
might be adjusted from their current reference implementations before
being adopted, as DES's S-boxes were changed.  So if you define a number
for X now, and the AES turns out to be a modified X, do we assign a new,
different number?  And if we keep the number, what about messages already
created using the original X?

Is SKIPJACK not qualified to be an "example" of a new encryption algorithm 
that one MAY implement?

That is what the experimental numbers 100-110 are for.  You can implement
anything you want in those slots and I won't care.  If you want to assign
a low number to "SKIPJACK" and say nothing more, I do have objections.  If
you have a reference implementation for anything you want to add, please
give a URL, state whether it should be done in PGP-CFB, raw, or some other
variant, and propose an algorithm number.  Then we can start the
discussion.

Do you mean the raw algorithms, or the cfb-reset-after-10 that PGP uses?

Do you interoperate with raw algorithms, or with cfb-reset-after-10?
Isn't it bloody obvious that an algorithm definition must be specific
enough to be unambiguously implementable (and make sense at the same
time, so ECB mode isn't likely to cut it, for example)?

Do you really have this much free time on your hands to ask such questions?

Well, IDEA, 3DES, CAST, Blowfish, and SAFER/SK-128 are all specified
clearly in other documents, but PGP doesn't use them directly; it puts
them through the PGP-CFB code (and the spec does not say that they must go
through it), and the specification says nothing about any new algorithm.
This is an ambiguity in the spec.
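The difference between the two modes can be sketched in a toy model.  Everything below is my own illustration, not anything from the spec: the "block cipher" is a hash-based stand-in, and `resync_after=10` mimics PGP reloading the feedback register after the 8+2-byte random prefix of a 64-bit-block cipher.

```python
# Toy sketch of the ambiguity: generic CFB vs. a PGP-style variant that
# resynchronizes the feedback register after the 10-byte (8+2) prefix.
import hashlib

BLOCK = 8  # 64-bit block size, as with IDEA/CAST/Blowfish

def toy_block_encrypt(key, block):
    # Hash-based stand-in for a 64-bit block cipher (illustration only).
    return hashlib.sha1(bytes(key) + bytes(block)).digest()[:BLOCK]

def cfb(key, data, iv=b"\x00" * BLOCK, resync_after=None, decrypt=False):
    fr, out, i = iv, bytearray(), 0   # fr = feedback register
    while i < len(data):
        if resync_after is not None and i == resync_after:
            # PGP-style resync: reload fr from the last BLOCK ciphertext bytes.
            src = data if decrypt else out
            fr = bytes(src[i - BLOCK:i])
        ks = toy_block_encrypt(key, fr)       # keystream block
        end = min(i + BLOCK, len(data))
        if resync_after is not None and i < resync_after < end:
            end = resync_after                # don't straddle the resync point
        chunk = data[i:end]
        out += bytes(a ^ b for a, b in zip(chunk, ks))
        fr = chunk if decrypt else bytes(out[i:end])  # feedback = ciphertext
        i = end
    return bytes(out)
```

The two variants agree byte for byte up to the resync point and then diverge, which is exactly why a name alone ("Blowfish", "SKIPJACK") doesn't pin down an interoperable algorithm.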

You keep proposing to add algorithms by name alone.  I have code that can
implement any of them either raw, or via the PGP-CFB resync, or even by
other means.  I can do that now (and have done it with RC2 and RC4).  And
there are other modes.  Which do I use?  I have two switch statements, one
for PGP-CFB and one for raw.  For consistency, I should go through
PGP-CFB.  But to be absolutely true to a spec, I should go with raw.
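To make the two-switch-statement problem concrete, here is a hypothetical sketch; the mode assignments and the stand-in "ciphers" are mine, purely for illustration, and nothing here is taken from the spec:

```python
# Two hypothetical implementations bind the same algorithm ID to different
# modes, and the spec doesn't arbitrate.  The stand-in "ciphers" just tag
# their output so the incompatibility is visible.
def alg6_raw_cfb(data):
    return b"RAW-CFB:" + data      # straight CFB, per the cipher's own document

def alg6_pgp_cfb(data):
    return b"PGP-CFB:" + data      # routed through PGP's resync variant

IMPL_A = {6: alg6_raw_cfb}         # reads the algorithm's document literally
IMPL_B = {6: alg6_pgp_cfb}         # follows PGP's existing convention

def encrypt(impl, alg_id, data):
    # Dispatch on the algorithm ID; both tables are "conforming".
    return impl[alg_id](data)
```

Both readings are defensible, yet a message produced under one table cannot be decrypted under the other, even though both claim the same algorithm ID.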

Or are you suggesting that you will only get specific enough long after
the spec is out and everyone has a different idea of what is meant to be
implemented by the name and number?

What decides, whoever is first-to-implement?
  ^^^^ I hope it will be "who" that decides, but in your case...

Well, when I refer to an RFC or other fixed standard, it is a what and not
a who.

The implementor, s****d. In your implementation - you do. So you
must implement the MUST ones, you should implement the SHOULD
ones, and the rest is entirely up to you. What is there NOT
to understand?!

Now DO YOU get it??

So if you implement DES/SK as raw, using algorithm ID 6, it is my option
to implement a PGP-CFB DES/SK also using algorithm ID 6.

Lest you think this irrelevant, Blowfish ALREADY RAN INTO THIS PROBLEM.  I
implemented it with a 128-bit key and 16 rounds in CFB mode, but another
implementor was first and used a 192-bit key, as I remember.  Mine won,
and only because I got the specification clarified first, not for any
technical reason.

I already have many more algorithms I can implement instantly since they
are already contained within SSLeay.  I have avoided suggesting them
because I don't want the number of algorithms to grow.

There are EXPERIMENTAL algorithm IDs and there are MAY algorithm IDs.  Not
all the MAY algorithms are specified in enough detail for anyone to
implement them unambiguously (even those that exist as code somewhere).

Instead you are saying it is up to each individual implementor to use his
own judgement or that there should be some free-for-all to decide which
variant of each MAY algorithm will win.

No it is not. The natural selection will "wither" some of the defined
algorithms and make others "blossom". But it is absolutely not up to
you to guesstimate which ones will "live" and which ones will "die".

I don't have to guesstimate.  All I have to do is implement first and get
the widest usage.  For a new algorithm I might have the first working
implementation, and then *my* implementation will be copied - modes and
other details and all (because I can verify messages and produce a test
suite).  So I might even kill an algorithm by doing a bad implementation
that gets widely adopted but rarely used.

If you are going to let every implementation use whatever combination it
wants, the only way of limiting it is to remove algorithms.

Pure unadulterated bullshit. Since when did we enter a "single-Party"
system... Interoperability is ensured by (a) defining the common
set of algorithms that is always present and (b) describing all
the algorithms correctly and unambiguously.

The "single-Party" is the Open PGP specification itself.

(a) fails because this spec won't interoperate with earlier versions of
PGP at this level, almost inviting one or more of the PGP-2.6.x-plus
implementations to form another branch.  There are good reasons not to
have the 2.6.x things as a MUST, but if OpenPGP turns into bloatware at
the specification stage, and a lot of MAYs are shifted to SHOULDs in the
next draft, it invites going back to a simpler starting point.

(b) is even worse, since you can't describe what doesn't yet exist, and
much of what is left is still ambiguous.  The only reason HAVAL is the
5-pass, 160-bit variant is that I actually went out and tried to implement
it and needed it to be more specific.  There are many more things like
this where I simply implemented the way I thought best.

How many of the corrections and clarifications in the specification have
you done?  You seem to see it as a completed mathematical proof.  I see it
as something with contradictions and assumptions (though far fewer than
many others have), and not a single existing correct implementation.

A second algorithm protects against a point failure: 3DES plus CAST is
better than either alone.  A list of 12 is much worse, unless you measure
code's value by the KLOC.  Cryptography is stronger when it is simpler.
More algorithms complicate everything.

X.509 has variants, and PKCS#12 has more.  And they don't interoperate
(MSIE rejecting some certs, Netscape rejecting others, neither really
conforming at some points).  We already have one cryptographic
implementation nightmare.  Why are you advocating that PGP become as bad,
if not worse?

Now may we all return to solving the real problems at hand?  Please?

Like having an available reference implementation of each algorithm you
are suggesting and maybe a sample PGP message using it?

I have done the implementation, and I have run into the problems.  I take
the time to ask questions you now think silly because it will take a lot
more time later to resolve why one "OpenPGP" can't decrypt the messages
from another "OpenPGP" when neither does anything contradicted by the
spec.

--- reply to tzeruch - at - ceddec - dot - com ---

