ietf-822

Re: Draft for signed headers

1999-03-19 06:42:20
On Thu, Mar 18, 1999, Brad Templeton <brad@templetons.com> wrote:

It is wrong to expect that there is only going to be one certificate
space, and that's what it would take to be 'gateway-proof', unless you
refer to gateways that move out of one certificate space into another
and back to the original.

    The other solution is to make gateways ignorant of certificates and
certificate spaces (they just see them as more article body content that
they don't do anything with), and to ensure that, when certificate
spaces are defined, they are as transparent as possible to any known
current or likely future gateway systems.

The whole idea of certificates is you *don't* distribute them.  You don't
fetch them from servers.  USENET can't operate as I know it today
if you need to go off to remote servers to process a message.

    The pgpverify scheme works now, and I submit that something similar
would continue to work just fine.  You could extend it by allowing new
keys to be distributed automatically via in-band communications channels,
and if they're properly signed by the old keys they could automatically
be added to the key ring.
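
    As a rough sketch of that rollover step -- the file names, keyring
name, and wrapper function here are hypothetical, while the gpg options
are standard -- something like this Python would do:

    import subprocess, sys

    def accept_new_key(key_file, sig_file, keyring="trusted.gpg"):
        # Verify the detached signature over the new key material,
        # using only keys already on the trusted ring.
        check = subprocess.run(
            ["gpg", "--no-default-keyring", "--keyring", keyring,
             "--verify", sig_file, key_file])
        if check.returncode != 0:
            sys.exit("new key is not signed by a key we already hold")
        # The old keys vouch for the new one: add it to the ring.
        subprocess.run(
            ["gpg", "--no-default-keyring", "--keyring", keyring,
             "--import", key_file], check=True)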

    You could also extend it by referencing a central key server (or by
making keys available via standardized methods on your favourite web
server): if the validating agent wants to validate a message signed with
a key it does not hold locally, it can go retrieve the key
interactively.  For systems that don't have direct network access for
retrieving keys, there would need to be a mechanism whereby they could
ask a gateway to perform this function for them and forward the key via
in-band or out-of-band communications methods.
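
    A sketch of that interactive retrieval, assuming a placeholder key
server name and that the needed key ID is already known (--keyserver
and --recv-keys are standard gpg options):

    import subprocess

    KEYSERVER = "keys.example.org"   # placeholder, not a real server

    def verify_with_fetch(sig_file, data_file, key_id):
        # First try against the local key ring.
        if subprocess.run(["gpg", "--verify",
                           sig_file, data_file]).returncode == 0:
            return True
        # Unknown key: retrieve it from the key server, then retry.
        subprocess.run(["gpg", "--keyserver", KEYSERVER,
                        "--recv-keys", key_id], check=True)
        return subprocess.run(["gpg", "--verify",
                               sig_file, data_file]).returncode == 0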

    You could also use the certificate system you describe.


    The entire point of allowing for unspecified key distribution
methods and certificate spaces is that each of the above works better or
worse for certain applications, and no one solution will necessarily be
the best in all cases.  Therefore we allow multiple ways to solve the
same problem, depending on the specific needs of the particular
application, and it is not within the charter of this group or this
document to formally and explicitly enumerate all the possibilities in
this area.

The problem is that you don't want people posting messages you can't
verify, and if you post a message, you want to be sure everybody can
verify it.

    I sincerely doubt that anyone is ever going to expect transit servers
to validate each and every message.  I certainly don't want to run
pgpverify (or something even heavier) on every single one of the 500k+
messages our Diablo server processes on a daily basis.  I don't mind
running something like pgpverify on control messages that
create/modify/delete newsgroups, since those operations are relatively
few and far between.
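
    The policy is just a filter on the Control: header; a minimal
sketch (the hook into the news server itself is omitted, and the
function name is mine):

    import email

    VERIFIED_CONTROLS = ("newgroup", "rmgroup", "checkgroups")

    def needs_verification(article_text):
        # Only group create/modify/delete operations get the heavy
        # pgpverify-style check; ordinary articles flow through
        # untouched.
        msg = email.message_from_string(article_text)
        ctl = msg.get("Control", "").split()
        return bool(ctl) and ctl[0] in VERIFIED_CONTROLS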


    IMO, both the signing and validation processes are best left to the
end nodes -- the computer that one uses to compose and transmit an
article, and the computer that one uses to retrieve and read an article.

    These machines are widely enough distributed, and should have enough
spare CPU cycles for this function, that we don't need to overload the
central servers by asking them to look that intently at every single
message.  It's enough that the newsreader validates an article and that
the injecting agent checks it.
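
    Concretely, the division of labour might look like this (a sketch
only; the function names are mine, while gpg --clearsign and --verify
are standard):

    import subprocess

    def sign_at_composer(article_path):
        # The posting agent clearsigns the article before handing it
        # to the injecting agent; gpg writes the result to <path>.asc.
        subprocess.run(["gpg", "--clearsign", article_path], check=True)
        return article_path + ".asc"

    def check_at_injector(signed_path):
        # One verification at the edge; transit servers just relay.
        return subprocess.run(
            ["gpg", "--verify", signed_path]).returncode == 0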


    Any news server along the path is certainly welcome to validate a
posting if it so chooses (and this fact should discourage attempts to
forge signed postings), but it should not be required to validate each
and every message that passes through it.

If you let people post a message using keys and certificates from the
E-mail certificate world, you are in effect saying all USENET sites have
to understand E-mail certificates.  Or those of any other world we will
allow in.

    The e-mail model of MTAs that transmit signed articles without
validating them is one that I think would be good to emulate, at least
to this degree.  I'm not entirely sure, however, that we would want the
one true way of signing Usenet postings to be the same heavy and verbose
X.509v3 protocol that they seem to be leaning towards.

    I maintain that one of the biggest problems on Usenet today is
volume, and X.509v3 would certainly be a major contributor to the volume
problem.

-- 
  These are my opinions -- not to be taken as official Skynet policy
 ____________________________________________________________________
|o| Brad Knowles, <blk@skynet.be>            Belgacom Skynet NV/SA |o|
|o| Systems Architect, News/mail/FTP Admin   Rue Col. Bourg, 124   |o|
|o| Phone/Fax: +32-2-706.11.11/12.49         B-1140 Brussels       |o|
|o| http://www.skynet.be                     Belgium               |o|
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
   Usenet is not the web.  Just because the web handles some things
 poorly is not a good reason to apply those same solutions to Usenet.