ietf-822

Re: Draft for signed headers

1999-03-19 08:36:26
On Fri, Mar 19, 1999 at 12:33:09PM +0100, Brad Knowles wrote:
On Thu, Mar 18, 1999, Brad Templeton <brad@templetons.com> wrote:

    The pgpverify scheme works now, and I submit that something similar
would continue to work just fine.  You could extend it by allowing new
keys to be distributed automatically via in-band communications channels,
and if they're properly signed by the old keys they could automatically
be added to the key ring.
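
(A rough sketch of the kind of in-band key rollover described above -- the
message fields and helper names here are invented for illustration, not
anything pgpverify actually defines: a "newkey" message only joins the ring
if a key already on the ring vouches for it.)

    # Sketch only: accept a new key onto the key ring only if the
    # message carrying it is signed by a key we already trust.
    key_ring = {"old-key-id": "old-public-key-material"}

    def check_signature(payload, signature, public_key):
        """Placeholder for a real PGP signature check."""
        return signature == "signed(%s,%s)" % (payload, public_key)

    def handle_newkey_message(msg):
        signer = msg["signed-by"]
        if signer not in key_ring:
            return False                                 # unknown signer: refuse
        if not check_signature(msg["new-key"], msg["signature"],
                               key_ring[signer]):
            return False                                 # bad signature: refuse
        key_ring[msg["new-key-id"]] = msg["new-key"]     # rollover accepted
        return True

    msg = {"signed-by": "old-key-id",
           "new-key-id": "new-key-id",
           "new-key": "new-public-key-material",
           "signature": "signed(new-public-key-material,old-public-key-material)"}
    print(handle_newkey_message(msg))                    # True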

If the only goal is to verify newgroup messages and the like, sure.

However, I presume we seek a means to do more than that: to verify not
just these but all control messages, to verify all postings in moderated
newsgroups, and eventually to offer the ability to verify any posting, to
prevent forgery.

If all we want is to verify newgroup messages, just about any scheme can
work, even one using PGP.


    I sincerely doubt that anyone is ever going to expect transit servers
to validate each and every message.  I certainly don't want to run
pgpverify (or something even heavier) on every single one of the 500k+
messages our Diablo server processes on a daily basis.  I don't mind
running something like pgpverify on control messages that
create/modify/delete newsgroups, since those operations are relatively
few and far between.

In a properly designed system, that is exactly what I expect.  The CPU
load is entirely manageable today and will only get lower as CPUs
grow faster.  Even if it weren't, there are ways to deal with the problem.
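
To put rough numbers on that (back of the envelope only -- the per-check
CPU costs below are assumptions, not measurements of pgpverify or of any
particular server):

    msgs_per_day = 500000
    rate = msgs_per_day / 86400.0            # roughly 5.8 messages per second
    for cost_ms in (10, 50):                 # assumed CPU cost per signature check
        busy = rate * cost_ms / 1000.0       # fraction of one CPU kept busy
        print("%d ms/check -> %.0f%% of one CPU" % (cost_ms, busy * 100))
    # 10 ms/check ->  6% of one CPU
    # 50 ms/check -> 29% of one CPU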

It is important for transports to verify, and not consumers, because:
        a) You want to stop forgeries dead, as a favour to the sites that
           are not yet verifying messages.  Once major hubs verify, forgeries
           will have trouble getting very far.

        b) It saves a lot of complexity and time for clients.

        c) Because anybody who does verification also must track the
           key revocation streams and maintain a key revocation database,
           it's simply out of the question for things like dial-up clients
           to do verification.

        d) Because you want bad messages kept out of your overview, you want
           them verified as they are processed and the results recorded in
           the overview database (see the sketch after this list).
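
To make (c) and (d) concrete, here is a minimal sketch of the kind of check
I mean at the server.  The header names, the in-memory revocation set, and
verify_signature() are placeholders for a real signature check and a real
revocation database, not anything that exists today:

    # Sketch only: consult the revocation database before a verified
    # article is allowed into the overview.
    REVOKED_KEYS = {"0xDEADBEEF"}            # fed by the key revocation stream

    def verify_signature(article):
        """Placeholder: return (key_id, ok) for the article's signed headers."""
        headers = article["headers"]
        return headers.get("X-Signature-Key", ""), bool(headers.get("X-Signature"))

    def accept_article(article, overview):
        key_id, ok = verify_signature(article)
        if not ok or key_id in REVOKED_KEYS:
            return False                     # forgery or revoked key: never reaches overview
        headers = article["headers"]
        overview.append((headers["Message-ID"], headers["Subject"]))
        return True

    overview = []
    article = {"headers": {"Message-ID": "<1@example>", "Subject": "test",
                           "X-Signature": "...", "X-Signature-Key": "0xCAFEF00D"}}
    print(accept_article(article, overview)) # True: verified, key not revoked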

This does mean that the news server can do it and not the relay, but they
are very commonly the same machine.  And as noted, there are reasons for
relays to do it.

This is no big whoop.  People are planning to have even *routers* verify
the security of packets in some applications.

    These machines are widely enough distributed, and should have enough
extra CPU cycles to perform this function, that we don't need to overload
the central servers by asking them to look that intently at every single
message.  It's enough that the newsreader does it and that the injecting
agent checks it.

The newsreader can't do it.   It can't have access to the revocation lists,
and it doesn't help me to see an article in the menu, then download it and
have my reader say, "Sorry to waste your time, that was a forgery!"


    Any news server along the path is certainly welcome to validate the
posting if they so choose (and this fact should discourage attempts to
forge signed postings), but they should not be required to validate each
and every message that passes through them.

Nobody is required to validate, of course.  But if validation is done,
the servers are the place to do it.

Indeed, by having key hub sites validate, most other sites can goof off
about the matter, or use the digesting validation scheme I outlined.