ietf-822

Re: checksums on open issues list

1991-11-11 14:47:01
        From:    Mark Crispin 
<mrc(_at_)marge(_dot_)cac(_dot_)washington(_dot_)edu>
        Date:    Fri, 8 Nov 1991 14:00:15 -0800 (PST)
        Subject: re: checksums on open issues list

        I believe that checksums on contents other than BASE64 are misguided
        and I fully intend to ignore them.  If you want reliable data, use
        BASE64.  The other encodings, *including* QUOTED-PRINTABLE, are not
        mechanisms for passing reliable data.

Mark, while your position certainly has some merit, it fails to point out
why.  Indeed, once you understand why, it becomes clear that your position
is just one of many possible.

Originally, I was pushing hard for checksums to be based only on a canonical
encoding of the data to be protected, viz. BASE64.  As a result of some
private communication, I have realized the error of my ways.  I believe Dave
Crocker put it best when he distinguished between an end-to-end service and
a hop-to-hop service.

In PEM, end-to-end is the service being provided.  PEM was careful to
enforce a canonical encoding on the data to be checksummed for two reasons.
First, "text" is the only content type being transferred.  Second,
heterogeneous environments were assumed.  In fact, it is critical that an
originator and a recipient be able to represent a message in precisely the
same syntax in order to verify the checksum, hence a canonical form.  BASE64
probably qualifies as a lowest common denominator in terms of representable
characters and therefore is an ideal candidate for a canonical form.
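To make the canonical-form argument concrete, here is a minimal sketch of the idea: both ends derive the same BASE64 text from the data, so a digest computed over that text matches regardless of local representation.  The choice of MD5 as the algorithm is purely illustrative and is not specified in this discussion.

```python
import base64
import hashlib

def canonical_mic(data: bytes) -> str:
    """Integrity check over the BASE64 canonical form of the data.

    Originator and recipient both derive the identical BASE64 text
    from the data, so the digest agrees even across heterogeneous
    environments.  (MD5 is an illustrative choice of algorithm.)
    """
    canonical = base64.b64encode(data)  # canonical encoding, identical everywhere
    return hashlib.md5(canonical).hexdigest()

# Originator and recipient compute the same value from the same data:
assert canonical_mic(b"hello") == canonical_mic(b"hello")
```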

My oversight (and I assert yours also) was in not realizing that XXXX
should not assume heterogeneous environments.  Indeed, there is no reason one
user should not be able to send data in a "native form" to some other user.
Of course, this assumes the "native form" makes sense to both, but even if
it does not, forcing a canonical encoding to verify the checksum is working
for nothing.  Of course, it may be necessary to use a canonical encoding to
get the "native form" past various "broken" gateways, but this is a
different problem.

I believe we are all in agreement that an end-to-end service is the most
desirable, and therefore I agree with Dave that the checksum should not be
part of the transfer encoding.  I am neutral with respect to whether it
should be part of the content type or in a separate header.

Another important point is that if the checksum is applied to the "native
form", as a message crosses an "aware" gateway, it will be possible to both
transform the message into a "new native form" and to recompute the
checksum, which I believe to be a valuable service.
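The gateway scenario above can be sketched as follows.  The transformation chosen here (LF to CRLF line endings) and the MD5 digest are both illustrative assumptions; the point is only that an "aware" gateway can rewrite the native form and then recompute the integrity value over the new form.

```python
import hashlib

def mic(native_form: bytes) -> str:
    # Integrity check computed over the native form (MD5 is illustrative).
    return hashlib.md5(native_form).hexdigest()

def aware_gateway(body: bytes) -> tuple[bytes, str]:
    """Transform a message into a 'new native form' and recompute its MIC.

    The LF-to-CRLF conversion stands in for whatever transformation a
    real gateway would perform.
    """
    new_body = body.replace(b"\n", b"\r\n")
    return new_body, mic(new_body)

original = b"line one\nline two\n"
transformed, new_mic = aware_gateway(original)

# The recipient verifies against the recomputed value, not the original one:
assert mic(transformed) == new_mic
```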

One final point that I have expressed to Ned and Neil privately.  "Checksum"
is the wrong term.  The service we are describing is a data integrity
service, more precisely a message integrity check or MIC.  A checksum is one
possible mechanism by which this service could be realized.  Another
mechanism is a hash algorithm.
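The distinction can be shown in a few lines: two different mechanisms, a checksum and a hash, each realizing the same integrity service.  CRC-32 and MD5 are illustrative stand-ins; neither is prescribed by the discussion above.

```python
import hashlib
import zlib

message = b"Hello, world."

# One possible mechanism: a checksum (CRC-32 via zlib, for illustration).
checksum = zlib.crc32(message)

# Another mechanism: a hash algorithm (MD5 here, also illustrative).
digest = hashlib.md5(message).hexdigest()

# Either value realizes the integrity service: the recipient recomputes
# it over the received data and compares against the transmitted value.
assert zlib.crc32(message) == checksum
assert hashlib.md5(message).hexdigest() == digest
```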

Jim
