ietf-822

Re: Different approach to defining encodings

1991-11-07 14:59:44

From: Nathaniel Borenstein <nsb@thumper.bellcore.com>

Personally, I think that this sort of approach to encodings is a big
mistake.  First of all, it vastly increases the set of transformations a
UA may have to understand in order to just restore data to its "native"
state.  By opening up wider sets of transformations, it opens up much
wider problems of transformation definition (which uuencode?  which
compress?) and potential interoperability problems.  Second, it
inflicts a UNIX paradigm (pipes) and, in the versions presented, a set
of UNIX transformations (uuencode, compress) on the whole world,
including non-UNIX systems.

Nathaniel, I'm sorry to hear that you so strongly oppose these
extra transformations.  I think that they could be used to fill
a big hole in the current spec: type-specific compression.

In brief, there is no single compression algorithm that applies
to all data types.  Video images may use MPEG; audio may use
G.721 encodings; text may use "compress".  We need a way to
specify these transformations on a per-body-part basis.
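To make that concrete, here is a rough sketch (illustrative only;
the type/subtype pairs and decoder names are made up for the example,
not taken from any draft or registry) of how a UA might look up the
reverse transformation on a per-body-part basis:

# Hypothetical table mapping (type, subtype) to the transform that
# restores the data to its native form.  Names are illustrative only.
DECOMPRESSORS = {
    ("video", "mpeg"): "mpeg_decode",
    ("audio", "g721"): "g721_decode",
    ("text", "plain"): "uncompress",
}

def reverse_transform(content_type, content_subtype):
    # Each body part names its own type/subtype; the UA picks the
    # matching reverse transformation from the table above.
    key = (content_type.lower(), content_subtype.lower())
    try:
        return DECOMPRESSORS[key]
    except KeyError:
        raise ValueError("no known decompressor for %s/%s" % key)

print(reverse_transform("audio", "g721"))   # -> g721_decode

The point of the sketch is only that the choice of compression is
driven by the body part's own type, not by a single global encoding.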

Your fear seems to be that this will lower interoperability.  I
agree that unbridled use of random encodings would be a bad
thing.  But what if the definition of a new body-part subtype
had to describe the transformations it supports?  Subtypes
already have to describe the attributes they support, and which
of those are optional and which are mandatory.  This would simply be
another requirement for defining a new subtype; since all implementers
of the subtype would have to understand the proposed transformations,
interoperability would be preserved.
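
A second sketch of that idea (again purely illustrative; the subtype
names and transformation lists below are invented for the example and
are not part of any registration): each subtype definition enumerates
the transformations its implementers must or may understand, and a UA
rejects anything outside that list.

# Hypothetical subtype definitions: (mandatory transforms, optional
# transforms).  Invented values, shown only to illustrate the rule.
SUBTYPE_DEFINITIONS = {
    "video/mpeg": ({"mpeg"}, set()),
    "audio/g721": ({"g721"}, set()),
    "text/plain": ({"none"}, {"compress"}),
}

def transform_allowed(subtype, transform):
    # A transform is acceptable only if the subtype's definition
    # lists it as mandatory or optional.
    mandatory, optional = SUBTYPE_DEFINITIONS[subtype]
    return transform in mandatory or transform in optional

print(transform_allowed("text/plain", "compress"))   # True
print(transform_allowed("text/plain", "uuencode"))   # False

Because every implementer of a subtype must handle its mandatory
transformations, a conforming sender and receiver always share at
least that common set.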

        Neil