On Jun 7, 10:06am, "David Herron" wrote:
} Subject: Re: Massive Content-Type definition ideas & Gopher
} > It's not anywhere nearly that bad. First of all, we don't need multiple
} > compression algorithms -- we need one that everyone can use freely.
} How is it you figure that?
} Different compression algorithms make sense for different data files. There
} are generic algorithms as used in compress or gzip. If you try those on
} audio or image files they don't work too well. But using an algorithm
} tuned to audio or images works fantastically (*sometimes* by losing
} information, unfortunately).
We (z-code) discussed this (internally and with Nathaniel), in relation
to asserting x-compressed-* content-transfer-encodings or the equivalent,
to support file compression in the upcoming fully MIME-compliant Z-Mail.
We came to the conclusion that, for audio or image data, any compression
is generally part of the data format itself and is therefore already
assumed by the appropriate subtype of the image/ and audio/ content-types,
and that applying additional compression at transfer time would be
self-defeating. Can you cite specific examples that refute this -- cases
where compression is applied as a separate layer, so that the data cannot
be displayed as an object of the original content-type without first
reversing that compression?
If there are no such examples, then we really do need only one generic
compression scheme for file formats that are not inherently compressed.
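As a rough illustration of why transfer-time compression on top of an already-compressed format is self-defeating, the sketch below (a modern Python example, purely hypothetical and not part of this discussion) compares gzip's gain on plain text against its gain on data that has already been deflate-compressed, which stands in here for the compression built into image and audio formats:

```python
import gzip
import zlib

# Highly redundant "text" data: generic compression works well here.
text = b"the quick brown fox jumps over the lazy dog " * 200

# Stand-in for a format with built-in compression (e.g. compressed image data).
already_compressed = zlib.compress(text, 9)

gz_text = gzip.compress(text)
gz_again = gzip.compress(already_compressed)

# Text shrinks dramatically; re-compressing compressed data gains nothing
# (the gzip header and framing typically make it slightly larger).
print("text:", len(text), "->", len(gz_text))
print("compressed:", len(already_compressed), "->", len(gz_again))
```

Running it shows a large reduction for the text and no reduction (usually a small growth) for the already-compressed bytes, which is the conclusion above: one generic scheme for formats that are not inherently compressed, and nothing extra for those that are.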
Bart Schaefer Vice President, Engineering
schaefer(_at_)z-code(_dot_)com Z-Code Software