Three or four compression algorithms is a lot.
For initial standardization work, we ought to try to agree on one.
Only with VERY strong arguments should we concede to have two.

The arguments on this subject have been very persuasive. I am now inclined
to agree that only one algorithm would be sufficient -- at this time. However:

... but the focus at this stage
of standards work ought to be to develop a BASIC capability with
interoperable open systems compressed data exchange ...

This implies to me that at some future time, changing needs will increase
the number of compression algorithms required. That is, today we have the
BASIC compression, and tomorrow we have CompressionProfessional
(assuming the marketeers get into the act ;-( ...).

How well do the wider character sets compress with LZU87? (Is that the
right algorithm identifier?) They might well compress at a greater ratio,
since their strings of repeated bits should be much longer than in ASCII.
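As a rough way to poke at that question, here is a small sketch using zlib's
DEFLATE (an LZ77-family algorithm, the same family gzip uses) as a stand-in:
compress the same text once in a narrow (ASCII) encoding and once in a wide
(UTF-16) encoding and compare the ratios. The sample text and the choice of
UTF-16 as the "wider character set" are illustrative assumptions, not anything
from the draft.

```python
import zlib

# Illustrative sample: highly repetitive Latin text, so the compressor
# has plenty of matches to find in both encodings.
text = "the quick brown fox jumps over the lazy dog. " * 200

ascii_bytes = text.encode("ascii")
wide_bytes = text.encode("utf-16-le")   # two bytes per character

# Ratio = compressed size / original size (smaller is better).
ascii_ratio = len(zlib.compress(ascii_bytes)) / len(ascii_bytes)
wide_ratio = len(zlib.compress(wide_bytes)) / len(wide_bytes)

print(f"ASCII ratio:  {ascii_ratio:.3f}")
print(f"UTF-16 ratio: {wide_ratio:.3f}")
```

On text like this the wide encoding tends to come out with the better
(smaller) ratio, since the interleaved zero bytes make the repeated strings
longer; how that generalizes to real multibyte text is exactly the open
question above.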

A compromise which came to me this morning is:
- Go ahead and define COMPRESSED-<encoding> encodings.
- Define these to be a particular algorithm and either directly document the
algorithm or put a reference to the paper. Is the gzip algorithm (LZU87?)
the right one to use?
- Define an *optional* parameter, `compression-algorithm=', to be used if/when
other compression algorithms are available. If not given then the algorithm
defaults to the one defined above. The compression-algorithm names may
well become Yet Another List which IANA governs.
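The default-plus-optional-parameter scheme in the list above could be sketched
roughly as follows. The parameter name `compression-algorithm' comes from this
proposal; the default name "gzip" and the alternative name "lzw" are purely
illustrative assumptions, not registered identifiers.

```python
# The single algorithm the standard would define directly; used whenever
# the optional parameter is absent. "gzip" here is an assumed name.
DEFAULT_ALGORITHM = "gzip"

def compression_algorithm(params: dict) -> str:
    """Return the algorithm for a COMPRESSED-<encoding> body.

    If no compression-algorithm= parameter was given, fall back to the
    one algorithm the standard defines.
    """
    return params.get("compression-algorithm", DEFAULT_ALGORITHM)

# No parameter given: the defined default applies.
print(compression_algorithm({}))                                 # gzip
# Explicit (hypothetical) algorithm name from the IANA-governed list.
print(compression_algorithm({"compression-algorithm": "lzw"}))   # lzw
```

The point of the sketch is just that old implementations which never send the
parameter stay interoperable, while new algorithm names can be added to the
IANA list without changing the encoding itself.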
This should solve everyone's concerns. I, for one, will be very happy with
this result.

David