Uuencode implementations also differ widely in their (in)ability to work
in a pipe, which can be a problem if you want to write an implementation
that simply pipes a body through uudecode. That's another way in which
uuencode/uudecode are "non-standard".
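To make the pipe problem concrete: many historical uudecode implementations insist on writing to the filename named in the "begin" line rather than to standard output, so they cannot be used as a filter. Below is a minimal sketch (in Python, purely illustrative; `uudecode_stream` is a hypothetical name, not any existing tool) of what a pipe-friendly uudecode would do, using the stdlib `binascii` routines:

```python
import binascii


def uudecode_stream(inp, out):
    """Filter-style uudecode: read uuencoded text from `inp` and
    write the raw decoded bytes to `out`, ignoring the filename on
    the 'begin' line -- the pipe-friendly behavior that many
    historical uudecode implementations lack."""
    in_body = False
    for line in inp:
        if not in_body:
            # Skip any mail headers/preamble until the framing line.
            if line.startswith("begin "):
                in_body = True
            continue
        stripped = line.rstrip("\n")
        if stripped == "end":
            break
        # A bare '`' (or space) line encodes zero bytes; skip it.
        if stripped in ("`", " ", ""):
            continue
        out.write(binascii.a2b_uu(line))
```

Because it reads from an arbitrary stream and writes to an arbitrary stream, this kind of decoder can sit anywhere in a pipeline.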
I would also like to put my vote in favor of retaining uuencode/uudecode.
They may not be perfect, but as already mentioned, the number of existing
users that rely on this capability is enormous. Another way to look at it
is that in many circles it already is the de facto standard way of transporting
non-ASCII data and will not go away. It is in all of our best interests to
ensure that there is a good way to work with these types of data.
It has been mentioned that the early versions were the ones most prone to
causing problems. Would it not be reasonable for the RFC to strongly
recommend the use of a version of uuencode/uudecode at or after a certain
revision number?
I agree that the SYNTAX of a cascaded content-encoding header is not
problematic. The semantics are pretty clear, too. The
implementation, though more complex than the simple content-encoding
field, is also not beyond most programmers' abilities. What I don't
see, however, is why the cascading is necessary.
One use that I have seen is for people to send uuencoded compressed tar
images via mail (software distributions). I'm sure other similar examples
could be shown if needed.
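That use case is exactly a cascade: the body is first compressed, then the compressed bytes are uuencoded for transport. A small sketch of the composition, in Python for illustration (gzip stands in for the historical compress(1), and `uuencode_bytes` is a hypothetical helper, not an existing API):

```python
import binascii
import gzip


def uuencode_bytes(data, name="data"):
    """Minimal uuencode: wrap `data` in begin/end framing,
    encoding 45 raw bytes per output line."""
    lines = ["begin 644 %s\n" % name]
    for i in range(0, len(data), 45):
        lines.append(binascii.b2a_uu(data[i:i + 45]).decode("ascii"))
    lines.append("`\n")   # zero-length line terminates the data
    lines.append("end\n")
    return "".join(lines)


# Cascaded encodings, innermost first: compress, then uuencode.
# A receiver must undo them in the reverse order.
payload = b"software distribution contents " * 20
mail_body = uuencode_bytes(gzip.compress(payload), "dist.tar.gz")
```

The receiver peels the layers off in reverse: uudecode first, then decompress, which is why a cascaded content-encoding header would have to record the order of application.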