I guess it would be much easier and less prone to error to just
implement transcoding of messages through iconv instead of trying
to adapt the display on a per-message basis.
In general, you *can't* do a good job of using iconv to mash things between
the various iso8859-* charsets. There *will* be lossage - after all, there
is a *reason* they're up to -15, namely that one isn't sufficient. So
whichever one you're in, there *will* be lossage for the other 14.
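A quick illustration of that lossage (a Python sketch of the failure mode; nmh itself would hit the same wall through iconv(3)):

```python
# The euro sign occupies byte 0xA4 in ISO 8859-15 but has no slot at all
# in 8859-1, so 8859-15 text containing it cannot be mapped to 8859-1
# without loss.
text = b"\xa4".decode("iso-8859-15")   # decodes to U+20AC, the euro sign

try:
    text.encode("iso-8859-1")
    lossy = False
except UnicodeEncodeError:
    lossy = True                        # no 8859-1 byte for U+20AC

print(lossy)
```

The same trap exists in both directions: 8859-1 has no slots for the 8859-15 replacements either, so no pair of the 8859 variants converts cleanly into the other.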
On the flip side, it's possible to do lossless conversion *from* any 8859-*
into the UTF-8 space. So teaching the code that currently does MM_CHARSET
that if the user is in a UTF-8 environ, it should use iconv to convert 8859
to utf-8 is a better solution.
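The lossless direction is easy to check (again a Python sketch standing in for iconv(3)): every byte in an 8859-* charset has a Unicode mapping, so the round trip through UTF-8 is exact.

```python
# Every byte 0xA0-0xFF is assigned in ISO 8859-15, and each one maps to
# some Unicode code point, so 8859-15 -> UTF-8 -> 8859-15 round-trips
# exactly for the whole high half of the charset.
raw = bytes(range(0xa0, 0x100))                    # all high bytes of 8859-15
utf8 = raw.decode("iso-8859-15").encode("utf-8")   # lossless widening
back = utf8.decode("utf-8").encode("iso-8859-15")  # and back again
assert back == raw
print("lossless")
```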
Actually it is the same solution: if the user is in a UTF-8 environment,
you can't/shouldn't convert to iso8859-* anyway. The best solution is
to convert to the most powerful charset available - be it lossless or not.
I remember the gnus people using big sets of tables to do a mixture
of transcoding and unifying between character sets, which led to
messages being split into several parts with different character sets
when it didn't work correctly. I don't know what their reason was
for not using iconv.
At least in the MULE-ized versions of Emacs and XEmacs, the basic reason for
the big sets of tables is because they're using their own internal encoding
instead of UTF-mumble (which is also why they couldn't use iconv).
I think I didn't use MULE but I guess you are right - it's a long time
since I switched to vim and nmh ...
Harald
_______________________________________________
Nmh-workers mailing list
Nmh-workers@nongnu.org
http://lists.nongnu.org/mailman/listinfo/nmh-workers