
Re: [Nmh-workers] hardcoding en_US.UTF-8 in test cases

2013-02-06 19:22:55
> I noticed that en_US.UTF-8 appears in hardcoded form in the test cases.
> Knowing that my system doesn't have it, I tried running the test cases
> and, sure enough:
>
> Unable to convert string "â??nÌ?"
> test/scan/test-scan-multibyte: 59: test: Illegal number: 
> test/scan/test-scan-multibyte: 63: test: Illegal number: 
> Unsupported width for UTF-8 test string: 

Huh, okay ... forgive my US-centrism, but at least on every system
here we have tons of locales installed, so I figured that was the case
everywhere, and I picked something that seemed pretty universal.  May
I ask how come you have a Finnish locale but not a US one?  This isn't
some lingering resentment left over from the War of 1812, is it? :-)

> For whatever reason, pick/test-pick seems to be fine regardless.

Hm, I'm surprised, but test-scan-multibyte does do some funky stuff
(getcwidth, specifically), so I guess that's the reason.

> I'm not sure how to make that as portable as possible, but as a start,
> perhaps try the existing LANG and LC_* values, the output of locale -a
> (| sed 's/utf8/UTF-8/') or, if there is no locale command, the contents
> of /usr/lib/locale, and perhaps fall back to plain guessing.  It seems
> getcwidth can be used to test the candidates out.  It might be wise to
> give preference to C.UTF-8 and then en_.*
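
Completely untested, but a sketch of that approach in plain POSIX sh
might look something like this (the getcwidth check, the skip exit
code, and the exact preference order are all guesses on my part):

    # Untested sketch: find an installed UTF-8 locale for the tests,
    # preferring C.UTF-8, then en_*, then anything else UTF-8.
    if command -v locale >/dev/null 2>&1; then
        locales=`locale -a 2>/dev/null | sed 's/utf8/UTF-8/'`
    else
        # No locale(1); fall back to guessing from the locale directory.
        locales=`ls /usr/lib/locale 2>/dev/null`
    fi

    testlocale=
    for l in C.UTF-8 en_US.UTF-8 \
             `echo "$locales" | grep '^en_.*UTF-8$'` \
             `echo "$locales" | grep 'UTF-8$'`; do
        if echo "$locales" | grep "^$l\$" >/dev/null 2>&1; then
            # A real version would also run getcwidth under LC_ALL=$l
            # here and check that the multibyte test string comes back
            # with the expected width before settling on this locale
            # (I haven't checked getcwidth's actual interface).
            testlocale=$l
            break
        fi
    done

    if test -z "$testlocale"; then
        echo 'no usable UTF-8 locale found, skipping test' >&2
        exit 77    # assuming the harness treats 77 as "skipped"
    fi
    LC_ALL=$testlocale
    export LC_ALL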

FWIW, none of the systems I have access to (even the Linux ones) have a
C.UTF-8 locale.  It's not clear to me how standardized those locale names
are.

--Ken

_______________________________________________
Nmh-workers mailing list
Nmh-workers@nongnu.org
https://lists.nongnu.org/mailman/listinfo/nmh-workers