The image idea is nice since it is a type of Turing test, and the
image can be generated to give OCR systems trouble.
There is a research field devoted to exactly this topic. Garfinkel
wrote a Technology Review article on the subject in June 2003,
suggesting this is the express train toward an increasingly sad and
dehumanizing future. I personally agree.
A lot of variables influence anti-spam choices, including: the
effectiveness of a technique against harvesting, the relationship
between reducing harvesting and reducing spam, cost, and opportunity
cost. There are also outside factors, for example the possibility that
it's all hopeless and email is generally doomed, and the possibility
that new laws might stop the big spammers on ROKSO and turn the tide.
I've found I get the best results by listening to my user base, even
though that has, depressingly, led to ever more stringent anti-spam
measures. However, what works best for one group may not be best for
another, and I applaud the effort to get specific feedback from end
users talking about their *own* desires, rather than pure theorizing
and generalizing.
The idea is to build things that use XHTML/CSS such that if certain
features aren't supported by a browser, the site does the "right
thing" instead of simply breaking, and does it without building
multiple versions with browser sniffing.
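As a minimal sketch of that idea (the class names and layout here are illustrative, not from any particular site): the markup reads sensibly top to bottom in any browser, while CSS-capable browsers additionally get a two-column layout. No browser sniffing, no duplicate versions.

```
<div class="content">
  <h1>Article title</h1>
  <p>Main text, readable even with no stylesheet support at all.</p>
</div>
<div class="sidebar">
  <p>Supplementary links.</p>
</div>

<style type="text/css">
  /* Browsers that ignore CSS simply render the divs in document
     order; CSS-capable browsers float them side by side. */
  .content { float: left;  width: 70%; }
  .sidebar { float: right; width: 25%; }
</style>
```

The key point is that the fallback behavior comes for free from the document's linear structure, not from serving a separate "degraded" version.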
For many years, I thought the chief benefit of XML was being a side
outlet for all those people who like to muck up things, therefore
preventing them from further mucking up HTML. To some degree, I still
believe this. However, as you have already shown with mharc,
sprinkling a page with <DIV> tags can go a long way towards providing
semantic markup. Again, that's getting a little off topic for
this thread, except in that it can provide a convenient basis for