On Tue, 4 Mar 2003, Craig Hughes wrote:
> There are two commercial areas where the "leave it to the end-user
> humans" approach fails though:
> 1. Situations where someone wishes to "protect" their users from
> obscene mail
How is that a problem? A clerical person can easily delete porn. Part of
the spam-control officer's job description might be to act as a porn censor;
presumably, anyone offered that job would be told that he/she might
see some distasteful things.
Besides, auto-detection of porn 100% of the time just isn't possible.
> 2. Some employees are better at making decisions than others
Right; so you hire a smart person or people to make the decision.
> And if there are large numbers of such
> employees (picture a giant email helpdesk or something), it's likely
> impractical, or at least non-scalable, to not automate the filtering
> process.
The key is to automate as much as possible/necessary, but no more.
You can delegate down to the department level, for example -- that
scales quite nicely. Or you can have different levels of scanning --
just tagging, aggressive auto-deletion, delegation to help-desk staff,
or complete end-user control. Good mail filtering software should
allow this flexibility and end-user choice.
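The tiered approach described above can be sketched as a simple policy
dispatch. This is a hypothetical illustration, not any particular filter's
API: the score, thresholds, and department policies are all assumptions,
standing in for whatever a rule-based scanner would actually produce.

```python
# Hypothetical sketch of per-department filtering policy, assuming a
# numeric spam score has already been computed by some upstream scanner.

from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    TAG = auto()          # just tagging: add a header, deliver anyway
    DELETE = auto()       # aggressive auto-deletion
    QUARANTINE = auto()   # delegation to help-desk staff for review
    DELIVER = auto()      # complete end-user control

@dataclass
class Policy:
    tag_threshold: float     # score at which mail is flagged as suspect
    delete_threshold: float  # score at which mail is dropped outright
    quarantine: bool         # route borderline mail to human reviewers

def decide(score: float, policy: Policy) -> Action:
    """Map a spam score onto one of the four levels of scanning."""
    if score >= policy.delete_threshold:
        return Action.DELETE
    if score >= policy.tag_threshold:
        return Action.QUARANTINE if policy.quarantine else Action.TAG
    return Action.DELIVER

# Delegating to the department level: each department picks its own policy,
# so the human-review workload scales with the department, not the company.
helpdesk = Policy(tag_threshold=5.0, delete_threshold=15.0, quarantine=True)
engineering = Policy(tag_threshold=5.0, delete_threshold=10.0, quarantine=False)
```

With these assumed policies, the same borderline message is quarantined for
the help desk but merely tagged for engineering, while end users whose mail
scores below the tag threshold keep full control.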
> I agree there is a strong place for human intervention in spam
> filtering, but it's not always appropriate in all situations.
Ultimately, it's the only solution, since deciding whether or not a
particular e-mail is "spam" is non-computable.
> Maintenance of a target of creating the smartest (and most accurate)
> automatic filters we can build should endure.
Absolutely. But end users must always at least be given the *choice*
to make the final decision. If they decide to leave it up to the
filter, that's their conscious decision.
--
David.
_______________________________________________
Asrg mailing list
Asrg(_at_)ietf(_dot_)org
https://www1.ietf.org/mailman/listinfo/asrg