http://www.stat.psu.edu/~lsimon/stat250/sp00/solutions/diagtests.htm - gives an
example.
http://www.medal.org/ch39.html - gives definitions (under the anchor links).
-e
On Tuesday, April 01, 2003 9:17 AM, Jon Kyme
[SMTP:jrk(_at_)merseymail(_dot_)com] wrote:
Correction - apologies.
I wonder if it might be useful to adopt the widely used ideas of
specificity and sensitivity.
so if
TP = number of emails that should be marked as spam and actually are so
marked
and
FP = number of emails marked as spam that shouldn't have been
and
FN = number of emails not marked as spam that should have been
specificity = TP/(TP + FP)
sensitivity = TP/(TP + FN)
(I think I've got that right - I'm sure there are lots of references out
there)
These measures are commonly used for comparing diagnostic tests in
clinical applications.
Oops, no, I don't think that's right - it should be:
specificity = TN/(TN + FP)
positive predictive value = TP/(TP + FP)
(sensitivity = TP/(TP + FN) stands as above)
It's been a very long time since I used this part of my brain.
Isn't this all used in computer vision and other AI stuff?
Has anyone got a good reference?
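For concreteness, here's a minimal sketch of the three measures applied to a
spam filter, using made-up confusion-matrix counts (the numbers are
hypothetical, just to illustrate the arithmetic):

```python
# Hypothetical counts from one spam-filter run.
TP = 90   # spam correctly marked as spam
FP = 5    # legitimate mail wrongly marked as spam
FN = 10   # spam that slipped through unmarked
TN = 895  # legitimate mail correctly left alone

sensitivity = TP / (TP + FN)  # fraction of actual spam that got caught
specificity = TN / (TN + FP)  # fraction of legitimate mail left alone
ppv = TP / (TP + FP)          # fraction of spam-marked mail that really is spam

print(f"sensitivity = {sensitivity:.3f}")  # 0.900
print(f"specificity = {specificity:.3f}")  # 0.994
print(f"PPV         = {ppv:.3f}")          # 0.947
```

Note how specificity and PPV answer different questions: specificity is about
how rarely good mail gets flagged, while PPV is about how trustworthy a
"spam" verdict is once given.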
--
_______________________________________________
Asrg mailing list
Asrg(_at_)ietf(_dot_)org
https://www1.ietf.org/mailman/listinfo/asrg