On Friday, May 30, 2003 12:15 AM, Scott Nelson
[SMTP:scott(_at_)spamwolf(_dot_)com] wrote:
At 10:14 PM 5/29/03 -0400, Eric D. Williams wrote:
On Thursday, May 29, 2003 5:30 PM, Barry Shein
[SMTP:bzs(_at_)world(_dot_)std(_dot_)com]
wrote:
8<...>8
Yeah sure and I'm the King of the Gypsies...
Maybe a better way to say that is:
Since IN THEORY there MIGHT exist a spamming program
which responds to a permanent SMTP error...
How is a virus-hijacked thrall server going to remove addresses or
even report the error back?
8<...>8
Different spammers behave differently; they have different software,
capabilities, and adaptability. It might be interesting to
examine some spamming software, but for it to be relevant
you'd need to connect it to actual sent spam, and that's not easy.
I do not see that as a significant hurdle; such a connection could be created in
a laboratory setting. The interesting bit, to me, is that in such an environment
every aspect of the process could be subjected to analysis. That analysis could
also lead to a tactical or strategic advantage, and/or identify a tactical or
strategic weakness we may not be aware of. Whatever the potential, if this type
of analysis has not been done, we cannot 'know', as has been said, 'anything'
more.
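Just to make the hypothesis concrete (this is purely an illustration of the
behavior being debated, not a claim about how any actual spamware works), a
minimal sketch of such 'list-washing' on permanent errors might look like the
Python below; the relay host, sender, and list handling are all made up:

    import smtplib

    def send_and_wash(relay_host, mail_from, recipients, message):
        # Hypothetical list-washing sender: attempt delivery and drop any
        # address that draws a permanent (5xx) error at RCPT TO.
        washed = set(recipients)
        with smtplib.SMTP(relay_host) as smtp:
            try:
                refused = smtp.sendmail(mail_from, list(recipients), message)
            except smtplib.SMTPRecipientsRefused as exc:
                refused = exc.recipients   # every recipient was refused
            for addr, (code, _reply) in refused.items():
                if 500 <= code < 600:      # permanent error: wash the address
                    washed.discard(addr)
        return washed                      # addresses to keep on the list

Whether a virus-hijacked thrall ever reports anything back to the list owner
is, of course, exactly the open question.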
A much simpler way to gather the data, IMO, is to take a few spam
traps and have them start rejecting with 5xx.
Then count how many RCPT TOs they get compared to other spam-trap
addresses that had been getting a comparable amount of spam but don't reject
with 5xx.
That is indeed also a good testing venue, though it is fundamentally different
from what I am positing. I'm certainly amenable to any viable test method and
would welcome such results, but having a controlled environment with the ability
to manipulate each variable independently and gauge the 'potential' adaptability
metrics is still, I think, worthwhile.
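For what it's worth, the trap-and-count test Scott describes could be sketched
roughly as below; the trap address, port, and hostname are placeholders, and a
real trap would need to speak rather more SMTP than this:

    import socketserver
    from collections import Counter

    # Placeholder trap addresses: these answer RCPT TO with a permanent
    # error; all other addresses are accepted, so attempt counts can be
    # compared between the two groups.
    REJECT_TRAPS = {"trap-reject@example.org"}
    rcpt_counts = Counter()

    class TrapHandler(socketserver.StreamRequestHandler):
        # Bare-bones SMTP responder: count RCPT TO attempts per address and
        # answer 550 for the rejecting traps, 250 for everything else.
        def handle(self):
            self.wfile.write(b"220 trap.example.org ESMTP\r\n")
            while True:
                line = self.rfile.readline()
                if not line:
                    break
                cmd = line.decode("latin-1").strip()
                if cmd.upper().startswith("RCPT TO"):
                    addr = cmd.split(":", 1)[1].strip().strip("<>").lower()
                    rcpt_counts[addr] += 1
                    if addr in REJECT_TRAPS:
                        self.wfile.write(b"550 5.1.1 No such user\r\n")
                    else:
                        self.wfile.write(b"250 OK\r\n")
                elif cmd.upper().startswith("QUIT"):
                    self.wfile.write(b"221 Bye\r\n")
                    break
                elif cmd.upper().startswith("DATA"):
                    self.wfile.write(b"554 No mail accepted here\r\n")
                else:
                    self.wfile.write(b"250 OK\r\n")

    if __name__ == "__main__":
        with socketserver.ThreadingTCPServer(("0.0.0.0", 2525), TrapHandler) as srv:
            srv.serve_forever()

Comparing the counts for rejecting and non-rejecting trap addresses over time
is exactly the measurement being proposed; the controlled laboratory setup I
have in mind would complement, not replace, that kind of field count.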
-e
_______________________________________________
Asrg mailing list
Asrg(_at_)ietf(_dot_)org
https://www1.ietf.org/mailman/listinfo/asrg