ietf-asrg

Re: [Asrg] Countering Botnets to Reduce Spam

2012-12-13 23:45:24
On 12-12-13 11:49 PM, Adam Sobieski wrote:
> Internet Research Task Force,
> Anti-Spam Research Group,
>
> I have an idea for defending computers against botnets to reduce spam.
> On the computer-security side, what do you think about using one or
> more P2P DHTs and the hashes of each file, or each important file, on
> computers?  Based upon the hardware specifications, platform, compiler,
> and compiler version, the hashes of compiled or downloaded binaries can
> be compared to the hashes of the corresponding files on other Linux
> servers.  That is an example of how P2P technologies can enhance Linux
> servers.

Obviously, it can't be all files.  Otherwise, all computers would be
identical ;-)  Then you have to consider all the versions of the code,
and the highly idiosyncratic mix of other software versions that may be
on the machine in unusual places.  Etc.
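Reduced to its core, the quoted proposal is a majority-vote check: for
each path, ask how many machines report the same hash, and flag the
outliers.  A toy sketch of that logic, with an in-memory dict standing
in for the DHT (a real deployment would use something like Kademlia;
all names and the quorum parameter here are illustrative):

```python
from collections import Counter

def flag_outliers(reports, quorum=0.8):
    """reports: {server: {path: hash}}.  Flag (server, path) pairs whose
    hash disagrees with the majority hash for that path, provided the
    majority is large enough (>= quorum of all reports) to trust."""
    by_path = {}
    for server, files in reports.items():
        for path, digest in files.items():
            by_path.setdefault(path, Counter())[digest] += 1
    flags = []
    for server, files in reports.items():
        for path, digest in files.items():
            counts = by_path[path]
            majority_digest, majority_count = counts.most_common(1)[0]
            if (digest != majority_digest
                    and majority_count / sum(counts.values()) >= quorum):
                flags.append((server, path))
    return flags
```

Even this toy version runs straight into the objections above: it only
works for paths that many machines actually share, at identical
versions.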

This is more-or-less a distributed version of Tripwire (which dates back
to the early 1990s, IIRC, introduced in an early edition of Garfinkel &
Spafford's Practical UNIX Security O'Reilly book), or, for somewhat
newer stuff, consider rkhunter & "rkhunter --propupd".
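The tripwire-style core is simple enough to sketch: snapshot a hash per
file, then later report what appeared, vanished, or changed.  A minimal
sketch (paths and function names are mine, not from Tripwire or
rkhunter):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash a file in chunks so large binaries aren't read into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root):
    """Map each regular file under root (relative path) to its digest."""
    root = Path(root)
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def compare(baseline, current):
    """Report files added, removed, or changed since the baseline."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in baseline.keys() & current.keys()
                     if baseline[p] != current[p])
    return added, removed, changed
```

All the hard parts are what's *not* in the sketch: where the baseline
lives so an attacker can't rewrite it, and what you do with the flood of
legitimate changes.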

I worked for a company in the mid-1980s that did this in a
semi-distributed fashion.

While I'm not intimately familiar with all versions of Linux spamware,
you have the following considerations:

- As far as I am aware, Linux spam compromises relatively seldom involve
replacing existing programs.  They're entirely new files, often in
unusual places.  You're unlikely to have something to compare checksums
against.  Then what?

- A lot of compromises involve changed config files.  What does a
checksum comparison of a config file to other machines mean?  Nothing.
You can tripwire them, but in busy multi-hosting environments, you'll
get flooded with false positives.

- A large class of compromises is based around programs you _can't_
find on disk.  Each spam run begins with: download program, start
program, program removes its own files, start spamming.  There's nothing
to checksum for more than a few seconds.

- Many hosting environments can have multiple versions of the same code
(especially stuff like Wordpress or Joomla) operating simultaneously.
How does the code know what to compare checksums with?
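One partial answer to the self-deleting case above: on Linux, a process
whose binary has been unlinked still shows up in /proc, and the kernel
appends " (deleted)" to the /proc/&lt;pid&gt;/exe symlink target.  A minimal,
Linux-specific sketch (function names are mine, not from any existing
tool):

```python
import os

DELETED_SUFFIX = " (deleted)"

def exe_is_deleted(exe_target):
    """True if a /proc/<pid>/exe symlink target marks an unlinked binary."""
    return exe_target.endswith(DELETED_SUFFIX)

def scan_proc(proc="/proc"):
    """Yield (pid, original exe path) for processes whose binary is gone."""
    for entry in os.listdir(proc):
        if not entry.isdigit():
            continue
        try:
            target = os.readlink(os.path.join(proc, entry, "exe"))
        except OSError:
            continue  # kernel thread, permission denied, or process exited
        if exe_is_deleted(target):
            yield entry, target[: -len(DELETED_SUFFIX)]
```

This only catches the payload while it's still running, of course, and
legitimate software (anything upgraded while running) triggers the same
signature, so it's a lead, not a verdict.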

Such techniques sound promising, but once you try to run one of them at
a large enough scale to do something useful, you find out it's a lot
harder than it looks, and not nearly as effective as you'd like.

I run rkhunter, mainly to see whether I can tell people to use it to
find compromises.  I keep planting infections on the machine (but don't
start them).  It hasn't found any of them...  Sigh.

[rkhunter has an explicit "rootkit finder" module in addition to its
tripwire capability.  I don't know how the RK finder works (they don't
say ;-) - I'm sure I could find out, but...), and it's not finding the
darkmailer and r57shell and ... tidbits I'm leaving around as bait.  So...]

> Another security procedure, extending from that one, could be to
> periodically remove the disks, the hard drives, from computers, and to
> scan the file systems and other disk sectors using other computing
> devices, to obtain the hashes of each file and then to use some
> resource, e.g. a P2P DHT, to compare those hashes to the hashes of the
> files on other computers.

That'd go over really well in large-scale production multi-hosting
environments.... ;-)

You don't have to go that far.  Boot from CD.  Or see how tripwire gets
around this.

_______________________________________________
Asrg mailing list
Asrg(_at_)irtf(_dot_)org
http://www.irtf.org/mailman/listinfo/asrg