
Re: proposal for built-in spam burden & email privacy protection

2004-02-13 16:09:59
On Thu, 12 Feb 2004, Ed Gerck wrote:

> > You can't make it more expensive without shooting yourself in the foot.
> > In information theory-speak, you can't prevent a covert channel** unless
> > you have no channel at all.

> By the addition of a correction channel (Shannon's 10th theorem),
> a covert channel can be detected with a probability as close to 100%
> as I wish.

Err, I think that theorem allows you to correct _errors_ in transmission.
It does not enable detection or prevention of a covert or sneaky channel.
While there have been examples of sneaky channels that used intentional
errors in a channel as the sneaky channel, error correction does not
prevent or detect the covert channel, unless, of course, the errors are
corrected before they can be seen by the user of the sneaky channel.
Otherwise, you do not know whether the errors are intentional and carry
information, or whether they are just errors.

What's that?  You say that one could study the errors, and if the bits are
not random, perhaps you have detected a covert channel.  A good encryption
mechanism will ensure a random distribution of bits, so you would be
unable to distinguish the covert traffic from random errors.  There are
also other means of creating covert channels besides introducing errors.
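To make the "intentional errors" point concrete, here is a minimal sketch
(Python; the repetition code and all the names are my own illustration,
not anything proposed in this thread).  The covert sender hides one bit
per code group by flipping a bit that the error-correcting decoder will
silently "fix"; the decoder recovers the overt data perfectly and sees
nothing but ordinary noise:

    import random

    def rep3_encode(bits):
        # 3x repetition code: each data bit is transmitted three times.
        return [b for b in bits for _ in range(3)]

    def rep3_decode(stream):
        # Majority vote corrects any single flip per 3-bit group -- but
        # it cannot tell an intentional flip from channel noise.
        return [1 if sum(stream[i:i + 3]) >= 2 else 0
                for i in range(0, len(stream), 3)]

    def embed_covert(stream, covert_bits):
        # Covert sender: flip the first copy in group i to signal a 1.
        out = list(stream)
        for i, cb in enumerate(covert_bits):
            if cb:
                out[3 * i] ^= 1
        return out

    def extract_covert(stream, n):
        # Covert receiver taps the raw channel *before* correction and
        # compares each group's first copy against the majority.
        bits = []
        for i in range(n):
            group = stream[3 * i:3 * i + 3]
            majority = 1 if sum(group) >= 2 else 0
            bits.append(1 if group[0] != majority else 0)
        return bits

    overt = [random.randint(0, 1) for _ in range(8)]
    covert = [1, 0, 1, 1, 0, 1, 0, 0]
    wire = embed_covert(rep3_encode(overt), covert)

    assert rep3_decode(wire) == overt          # every flip is "corrected"
    assert extract_covert(wire, 8) == covert   # ...yet the covert bits arrive

Correcting the errors before the covert receiver can see the raw stream
destroys this particular channel, as conceded above -- but even then
nothing flags the flips as deliberate rather than noise.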

> > Covert-channel detection is a whack-a-mole game.

> Not really. It can be modeled, it can be improved.

I agree that it can be modeled, and that we can learn things by the
effort. I've seen some interesting reading on modeling the bandwidth
potentially available to a covert channel.  This doesn't prevent the
existence of a covert channel, nor does it change the whack-a-mole nature
of the problem.  You still have to detect and respond.

But knowing the potential bandwidth in some situations might help you to
harass the bad guys, or focus your detection efforts, since the bandwidth
available may put constraints on the bad guys, and knowing those
constraints may yield some insights.  In this case, I think the bandwidth
available to a large number of virus-infected computers is quite
substantial, as is the compute power.
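A back-of-the-envelope version of that bandwidth estimate (Python; every
number here is invented purely for illustration): an error pattern that
flips a fraction p of n bits can take roughly 2^(n*H(p)) distinct values,
so hiding data in *which* bits are flipped yields about H(p) covert bits
per overt bit, as an idealized upper bound:

    import math

    def H(p):
        # Binary entropy, in bits.
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    p         = 0.01        # assumed tolerable flip rate
    overt_bps = 1_000_000   # assumed 1 Mbit/s overt traffic per host
    hosts     = 10_000      # assumed size of the infected population

    covert_bps = H(p) * overt_bps            # ~80.8 kbit/s per host
    print(f"per host : {covert_bps / 1e3:.1f} kbit/s covert")
    print(f"aggregate: {covert_bps * hosts / 1e6:.1f} Mbit/s covert")

Even at a 1% flip rate, ten thousand infected hosts would carry on the
order of 800 Mbit/s of covert capacity in this toy model, which is the
sense in which the available bandwidth is substantial.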

> > Putting it in different terms, how can the government make sure those
> > "government use only" stamped envelopes are only used for government
> > business?

> Easy. By applying Shannon's 10th theorem. Sample enough mail at
> distribution centers (going back to the source, which is possible
> even without a legal mandate to open the envelopes) and bar the
> culprits from sending govt. mail until the probability that any
> mail is incorrectly using govt. envelopes is as close to zero as desired.

Unfortunately, you described a detection mechanism:  Whack-a-mole.  
Indeed, I think this is exactly what the government does to detect abuse.  
And it is basically the same thing we do now for email by applying text
analysis.
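The sampling argument is easy to quantify, and doing so shows why it is
detection rather than prevention.  A small sketch (Python; the sampling
rates are assumed for illustration): if a fraction s of all mail is
inspected, an abuser who mails k misused envelopes is caught with
probability 1 - (1 - s)^k, so the question is how much abuse gets through
*before* that probability climbs:

    import math

    def letters_until_caught(s, confidence=0.99):
        # Misused envelopes mailed before detection probability reaches
        # the given confidence: solve 1 - (1 - s)^k >= confidence for k.
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - s))

    for s in (0.001, 0.01, 0.1):   # assumed sampling rates
        print(f"sample {s:.1%} of mail -> 99% chance of being caught "
              f"after {letters_until_caught(s)} letters")

At a 0.1% sampling rate, roughly 4,600 misused envelopes go out before
the abuser is caught with 99% confidence.  The probability does approach
zero asymptotically, as claimed -- but only after the abuse has happened,
which is exactly the whack-a-mole.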

But we are looking for (and you promised) a mechanism which makes it
impossible for them to send it in the first place:  No more whack-a-mole.

Clearly, in the sampling example, they can use invisible ink to fool the
censors, or write their messages in an ordinary-looking code that reads
like official business (steganography).  The error-correction theorem
doesn't help.  This example isn't nearly as hypothetical as it sounds. The
US [and other governments] really used to open international mail to look
for secret messages. We also used to test letters for the presence of a
number of invisible inks. The Germans invented an invisible ink that was
impervious to testing for a long time.  The US censors would even rewrite
personal letters using slightly different words to preclude the use of
special code words.  Then came microdots and so forth.  Each channel
detected led to the creation of new channels (either different people
using the same method, or new methods) within the postal mail system.  But
it did not lead to any situation in which sneaky channels were impossible.

> > There is no scheme in which the rules can't be broken by someone intent
> > on breaking them.

> This may sound good but is incorrect. Systems can be designed
> such that a set of properties remain effective if most or even
> all parts of the system fail (for whatever reason, including
> attacks).

Fault tolerance doesn't seem to be helpful here.  To design a system that
can't send spam, you have to first identify the properties of spam in such
a way that a person dedicated to breaking the rules would be prevented
from sending it.  Information theory tells us that such a goal is
unattainable: a covert channel can never be proven not to exist.

> > The only path is to detect them, and prosecute them.

> There is no world law, no unified way to prosecute. Even
> venue is hard to guarantee (allowing you to prosecute
> the culprit).

This isn't quite true.

In the case of spam, detection is easy, but not automatic.
Prosecution is now possible.  It's still a whack-a-mole game. It won't
end unless you can get past the virus infection to the virus operator,
and hopefully there aren't really too many virus operators.  Of
course, in a very real sense we aren't stopping spam either, but
rather the abusers who are annoying and mailbombing people.  But by my
count of my inbox, if you stop those people, I can certainly handle
the rest, which amounts to maybe 1% of my current junk mail.
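For what it's worth, the "text analysis" mentioned above is, in its
simplest form, just a naive Bayes score over message tokens.  A minimal
sketch (Python; the token counts and corpus sizes are toy numbers I made
up, not measurements):

    import math
    from collections import Counter

    # Token counts from already-labelled training mail (invented here).
    spam_counts = Counter({"viagra": 40, "free": 30, "meeting": 1})
    ham_counts  = Counter({"meeting": 25, "draft": 20, "free": 5})
    n_spam, n_ham = 100, 100        # assumed training-set sizes

    def spam_probability(words):
        # Naive Bayes with Laplace smoothing; log-space avoids underflow.
        vocab = set(spam_counts) | set(ham_counts)
        spam_total = sum(spam_counts.values())
        ham_total = sum(ham_counts.values())
        log_spam = math.log(n_spam / (n_spam + n_ham))
        log_ham = math.log(n_ham / (n_spam + n_ham))
        for w in words:
            log_spam += math.log((spam_counts[w] + 1)
                                 / (spam_total + len(vocab)))
            log_ham += math.log((ham_counts[w] + 1)
                                / (ham_total + len(vocab)))
        return 1.0 / (1.0 + math.exp(log_ham - log_spam))

    print(spam_probability("free viagra".split()))    # close to 1
    print(spam_probability("draft meeting".split()))  # close to 0

This kind of scoring catches the bulk mailers' boilerplate well enough;
it just can't, by the covert-channel argument above, catch everything.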

> When you outlaw spam, only the outlaws spam. So what? The
> problem still remains, even if you call them outlaws.

Actually, genuine spam is not outlawed. Only the spam sent by people who 
are not genuine businesses is outlawed. I expect that this abuse is sent 
by a very small group of people.  Prosecuting this small group should be 
relatively easy.

> Also, users should not have to sue spammers, or bear any other burden,
> in order to protect their resources. Imagine if I had to manage 300
> lawsuits a day (the average rate of spam that my system cannot
> automatically detect as spam)?

This is an exaggeration. There aren't 300 unique spammers per internet
user per day.

                --Dean



