
RE: [Asrg] 2. Problem Characterization - Defining spam within consent paradigm

2003-07-03 15:12:34
-----Original Message-----
From: Tom Thomson [mailto:tthomson(_at_)neosinteractive(_dot_)com] 
Sent: Thursday, July 03, 2003 4:08 PM
To: Madscientist; asrg(_at_)ietf(_dot_)org
This is very like a closed use (sender) group where new 
members can be introduced by any existing member. It is 
clearly very effective when a CU(S)G is what you want - not 
quite as effective as insisting all incoming email is 
encrypted with a key you provide (since sniffing the messages 
you receive will eventually provide enough data to deduce the 
consent string) but a lot simpler than that and likely to have 
far fewer deployment difficulties.

Unfortunately, if a closed user group is not what you want, 
it's not very helpful because you can never receive mail from 
a stranger.

The big problem with the consent paradigm is how to define 
consent in the non-closed case, and what mechanisms can then 
be used to express it and to police it:  how do I express 
which mail arriving from a stranger has my consent and which has not?

It is definitely a closed use mechanism.

However, with a few tricks, it can also be extended to provide some
solutions for the anonymous case.

One way this can happen is based on policy exchange. This is a mechanism
by which you may accept certain policy decisions from certain members of
your COT. This provides a generalized mechanism for members to introduce
classes of anonymous users. For example, user a has a policy to accept
messages sourced from sources X, Y, and Z where the sources might be
defined as MTAs, or some combined metric that is easily identifiable
based on "normal" network properties or message headers.

User a and user b are members of a COT and have a reciprocal agreement
to accept "white-list" policies. An anonymous user from system X sends a
message to user b (who does not have a policy for system X at this
time). User b requests a validation of the new sender from user a (or
some other peer in the COT). User a (more specifically their agent)
returns a rating of the sending system X indicating that X should be
white listed. User b (the agent) accepts the policy and makes it local,
then accepts the message.
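
To make that concrete, here is a minimal sketch (in Python) of the
validation round trip. The names (CotAgent, rate_source, and so on) are
illustrative only, not a proposed protocol:

    class CotAgent:
        """Agent acting for one user in a circle of trust (COT)."""

        def __init__(self, user, peers=None):
            self.user = user
            self.peers = peers or []   # reciprocal COT members
            self.whitelist = set()     # locally accepted sources

        def rate_source(self, source):
            """Return this agent's rating of a sending system, if any."""
            return "white-list" if source in self.whitelist else None

        def validate_unknown_sender(self, source):
            """Ask COT peers about a source we have no policy for."""
            if source in self.whitelist:
                return True
            for peer in self.peers:    # user b asks user a, or another peer
                if peer.rate_source(source) == "white-list":
                    self.whitelist.add(source)  # adopt the policy locally
                    return True
            return False               # no peer vouches; apply local default

    # User a already trusts system X; user b inherits that decision.
    a = CotAgent("a"); a.whitelist.add("mta-X.example")
    b = CotAgent("b", peers=[a])
    assert b.validate_unknown_sender("mta-X.example")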

In a closed group system, some set of policies defining other mechanisms
will have to be in place to allow for the anonymous case. With the above
extension the work load of establishing and maintaining policies can be
dramatically diminished because that work is distributed across the
entire user base.

In this way the COT mechanism at work in the closed group system can
naturally extend to leverage policy decisions on a wide range of
systems. Work that we've been doing with Message Sniffer suggests
strongly that the majority of policy decisions are identical across wide
groups of users. This would allow for a collective black list and white
list to stabilize and act as a basis for a common policy.

Groups or users with different policies can map local exceptions to the
common group to manage the next largest group of policy decisions. The
smallest group of policy decisions which are entirely unique for small
groups or individual users can then be managed very closely to the MUA
or at the MDA level.

In practice the anonymous policies for a particular user would be
aggregated by the provider into a common policy for a group of users
based on the similarities of their local policies. Differences between
individual users and their assigned policy group would be mapped as
exceptions.

In the same way, super groups of policies might be established out of
smaller groups, and ultimately a master policy for the provider would be
aggregated from those. The number of levels in this network of policy
groups would be determined by the resources assigned by the provider. 
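
To illustrate the aggregation, here is a rough sketch assuming a policy
is just a mapping from source to an accept/reject decision (a
simplification; the model does not fix any particular representation):

    from collections import Counter

    def aggregate(policies):
        """Build a common group policy (majority decision per source)
        plus per-user exception maps for decisions that differ."""
        votes = {}
        for policy in policies.values():
            for source, decision in policy.items():
                votes.setdefault(source, Counter())[decision] += 1
        group = {s: c.most_common(1)[0][0] for s, c in votes.items()}
        exceptions = {
            user: {s: d for s, d in policy.items() if group.get(s) != d}
            for user, policy in policies.items()
        }
        return group, exceptions

    users = {
        "a": {"X": "accept", "Y": "reject"},
        "b": {"X": "accept", "Y": "reject", "Z": "accept"},
        "c": {"X": "accept", "Y": "accept"},   # c differs on Y
    }
    group, exc = aggregate(users)
    # group == {"X": "accept", "Y": "reject", "Z": "accept"}
    # exc == {"a": {}, "b": {}, "c": {"Y": "accept"}}
    # Super groups work the same way: feed several group policies back
    # into aggregate() to get the next level, up to a provider master.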

It is also important to note that the organization of the policy
aggregation system can be entirely separate from the organization of the
network of closed groups. This allows providers to implement any scale
of policy aggregation from none, to a broad collaborative network, and
allows the deployment of this "service" to be driven by market forces.

For the ISP, providing this service is a value added benefit and so is
likely to gain market share for them as their users will experience
better protection from abuse with less work. In addition, by applying
aggregated white/black list policies at their border MTAs they can save
significant resources. In very large providers where there are likely to
be a broad number of policies, intermediate MTAs or MDAs could process
individual policies for specific groups of users whose local policies
are close to those groups. This further reduces the service costs of the
provider since the message traffic naturally flows toward systems
serving users who wish to receive the traffic and is naturally blocked
from the others (reducing infrastructure costs).

A sufficiently advanced system might also implement a mechanism to
internally migrate a user's connection point to the appropriate MDA(s)
based on their policy proximity.
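
By "policy proximity" I mean some similarity measure over policy
decisions. As one possible choice among many, a sketch using Jaccard
similarity:

    def proximity(p1, p2):
        """Jaccard similarity of two policies as sets of decisions."""
        s1, s2 = set(p1.items()), set(p2.items())
        return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 1.0

    def assign_mda(user_policy, group_policies):
        """Connect the user to the MDA whose group policy is nearest."""
        return max(group_policies,
                   key=lambda g: proximity(user_policy, group_policies[g]))

    groups = {
        "mda1": {"X": "accept", "Y": "reject"},
        "mda2": {"X": "reject", "Y": "reject"},
    }
    assign_mda({"X": "accept", "Y": "reject", "Z": "accept"}, groups)
    # -> "mda1"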

A further refinement (which we recommend) is for providers (or segments
within large providers or organizations) to establish COT groups to
share policies and abuse statistics (based on attempted violations to
those policies). With this mechanism, ISPs with similar policies could
divide the work of detecting abuse, and aggregate the work of rejecting
abuse. This quickly moves beyond spam. To wit: ISP c and ISP d have an
established COT between them and have local policies which accept threat
rejection policies from each other. ISP c begins to see worm delivery
attempts from external network q and establishes a policy to block that
network at the gateway routers until the abuse subsides. Since this is a
threat detection, ISP c broadcasts the new policy to the members of its
COT, and those with acceptance policies implement the same policy,
effectively disconnecting network q from their segment of the Internet.
ISP d is protected from the threat even before they have seen the first
arrival of the malware as are all other ISPs in that COT with similar
policies.
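
A rough sketch of that exchange follows; the in-process "broadcast"
stands in for a real (presumably signed) policy message between peers,
and all names are illustrative:

    class IspAgent:
        def __init__(self, name):
            self.name = name
            self.cot_peers = []        # ISPs we accept threat policies from
            self.blocked_networks = set()

        def block(self, network):
            self.blocked_networks.add(network)  # e.g. push to gateway routers

        def broadcast_threat(self, network, reason):
            """Detect abuse locally, block it, and notify COT peers."""
            self.block(network)
            for peer in self.cot_peers:
                peer.receive_threat(self, network, reason)

        def receive_threat(self, sender, network, reason):
            """Acceptance policy: adopt threat rejections from COT peers."""
            if sender in self.cot_peers:
                self.block(network)

    c, d = IspAgent("ISP-c"), IspAgent("ISP-d")
    c.cot_peers.append(d); d.cot_peers.append(c)  # reciprocal acceptance
    c.broadcast_threat("network-q", "worm delivery attempts")
    assert "network-q" in d.blocked_networks  # d blocks the worm unseen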

COT policies based on this model can be managed automatically and
securely through fairly simple rules governing cellular automata, with
local automata controlled by local hard policies and driven by the
effects of aggregated policies which in turn are driven by end user
policies.

---

Attacks on this system are rejected by collaborative detection and
collective action. If a member of a COT abuses the policies of a peer
then that peer will "distrust" the member and reject them as a matter of
local policy. Other members of the COT _may_ decide to automatically
reject the abuser, or may trigger that decision based on some threshold
of reports from peers in the COT... but in the end, the abuser is
rejected by the COT, and the abuse with it. This prevents any
attempt to poison the group's policies, since membership is driven by
consensus and radical differences from that consensus mean rejection
from the group. Ultimately the decision to accept or reject a member
from a COT is determined by the local policies of the members and the
proximity of the individual's policies to those of the other members in
the group.
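
A minimal sketch of such a rule, assuming a simple report-count
threshold (the threshold value and the report transport are open
details):

    DISTRUST_THRESHOLD = 2   # peer reports needed before automatic rejection

    class CotMember:
        def __init__(self, name, members=()):
            self.name = name
            self.members = set(members)  # current COT membership
            self.abuse_reports = {}      # member -> count of peer reports

        def report_abuse(self, abuser):
            """Record a peer's report; distrust the abuser at threshold."""
            n = self.abuse_reports.get(abuser, 0) + 1
            self.abuse_reports[abuser] = n
            if n >= DISTRUST_THRESHOLD:
                self.members.discard(abuser)  # local policy: reject

    m = CotMember("a", members={"x", "y"})
    m.report_abuse("x"); m.report_abuse("x")  # second report crosses threshold
    assert "x" not in m.members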

The dynamic effects of an open COT model like this are that member
systems will naturally aggregate in COTs that have like policies, and
that changes in local policies will force changes in membership, thus
reducing or eliminating the opportunity for abuse, and consistently
driving toward an organization where the broadest leverage of common
policies can be achieved.

Since the model has no central governing agency, deployment of the
model can be driven by its effectiveness alone, and the potential for
abuse by the agency, or abuse of the agency as an attack on the system,
is eliminated.

Since the system provides a significant net savings in bandwidth and
services costs there should be no need to address economic models or
increase the costs of access to the network. We can continue to have an
Internet where the cost of access maintains parity with the cost of the
infrastructure so that it can remain widely available and "almost free".

Once this system is widely deployed, the concept of spam would likely
be eliminated, since there would be very few systems available to that
form of abuse... The amount of spam that did continue would be isolated
and minimized so that its effects would not be generally
important.

Intermediate levels of deployment provide strongly accelerated
performance as participation levels increase.

There is also a strong benefit to this model in that the end user is
ultimately in control of the content they receive. To the extent allowed
by the provider's local policies, each user could receive their own mix
of anonymous messages based on their own policies and the mechanisms
available to them. The architecture itself does not establish any
limitations beyond those imposed by the available participants and
service capabilities.

There is also no strict definition of the mechanisms driving these
policies. This means that the system can adapt as new mechanisms are
devised, and that it can be implemented with the tools that are
currently available.

A typical policy using current technologies would contain a combination
of black list entries, white list entries, and heuristics which make use
of other mechanisms such as virus detection engines, RBLs, content
filters, abuse statistics as posted by peers in the COT, and adoption
policies regarding peer policy recommendations.
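
For illustration only (the field names are mine, not a proposed
format), such a policy might look something like:

    policy = {
        "blacklist": {"spam-source.example"},      # always reject
        "whitelist": {"mta-X.example"},            # always accept
        "heuristics": [                            # applied in order
            {"check": "virus_engine",   "on_match": "reject"},
            {"check": "rbl_lookup",     "on_match": "reject"},
            {"check": "content_filter", "on_match": "quarantine"},
        ],
        "cot": {
            "use_peer_abuse_stats": True,   # weigh stats posted by peers
            "adopt_peer_whitelists": True,  # accept peer recommendations
        },
    }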

I realize that now I've essentially written a paper on this (with some
missing details). Sorry for the length.

_M



_______________________________________________
Asrg mailing list
Asrg(_at_)ietf(_dot_)org
https://www1.ietf.org/mailman/listinfo/asrg