Re: [Asrg] Soundness of silence
2009-06-17 06:14:38
Bill Cole wrote:
I don't see why [DNS] techniques are not amenable to standardization.
Actually, there are a couple of DNSBL drafts that are slowly moving
forward.
Which are good efforts, but they don't actually tell readers which
DNSBLs are highly effective and which are dangerous to their mail. Or
which might be both. For the overwhelming majority of mail systems, the
most effective, cost-effective, and safe tool to shun spam is the
Spamhaus Zen list, but it would be a very bad idea for any RFC to say
that. Similarly, there are very safe, cheap, and effective ways to stop
spam before DATA based on rDNS and HELO names that could never pass
muster for an RFC.
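For readers unfamiliar with the mechanics, both kinds of check mentioned
above are simple to implement. A DNSBL lookup is just a DNS query of the
client's reversed IP under the list's zone, and the pre-DATA heuristics on
HELO names are string checks. A minimal sketch in Python (the Zen zone name
is Spamhaus's published one, but the helper functions and the particular
heuristics are illustrative, not from any draft):

```python
import ipaddress
import re

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNS name a DNSBL check resolves: the IPv4 octets
    reversed, followed by the list's zone. A listed address answers
    with an A record in 127.0.0.0/8; an unlisted one gets NXDOMAIN."""
    octets = str(ipaddress.IPv4Address(ip)).split(".")
    return ".".join(reversed(octets)) + "." + zone

def helo_is_suspect(helo: str) -> bool:
    """Illustrative pre-DATA heuristics on the HELO/EHLO argument:
    a bare IP (not a bracketed address literal) or a name with no
    dot is rarely sent by a legitimate MTA."""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", helo):
        return True   # raw IP instead of a FQDN or [address-literal]
    if "." not in helo:
        return True   # not a fully qualified name
    return False
```

In use, one would resolve `dnsbl_query_name(client_ip)` (e.g. with
`socket.gethostbyname`) and treat a successful answer as "listed"; the HELO
check runs before accepting DATA, which is what makes it cheap.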
I agree an RFC should not mention an existing organization unless it
has an official role (e.g., IANA). Spamhaus or Google may be considered
institutions, but have no official role. However, an RFC can describe
a technique and generically refer to "existing organizations" that
offer a well defined service.
As for feedback and name spreading, it should be noted that DNSBLs
suffer from all the logical traps implicit in negative
characterizations. A site may learn that its IPs are blacklisted and
seek support on how to be removed. Compare that with whitelisting,
where a site may learn its mail is not whitelisted because it is not
vouched for. Obviously, the kind of support, the choices, and any
payment involved have very different flavors.
It should be possible for my SMTP server to accept mail only from, say,
the office opposite with which I do most business, shunning all the
rest except, say, Gmail, thereby relying on their filtering. There's
nothing wrong with that, except for the technical problems that make it
difficult to set up properly.
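As a sketch of how such a policy could be expressed with today's tools,
here is what it might look like in Postfix (the map file name is an
assumption, and the trusted networks would have to be enumerated and
maintained by hand, which is part of the difficulty mentioned above):

```
# main.cf -- illustrative sketch, not a recommended configuration
smtpd_client_restrictions =
    check_client_access cidr:/etc/postfix/trusted_clients,
    reject

# /etc/postfix/trusted_clients
# 192.0.2.0/24    OK    <- the partner office's network (example range)
# ...             OK    <- Gmail's outbound ranges, kept current by hand
```

The brittle part is exactly the last line: nothing in the protocol tells
the operator which addresses a given sender will use tomorrow.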
No RFC will (or should) ever recommend such an approach.
Why not? It is not much different from setting Gmail as an MX, except
that senders have to use a submission agent to relay that way.
Au contraire, I don't think the RFCs should describe a protocol whose
effectiveness depends on fuzzy filtering, unless they also describe
the latter.
That is not because such an approach will never be the best one for any
system, but because it is not a widely deployable solution and it relies
upon a characteristic of the mail world that may well be transient.
The world itself is transient. And, IMHO, giant ESPs are here to stay.
I think you are misunderstanding my point. The existing tools are good
enough that most mail system operators can put together some set of them
to assure that a large majority of their users see spam rarely and have
very little legitimate mail blocked, while the non-zero level of errors
in both directions has made users more acclimated to and forgiving of
such imperfections.
I see. However, I don't want to be unable to explain to a user the
nature of such imperfections. In addition, that is complicated and
nondeterministic enough to make setting up a new MTA server a daunting
task.
This has raised the bar significantly for new
technical approaches, which will not even get attention unless they are
very good, very low-cost, and very easy to deploy.
Fine.
[...]
Replace the tutorial on mail filtering fundamentals with a concise
problem definition and concise explanation of how VHLO provides a solution.
Good point, thanks. Actually, I have expanded that first subsection in
order to make the point about content filtering clear. I'll look at
how to reduce it.
It obviously implies that email is going to die out.
Not at all. I just don't expect that it will ever be like 1993 again.
That's how it should be.
I think we've reached something like a dynamic equilibrium over the past
few years, and it will take a really big push to change that. There are
many mail systems out there shunning 97%+ of all messages while
delivering less than a spam per week per user and stopping less than one
legitimate message per year per user. 5 years ago, that sort of accuracy
took an anti-spam craftsman tending a garden of homegrown tools (and
customizations of open tools) with users screaming bloody murder over
every error. Today you can buy it in a box or as a service, and the
users are largely resigned to the fact that sometimes mail goes missing
and sometimes they get solicited for dubious drugs and money-making
schemes. Perversely, users have also become shockingly dependent on
Internet email, and expect it to do things that they never would have
asked back before mail administrators evolved into a breed of artful
destroyers of most mail.
There have also been some legal actions in the last five years that
possibly helped.
Notwithstanding your figures, I don't want to install a fuzzy mail
destroyer on my server, as I don't know how it operates. For example,
how can it distinguish a photo that I forward via email from a cell
phone from an image promoting cheap meds? My concept of reliability is
different.
Missing one legitimate message per user per year is much more
transient than the ESP market, and would not explain why spam is
perceived to be a problem at all. Is it me? I don't think that a
system that forces operators to apply that sort of mechanism can be
considered good. If it is, I think an RFC should say that.
Finally, I see no relation between the features users ask for and spam
filtering. Rather, realizing the unreliability of what they've become
dependent on should push them toward alternative solutions.
_______________________________________________
Asrg mailing list
Asrg(_at_)irtf(_dot_)org
http://www.irtf.org/mailman/listinfo/asrg