Ned Freed wrote:
Sadly, I have to agree with Keith. While these lists are a
fact of life today, and I would favor an informational document
or document that simply describes how they work and the issues
they raise, standardizing them and formally recommending their
use is not desirable at least without some major changes in our
email model and standards for what gets addresses onto --and,
more important, off of-- those lists.
Such a criticism might be a sensible response to a document that defines an
actual whitelisting or blacklisting service. But that's not what this document
does. Rather, this document defines various formats for storing such
information in the DNS and procedures for querying that data. Specification of
actual services based on these formats and procedures is left for later
documents. Standardizing these formats and procedures is important for at least two
reasons: (1) Without a specification nailing down these details there can be
(and in practice have been) interoperability problems between various
implementations and (2) Without something describing how to structure and
operate these systems they can, independent of the actual list content, create
serious operational problems.
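For context, the query convention such documents describe amounts to reversing
an IPv4 address's octets, appending the list's zone, and treating an A record
in 127.0.0.0/8 as "listed". A minimal sketch of that convention (dnsbl.example
is a placeholder zone, not a real list):

```python
import ipaddress
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the conventional DNSBL query name: reverse the IPv4
    octets and append the list's zone, so 192.0.2.99 checked against
    dnsbl.example becomes 99.2.0.192.dnsbl.example."""
    octets = str(ipaddress.IPv4Address(ip)).split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    """Query the list: an A record in 127.0.0.0/8 means 'listed',
    NXDOMAIN means 'not listed'.  The meaning of the specific
    127.0.0.x value is defined by each list operator."""
    try:
        answer = socket.gethostbyname(dnsbl_query_name(ip, zone))
    except socket.gaierror:
        return False  # NXDOMAIN (or lookup failure): treat as not listed
    return answer.startswith("127.")

print(dnsbl_query_name("192.0.2.99", "dnsbl.example"))
```

Note that everything the client learns is folded into that one A-record
answer, which is exactly the constraint the rest of this thread argues about.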
I could almost buy this argument. The reasons I don't buy it in this case are:
(1) using the IP source address as an indicator of any kind of identity
has been dubious for a long time (even in an enterprise network, but
especially in the Internet at large), and it is only going to get more
dubious in an era of IPv4/IPv6 transition where IPv4 addresses are
shared between different entities.
(2) use of DNS to communicate this information is a stretch at best. The
protocol is too constrained, and not designed for a use case like
this where the information that needs to be conveyed is very short-lived.
(3) the security considerations associated with such use, including
considerations for denial-of-service attacks, are quite difficult to address.
So I think there's a compelling case to be made that any kind of DNSBL
is a poor design not worthy of standardization.
Now, I am of course aware of the line of reasoning that says we should not
publish specifications for things that can potentially be abused. I'm sorry,
but if we take a candid look at how this strategy has played out with other
technologies ranging from MIME downgrading to NAT to Bonjour, I think the
record is fairly clear: We're much better off specifying things while calling
out the dangers of their use than we are when we all run off to our
corners and pout about the sad state of the world.
I don't think the examples you cite demonstrate that. Standardizing
NATs in the 1990s would not have helped because the widespread
assumptions at that time were that you couldn't really change NAT
behavior and that the NATs had to operate "transparently" to the
applications. Even today we still don't know how to make a NAT that
works well, without making the endpoints aware of the NAT, and giving
them explicit control over bindings.
As for Bonjour, I at least did try to call out the dangers of using
IPv4 link-local addresses and of overloading DNS names and APIs, and
the WG repeatedly, stubbornly denied that those dangers existed.... and
continued to do so until the IESG pushed back on at least some of those points.
But of course the question is not whether something like this can be
abused - anything can be abused. The question is whether something like
this encourages abuse, or if not deliberate abuse, whether it encourages
degraded reliability of the email system. And I think there's a fair
amount of empirical evidence that it does.
That doesn't mean that we shouldn't try to design a better system for
identifying and reporting sources of spam or viruses. But to me it
seems entirely plausible that relying on DNS to transmit this
information is part of the reason that DNSBLs are so often associated
with abuse and denial-of-service attacks. If we were designing such a
system from scratch we'd naturally be concerned about things like
accountability, repeatability, polling intervals, and identifying the
precise criteria used to blacklist an address. We'd probably want to
expose more information to the client about why a site was blacklisted,
when it was blacklisted, and so forth, so that the client wouldn't be
bouncing mail without a good reason. But trying to shoehorn this kind
of service into DNS forces most of these considerations to be overlooked
simply because there's not a good way to communicate them in DNS.
Really what this document is trying to do is to standardize a crude hack
- as well as being a blatant attempt at an end-run around our consensus
processes. Publishing it as informational, with appropriate caveats
about the inherent limitations of the approach (including security
considerations) could be beneficial. But I can't see how the protocol
described could ever be acceptable for the standards track.
And I really think the characterization of "pout[ing] about the sad
state of the world" is unhelpful. What we really need to be doing is
figuring out how to retrofit or redesign the mail architecture to allow
senders to be more accountable (with appropriate granularity), and to
shift the cost of vetting mail to the senders. Trying to make tired,
crude, poorly designed hacks work doesn't get us any closer to a viable
solution to the spam problem.
Ietf mailing list