On Monday 24 November 2003 12:53 am, Philip Gladstone wrote:
My concern about using any TCP based protocol is that the latencies and
setup costs will be significant when delivering email. The good news
about the current SPF (non-http) is that all the data is available via
UDP and it already makes use of existing caching infrastructure in the
'net.
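(For context, the SPF data in question is just a short policy string, small enough to fit in a single UDP answer. A toy sketch of reading one, with a hypothetical record and a drastically simplified grammar; real SPF adds macros, include:, redirect= and more:)

```python
# Toy parser for an SPF-style policy string (simplified for illustration;
# not a conforming SPF implementation).
def parse_spf(record):
    """Split a 'v=spf1 ...' record into (qualifier, mechanism) pairs."""
    terms = record.split()
    if not terms or terms[0] != "v=spf1":
        raise ValueError("not an SPF record")
    parsed = []
    for term in terms[1:]:
        qualifier = "+"                      # '+' (pass) is the default
        if term[0] in "+-~?":
            qualifier, term = term[0], term[1:]
        parsed.append((qualifier, term))
    return parsed

# Hypothetical record for illustration:
print(parse_spf("v=spf1 a mx ip4:192.0.2.0/24 -all"))
# [('+', 'a'), ('+', 'mx'), ('+', 'ip4:192.0.2.0/24'), ('-', 'all')]
```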
This strikes me as the kind of issue that cannot be resolved without concrete
comparative performance data. Just what constitutes 'too costly' is a matter
of where you set the bar.
The DNS system has certainly proven itself to scale well, true. On the other
hand, most internet protocols work quite well without demanding specific help
from the DNS. For example the web has no protocol-specific DNS records to
support it. The Internet protocol has two (A and WKS), and SMTP is the _only_
application-level protocol that has its own RR type.
If I were reviewing a proposal for a new RR type to fix a defect in the SMTP
protocol (actually I guess we are all doing just that ;) my first thought
would be "you've already got one, which is more than most, so why do you need
more?". Is it really appropriate to ask DNS to rescue a flawed protocol
instead of fixing that protocol within its own constraints?
I suppose that is an issue of principle that can hardly stand against the
weight of the spam problem. More practically there is the issue of TXT
records:
I would be really surprised if an RFC were to approve hijacking TXT records;
it's just not going to pass muster. A real RR type will have to be designated. It
will take a _long_ time for a new RR to be rolled out worldwide. A non-RFC
using TXT records will not get widespread deployment.
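(For the record, this is what the TXT approach looks like in a zone file today, using a hypothetical domain; the policy rides in a generic TXT RR rather than a dedicated type:)

```
example.com.  IN  TXT  "v=spf1 a mx -all"
```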
An ESMTP extension would take about as long to roll out as a new RR type,
though I expect the approval process would be much faster.
I'm thinking that a system able to work effectively even with minority
participation has a much easier adoption curve, and that implies some kind
of fallback mechanism to assess non-adopting domains, as has been discussed
with SPF in terms of a fallback domain.
I guess I just don't like the idea of mandating a centralized fallback domain,
and would prefer to see a more distributed system do the job where proper
authority is "in absentia". Hence a web-of-trust approach seems appropriate.
If you can think of a way to implement such a web-of-trust within the
constraints of the DNS system, go right ahead.
For example the 'http' mechanism I have previously proposed to allow
webmasters to publish SPF data could be accessed via a 'gateway' in the form
of an SMTP-XQSA node that does such a lookup and returns advisory XQSA
responses. In order to use it, an MTA would not need to integrate any HTTP
lookup mechanisms, nor even have any code to support the specific mechanism.
The administrator of an SMTP-XQSA enabled MTA would simply add the gateway's
hostname to its list of trusted 'advisor' peers.
Of course you could do the same thing with DNS by writing a custom DNS
server... but that's what it would take.
I can imagine a gateway that uses SPF-DNS queries to formulate high-confidence
XQSA advisories, which is why I consider it complementary rather than
competitive. You could even model an MTA's interface to an SPF plugin as a
pipe using XQSA as the protocol.
It would be easy enough to draft a lightweight UDP response mechanism for
SMTP-XQSA queries, but it breaks the 'simple' in SMTP which is really one of
its principal virtues. Also SMTP casts a wider net than just IP, so UDP
would not always be available.
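(Such a lightweight exchange is easy enough to sketch. Everything below is made up for illustration, since no XQSA wire format has been specified: the QUERY/ADVISE request and response lines, the verdict names, and the policy table are all hypothetical.)

```python
import socket
import threading

# Hypothetical wire format: client sends "QUERY <domain>",
# server answers "ADVISE <verdict>". Verdicts are invented here.
POLICIES = {"example.org": "PASS", "example.net": "FAIL"}

def serve_one(sock):
    """Answer a single advisory query on the given UDP socket, then return."""
    data, peer = sock.recvfrom(512)
    _, _, domain = data.decode().partition(" ")
    verdict = POLICIES.get(domain.strip(), "NEUTRAL")
    sock.sendto(f"ADVISE {verdict}".encode(), peer)

def ask(server_addr, domain, timeout=2.0):
    """Send one UDP advisory query and return the verdict string."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(f"QUERY {domain}".encode(), server_addr)
        reply, _ = s.recvfrom(512)
        return reply.decode().split(" ", 1)[1]

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))          # ephemeral port on loopback
    addr = srv.getsockname()
    t = threading.Thread(target=serve_one, args=(srv,))
    t.start()
    print(ask(addr, "example.org"))     # prints PASS: one round trip, no TCP setup
    t.join()
    srv.close()
```

The point of the sketch is the single request/response round trip: no connection setup, no teardown, just one datagram each way, which is exactly the property DNS-over-UDP gives SPF for free.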
- Dan
-------
Sender Permitted From: http://spf.pobox.com/
Archives at http://archives.listbox.com/spf-discuss/current/
Latest draft at http://spf.pobox.com/draft-mengwong-spf-02.6.txt
To unsubscribe, change your address, or temporarily deactivate your
subscription,
please go to
http://v2.listbox.com/member/