
Re: [ietf-dkim] SSP draft suggestions

2007-07-26 10:49:44

On Jul 25, 2007, at 7:45 PM, Jim Fenton wrote:

Arvel,
14) Algorithm 2 - "with one or more answer which is a syntactically-valid SSP response" -> "with one or more syntactically valid SSP responses"

Right. This points out that we need to think about what happens if we get more than one SSP response. Any opinions?

If SSP is ever defined for more than just TXT records, then multiple RRs containing multiple syntactically valid responses would be possible. Assuming that rejection should occur when these records differ raises the question, "How will the SSP format evolve?"
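For concreteness, here is a rough sketch of where multiple answers could surface at a verifier, assuming the dnspython library, a placeholder record name, and a stand-in syntax check (the real name and tag grammar are whatever the draft settles on):

    import dns.resolver

    def fetch_ssp_records(domain):
        """Return every TXT string found at the (assumed) SSP location."""
        name = "_ssp._domainkey." + domain        # placeholder record name
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []
        # A TXT RR may carry several character-strings; rejoin each answer.
        return [b"".join(r.strings).decode("ascii", "replace") for r in answers]

    responses = [r for r in fetch_ssp_records("example.com")
                 if r.lower().startswith("dkim=")]   # stand-in validity test
    if len(responses) > 1:
        pass  # undefined today: use the first? treat as no policy? reject?

Whatever rule is chosen, it should not depend on RR ordering, since resolvers are free to reorder answers.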


15) Algorithm 3 has me a little worried. It would prevent the use of domains unless they explicitly exist in DNS. So, if I wanted to send a message out from message@sms.altn.com I would have to make sure to create a DNS A record or something for sms.altn.com first, right? (sorry, my knowledge of DNS is not so good)

You should probably have an MX record anyway for sms.altn.com, especially if that's also the envelope-from address (so it can receive bounces). It doesn't need to be an A record; any record type will do.

If an SMTP domain is valid, one must assume that the domain can receive messages. There is a problem with assuming that just any record type provides evidence of an SMTP domain's existence. Sending to a domain requires that either an _MX_ or an _A_ record be published at the SMTP domain.

Basing existence upon A records may run afoul of those ISPs and TLDs that inject records where NXDOMAIN would otherwise be returned. Treating the SMTP domain as valid only when an MX RR is also present has the advantage of actually confirming a working SMTP domain. Use of A records for SMTP discovery should be deprecated. When a domain wishes to use random sub-domains, it must also publish a wildcard MX, so that a query at the random sub-domain still returns an MX record. Use of truly random sub-domains precludes publishing policy at each random name, as it should.
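A sketch of the check being argued for, again assuming dnspython (the wildcard entry in the trailing comment is ordinary zone-file syntax):

    import dns.resolver

    def accepts_mail(domain):
        """Treat a domain as a valid mail domain only when it publishes an MX.
        A synthesized A record (wildcard TLD, ISP "search assist") would not
        pass this test, which is the point of preferring MX over A."""
        try:
            dns.resolver.resolve(domain, "MX")
            return True
        except dns.resolver.NXDOMAIN:
            return False   # the name does not exist at all
        except dns.resolver.NoAnswer:
            return False   # the name exists but publishes no MX

    # A domain using random sub-domains would cover them with a wildcard, e.g.
    #   *.example.com.  IN  MX  10  mail.example.com.
    # so that accepts_mail("x7f3q.example.com") still succeeds.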

17) Algorithm 4 - "is a top-level domain" how can that be determined in practice? I don't think it can, can it? If not, we're giving algorithmic instruction here that is impossible to implement.

A top-level domain is one that has exactly one component, e.g., "com", "org", "uk", or "tv". We also talk about suffixes, which would probably include "co.uk", "k12.ca.us", and "edu.au". We mandate not querying the top-level domains, since they can be algorithmically determined and we really don't want to unnecessarily load the TLD servers. Not querying suffixes is optional, because there is no formal definition of what a "suffix" is, and this is really an optimization.

For the ccTLDs, it is common for second- and third-level domains to be used; not all domains hang directly off a top-level domain. Second-level domains might then be tempted to publish an SSP record simply to terminate an inordinate number of SSP searches. Nearly all email sent today would cause an unterminated SSP record search by those implementing this flawed algorithm. :( If SSP supported NO MAIL, it would be rather ironic to have SLDs publish an SSP record accurately indicating that they do not send email. Use of parent domains to establish policy is _flawed_ and should not be used.
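For concreteness, the parent-domain walk being debated looks roughly like this (a sketch only; the suffix list is a locally configured assumption, not something the draft defines):

    # Names an SSP verifier would query for "a.b.example.co.uk", walking upward
    # but never querying a bare TLD and, optionally, stopping at a known suffix.
    SUFFIXES = {"co.uk", "k12.ca.us", "edu.au"}      # assumed local list

    def lookup_targets(domain):
        labels = domain.rstrip(".").split(".")
        while len(labels) > 1:                       # never query a one-label TLD
            name = ".".join(labels)
            if name in SUFFIXES:                     # optional optimization
                break
            yield name
            labels = labels[1:]                      # walk up to the parent

    # list(lookup_targets("a.b.example.co.uk"))
    #   -> ['a.b.example.co.uk', 'b.example.co.uk', 'example.co.uk']

Even with the suffix stop, a deep sub-domain still triggers several queries per message, which is the load concern above.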

18) Algorithm 5 - unless we can figure out how to stop queries at top-level domains, Algorithm 5 will send lots of queries to the root servers, right (.co.uk for example)?

.co.uk isn't a root server; it's a suffix.

It is also known as a Second-Level Domain, or SLD. These are used in a manner identical to a TLD, and just as with TLDs, SLDs MUST NOT be subjected to the resulting high volume of unterminated searches.

The root servers are the places you query to find the name servers for ".com", ".us", ".hr", and so forth. It may send quite a number of queries to the .co.uk name servers, which is why you may want to have it in your suffix list, but even if it's not, the number of queries should be limited by the minimum TTL of the zone it's in (the negative caching time).

The algorithm being suggested demands that a list of domains used as registries be published prior to deployment of such an algorithm. I am willing to help in creating the draft that documents this list. Using MX records to validate existence, and deprecating the use of A records for discovery, is a better overall strategy, however. It would not take long for all domains to publish MX records once their messages are occasionally not accepted as a result.

The problem of spam and spoofing is such that some change to SMTP can and should be demanded. Publishing an MX record is not much to ask.

-Doug



_______________________________________________
NOTE WELL: This list operates according to http://mipassoc.org/dkim/ietf-list-rules.html
