ietf-mxcomp

Re: So here it is one year later...

2005-01-28 00:38:48

On Fri, 2005-01-28 at 04:20 +0100, Frank Ellermann wrote:

> Douglas Otis wrote:
>
>> applying a record against different algorithms than that
>> intended when published is inherently deleterious
>
> Indeed.
>
>> Once again the algorithm changes and still this draft uses
>> the same labels and record identifiers?  Classic.
>
> This "new" draft is technically nearer to the last pre-MARID
> "old" draft than draft-lentczner-spf-00 was.  The latter was a
> rather quick hack salvaging all syntax improvements found here
> (= mxcomp) after MARID was killed and the old draft expired.

This draft attempts to make significant algorithmic changes to the
initial draft, as well as changes to how these records are used.

Some examples:

   SPF clients MAY check the "HELO" identity by calling the check_host()
   function (Section 4) with the "HELO" identity as the <sender>.  If
   the HELO test returns a "fail", the overall result for the SMTP
   session is "fail", and there is no need to test the "MAIL FROM"
   identity.

A test against HELO that goes from unknown, to not tested, to fail?
This is a change in algorithm, as is the hunt for alternative records,
where a pass may not be based upon address compliance.  One wonders how
macros are applied when the same check_host routine is recycled.
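To make the objection concrete, here is a minimal sketch of the session
flow the quoted text describes, with check_host() reduced to a stubbed
policy table; the host names and results are illustrative only, not
from any published record:

```python
# Hypothetical sketch of the HELO-before-MAIL-FROM flow quoted above.
# check_host() is a stub; a real SPF client would perform the record
# lookup and mechanism evaluation of the draft's Section 4.

def check_host(ip, domain, sender):
    # Stubbed policy table standing in for published SPF records.
    POLICY = {"mail.example.com": "pass", "bogus.example.net": "fail"}
    return POLICY.get(domain, "none")

def smtp_session_result(ip, helo, mail_from):
    # The draft allows the HELO identity to be tested first ...
    helo_result = check_host(ip, helo, helo)
    if helo_result == "fail":
        # ... and a HELO "fail" short-circuits the whole session, so
        # MAIL FROM is never evaluated -- the behavioral change at issue.
        return "fail"
    domain = mail_from.split("@", 1)[1]
    return check_host(ip, domain, mail_from)
```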

   An SPF record published at the zone cut for the domain will be used
   as a default for all subdomains within the zone (See Section 4.5.)
   Domain owners SHOULD publish SPF records for hosts used for the HELO
   and MAIL FROM identities instead of using the zone cut default
   because the fallback requires additional DNS lookups.  The zone cut
   default does reduce the need to publish SPF records for non-email
   related hosts, such as www.example.com.

Again, another change in algorithm.  This also means a TXT RR placed at
the zone apex may now become problematic, as it is unclear how it is
applied and to which identity.

   If no matching records are returned for the <domain>, the SPF client
   MUST find the Zone Cut as defined in [RFC2181] section 6 and repeat
   the above steps.  The <domain>'s zone origin is then searched for SPF
   records.  If an SPF record is found at the zone origin, the <domain>
   is set to the zone origin as if a "redirect" modifier was executed.

   If no matching records are returned for either search, an SPF client
   MUST assume that the domain makes no SPF declarations.  SPF
   processing MUST abort and return "None".

Yet again, another change in algorithm, isn't it?
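A rough sketch of the zone-cut fallback just quoted, with DNS replaced
by a dict so the control flow is visible; zone_cut() here is a toy
stand-in for the RFC 2181 section 6 procedure, and the record data is
made up:

```python
# Sketch of the zone-cut fallback. DNS is stubbed by a dict; the
# zone-cut computation is a deliberate oversimplification.

SPF_RECORDS = {"example.com": "v=spf1 mx -all"}  # record only at the apex

def zone_cut(domain):
    # Toy assumption: the zone cut is the last two labels.
    return ".".join(domain.split(".")[-2:])

def find_spf(domain):
    record = SPF_RECORDS.get(domain)
    if record is not None:
        return domain, record
    # No record at <domain>: retry at the zone cut, as if a
    # "redirect" modifier had been executed.
    cut = zone_cut(domain)
    if cut != domain and cut in SPF_RECORDS:
        return cut, SPF_RECORDS[cut]
    return None, "None"
```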

   This mechanism matches if <ip> is one of the MX hosts for a domain
   name.

   MX               = "mx"     [ ":" domain-spec ] [ dual-cidr-length ]

   check_host() first performs an MX lookup on the <target-name>.  Then
   it performs an address lookup on each MX name returned.  The <ip> is
   compared to each returned IP address.  To prevent DoS attacks, a
   limit of 10 MX names MUST be enforced (see Section 10).  If any
   address matches, the mechanism matches.

A limit change is an algorithmic change that could not possibly have
been foreseen by earlier publishers.  Who is responsible when their
mail goes missing?

   Note regarding implicit MXes: If the <target-name> has no MX records,
   check_host() MUST NOT pretend the target is its single MX, and MUST
   NOT default to an A lookup on the <target-name> directly.  This
   behavior breaks with the legacy "implicit MX" rule.  See [RFC2821]
   Section 5.  If such behavior is desired, the publisher should specify
   an "a" directive.

Would this be yet another algorithm change?
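The "no implicit MX" note can be illustrated with the same kind of
stub; the host names here are made up for the example:

```python
# Illustration of the "no implicit MX" rule quoted above.

MX_RRS = {}                                   # the target has no MX records
A_RRS = {"no-mx.example.org": ["192.0.2.7"]}  # ... but it has an A record

def mx_no_implicit(ip, target_name):
    # MUST NOT treat the target as its own single MX and MUST NOT
    # fall back to an A lookup: an empty MX answer means no-match,
    # even though the A record would have matched <ip>.
    for name in MX_RRS.get(target_name, []):
        if ip in A_RRS.get(name, []):
            return "match"
    return "no-match"
```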

  In pseudocode:

   sending-domain_names := ptr_lookup(sending-host_IP);
   if more than 10 sending-domain_names are found, use at most 10.
   for each name in (sending-domain_names) {
     IP_addresses := a_lookup(name);
     if the sending-domain_IP is one of the IP_addresses {
       validated-sending-domain_names += name;
     }
   }

Yet another change to the algorithm that introduces a new limit.
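Rendered as runnable Python with the DNS calls stubbed out (the PTR
and A data are illustrative), the quoted pseudocode becomes:

```python
# The draft's PTR-validation pseudocode, with DNS stubbed by dicts.

PTR = {"192.0.2.10": ["mail.example.com",
                      "alias.example.com",
                      "fake.bad-example.com"]}
A = {"mail.example.com": ["192.0.2.10"],
     "fake.bad-example.com": ["203.0.113.9"]}

def validated_domain_names(sending_host_ip):
    names = PTR.get(sending_host_ip, [])[:10]  # new limit: at most 10 names
    validated = []
    for name in names:
        # A name is validated only if a forward lookup confirms it
        # points back at the sending host's IP.
        if sending_host_ip in A.get(name, []):
            validated.append(name)
    return validated
```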

   Pseudocode:

   for each name in (validated-sending-domain_names) {
     if name ends in <domain-spec>, return match.
     if name is <domain-spec>, return match.
   }
   return no-match.

   This mechanism matches if the <target-name> is either an ancestor of
   a validated domain name, or if the <target-name> and a validated
   domain name are the same.  For example: "mail.example.com" is within
   the domain "example.com", but "mail.bad-example.com" is not.

   Note: Use of this mechanism is discouraged because it is slow, is not
   as reliable as other mechanisms in cases of DNS errors and it places
   a large burden on the arpa name servers.  If used, proper PTR records
   must be in place for the domain's hosts and the "ptr" mechanism
   should be one of the last mechanisms checked.

One wonders how this mechanism is to be discouraged?
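The match loop quoted above can be sketched with an explicit
label-boundary check, which the draft's own "bad-example.com" example
appears to require (names illustrative):

```python
# Sketch of the "ptr" match step: a name matches <domain-spec> only
# when it equals it or ends with it at a label boundary, so
# "mail.bad-example.com" does not match "example.com".

def ptr_match(validated_names, domain_spec):
    for name in validated_names:
        if name == domain_spec or name.endswith("." + domain_spec):
            return "match"
    return "no-match"
```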
 
   SPF implementations MUST limit the number of mechanism that do DNS
   lookups to at most 10, if this number is exceeded, a PermError MUST
   be returned.  The mechanisms that count against this limit are
   "include", "a", "mx", "ptr", "exists" and the "redirect" modifier.
   The "all", "ip4" and "ip6" mechanisms do not require DNS lookups and
   therefore do not count against this limit.  The "exp" modifier
   requires a DNS lookup, but it is not counted as it is used only in
   the case of errors.

   When evaluating the "mx" and "ptr" mechanisms, or the %{p} macro,
   there MUST be a limit of no more than 10 MX or PTR RRs looked up and
   checked.
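A hedged sketch of the limit rule just quoted, with record parsing
simplified to whitespace-separated terms; the records shown are
invented for the example:

```python
# Count the mechanisms (and "redirect") that require DNS lookups and
# return PermError past 10; "all", "ip4", "ip6" do not count.

DNS_COSTLY = {"include", "a", "mx", "ptr", "exists", "redirect"}

def count_dns_terms(record):
    count = 0
    for term in record.split()[1:]:          # skip "v=spf1"
        name = term.lstrip("+-~?").split(":")[0].split("=")[0]
        if name in DNS_COSTLY:
            count += 1
            if count > 10:
                return "PermError"
    return count
```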

I would also describe this as an algorithmic change that could not have
been anticipated by the publishers.  And the following regarding
forwarding:

   There are several possible ways that this authorization failure can
   be ameliorated.  If the owner of the external mailbox wishes to trust
   the forwarding service, they can direct the external mailbox's MTA to
   skip such tests when the client host belongs to the forwarding
   service.  Tests against some other identity may also be used to
   override the test against the "MAIL FROM" identity.

   For larger domains, it may not be possible to have a complete or
   accurate list of forwarding services used by the owners of the
   domain's mailboxes.  In such cases, white lists of generally
   recognized forwarding services could be employed.

Some other Identity?  Generally recognized forwarding services?  Would
this open the door for abusing the typical alma mater?  This is a mess.

This draft at long last recognizes that wildcard labels are a problem.
Why not also recognize the need to _change_ revisions when algorithms
change, and the need for a standardized prefix rather than a record
tag?  These are rather significant changes being made; why suggest
otherwise?

-Doug