ietf-mxcomp

Re: So here it is one year later...

2005-01-28 14:29:03

On Fri, 2005-01-28 at 13:20 -0600, wayne wrote:
On Fri, 2005-01-28 at 04:20 +0100, Frank Ellermann wrote:
Douglas Otis wrote:

applying a record against different algorithms than that
intended when published is inherently deleterious

Indeed.

Once again the algorithm changes and still this draft uses
the same labels and record identifiers?  Classic.

This "new" draft is technically nearer to the last pre-MARID
"old" draft than draft-lentczner-spf-00 was.  The latter was a
rather quick hack salvaging all syntax improvements found here
(= mxcomp) after MARID was killed and the old draft expired.

This draft is attempting to make significant algorithmic changes to the
initial draft, as well as to how these records are used.

Some examples:

   SPF clients MAY check the "HELO" identity by calling the check_host()
   function (Section 4) with the "HELO" identity as the <sender>.  If
   the HELO test returns a "fail", the overall result for the SMTP
   session is "fail", and there is no need to test the "MAIL FROM"
   identity.

A test against HELO that goes from unknown, to not tested, to fail?
This is a change in algorithm, as is the hunt for alternative records,
where a pass may not be based upon address compliance.  One wonders how
macros are applied when the same check_host routine is recycled.
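The HELO-first flow quoted above can be sketched as follows.  This is a minimal illustration, not any published implementation; the `check_host` callable and its result strings are assumptions standing in for the draft's check_host() function.

```python
# Hypothetical sketch of the quoted HELO-first evaluation order.
# check_host(ip, helo, sender) is assumed to return one of the
# draft's result strings, e.g. "pass" or "fail".
def evaluate_session(check_host, ip, helo, mail_from):
    """Apply the optional HELO test before the MAIL FROM test."""
    # A "fail" on the HELO identity ends the evaluation outright;
    # the MAIL FROM identity is never tested.
    if check_host(ip, helo, helo) == "fail":
        return "fail"
    # Otherwise fall through to the usual MAIL FROM check.
    return check_host(ip, helo, mail_from)
```

Note how a HELO "fail" short-circuits the session result, which is exactly the behavioral change being objected to here.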

This is not an example of changing semantics and/or algorithms.  As
pointed out by Frank, this is consistent with the pre-MARID
specification of SPF.  The problem is not that draft-schlitt-spf-00
changed things back, but that the IETF, via the MARID WG, allowed it
to change in the first place.

Here is a quote from the pre-MARID draft regarding the specified
processing algorithm limits.

6.2 Processing Limits
   
   During processing, an SPF client may perform additional SPF
   subqueries due to the Include mechanism and the Redirect modifier.

   SPF clients must be prepared to handle records that are set up
   incorrectly or maliciously.  SPF clients MUST perform loop detection,
   limit SPF recursion, or both.  If an SPF client chooses to limit
   recursion depth, then at least a total of 20 redirects and includes
   SHOULD be supported.  (This number should be enough for even the most
   complicated configurations.)

   If a loop is detected, or if more than 20 subqueries are triggered,
   an SPF client MAY abort the lookup and return the result "unknown".

   Regular non-recursive lookups due to mechanisms like "a" and "mx" or
   due to modifiers like "exp" do not count toward this total.

This new draft is NOT a compatible change.  It should also be noted
that this pre-MARID process could entail orders of magnitude more
lookups.
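The pre-MARID limit quoted above can be sketched as a counter over include/redirect subqueries.  This is an illustrative sketch only; the record representation (a `fetch` callable returning mechanism pairs) is an assumption of this example, not the draft's wire format.

```python
# Sketch of the quoted pre-MARID processing limit: only "include"
# and "redirect" subqueries count toward the cap of 20, and a loop
# or an overrun aborts with "unknown".
MAX_SUBQUERIES = 20

def walk_record(fetch, domain, count=0, seen=None):
    """fetch(domain) -> list of (mechanism, argument) pairs."""
    seen = set() if seen is None else seen
    if domain in seen:               # loop detection
        return "unknown", count
    seen.add(domain)
    for mech, arg in fetch(domain):
        if mech in ("include", "redirect"):
            count += 1
            if count > MAX_SUBQUERIES:
                return "unknown", count
            result, count = walk_record(fetch, arg, count, seen)
            if result == "unknown":
                return "unknown", count
        # "a", "mx", and "exp" lookups do not count toward the total
    return "none", count
```

The point of contention is that each counted subquery may itself trigger several uncounted "a"/"mx" lookups, which is why the total query load could be orders of magnitude larger than 20.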

   An SPF record published at the zone cut for the domain will be used
   as a default for all subdomains within the zone (See Section 4.5.)
   Domain owners SHOULD publish SPF records for hosts used for the HELO
   and MAIL FROM identities instead of using the zone cut default
   because the fallback requires additional DNS lookups.  The zone cut
   default does reduce the need to publish SPF records for non-email
   related hosts, such as www.example.com.

Again, another change in algorithm.  This also means a TXT RR placed at
the zone apex may now be problematic with respect to how it is applied
and against which identity.

This is not an example of changing semantics and/or algorithms.  As
pointed out by Frank, this is consistent with the pre-MARID
specification of SPF.  The problem is not that draft-schlitt-spf-00
changed things back, but that the IETF, via the MARID WG, allowed it
to change in the first place.

You should review which records would be used to evaluate the EHLO/HELO.
And yet another quote from the pre-MARID draft.

     The <responsible-sender> comes from the domain name of the "MAIL
     FROM" envelope sender.  When the envelope sender has no domain, a
     client MUST use the HELO domain instead.  If the HELO argument does
     not provide an FQDN, SPF processing terminates with "unknown".

Now the record applied against the EHLO/HELO identity could be found at
the zone apex AND at the EHLO/HELO domain.  The results of this check
have also changed and are not consistent.

   If no matching records are returned for the <domain>, the SPF client
   MUST find the Zone Cut as defined in [RFC2181] section 6 and repeat
   the above steps.  The <domain>'s zone origin is then searched for SPF
   records.  If an SPF record is found at the zone origin, the <domain>
   is set to the zone origin as if a "redirect" modifier was executed.

   If no matching records are returned for either search, an SPF client
   MUST assume that the domain makes no SPF declarations.  SPF
   processing MUST abort and return "None".

Yet again, another change in algorithm, isn't it?
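The zone-cut search quoted above can be sketched as a two-step lookup.  This is a hedged illustration: `lookup` and `zone_cut` are assumed helpers (the real zone cut must be found per RFC 2181 section 6, which a single label-stripping step only approximates).

```python
# Sketch of the quoted fallback: search <domain> first, then the
# zone origin, treating a record found there like a "redirect".
def find_record(lookup, zone_cut, domain):
    record = lookup(domain)
    if record is not None:
        return record
    # No record at <domain>: repeat the search at the zone origin.
    origin = zone_cut(domain)
    if origin and origin != domain:
        record = lookup(origin)
        if record is not None:
            return record
    # Neither search matched: the domain makes no SPF declarations.
    return None                      # caller returns "None"
```

The objection raised here is visible in the sketch: a record published at the zone apex for one purpose can now be picked up by the second step and applied against an identity it was never meant to cover.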

This is not an example of changing semantics and/or algorithms.  As
pointed out by Frank, this is consistent with the pre-MARID
specification of SPF.  The problem is not that draft-schlitt-spf-00
changed things back, but that the IETF, via the MARID WG, allowed it
to change in the first place.

This changes the pre-MARID, mid-MARID, and post-MARID algorithms!  The
publisher can NOT be sure which record will be applied against their
HELO.  It goes from virtually a don't-care to a failure.  The fall-back,
as it is called, will also likely conflict with records at the zone
apex.  Complain about mail lost as a result of the algorithms applied by
Sender-ID, but show some compunction regarding the effect of these
changes.  All this could be avoided by changing the record identifier.

   This mechanism matches if <ip> is one of the MX hosts for a domain
   name.

   MX               = "mx"     [ ":" domain-spec ] [ dual-cidr-length ]

   check_host() first performs an MX lookup on the <target-name>.  Then
   it performs an address lookup on each MX name returned.  The <ip> is
   compared to each returned IP address.  To prevent DoS attacks, a
   limit of 10 MX names MUST be enforced (see Section 10).  If any
   address matches, the mechanism matches.

A limit change is an algorithmic change that could not possibly be
foreseen by earlier publishers.  Who is responsible when their mail goes
missing?

While this is, indeed, a change from the last pre-MARID SPF spec, this
limit has been in place in my libspf2 implementation since long before
MARID existed, and hence is an existing practice.  As discussed *many*
times on SPF-discuss, I cannot find *any* legitimate emailers who
have come close to this limit.  This change only affects abusers and
misconfigured systems.

I guess those that are affected are therefore illegitimate?  To publish,
one must discover the limits in the code being used?  Code that did not
comply with some or any draft?  When are the next changes to the
algorithm going to be made?  How can a disruption be avoided when there
is NO MEANS to introduce a new version without removal of the prior?

And the following regarding forwarding:

   There are several possible ways that this authorization failure can
   be ameliorated.  If the owner of the external mailbox wishes to trust
   the forwarding service, they can direct the external mailbox's MTA to
   skip such tests when the client host belongs to the forwarding
   service.  Tests against some other identity may also be used to
   override the test against the "MAIL FROM" identity.

   For larger domains, it may not be possible to have a complete or
   accurate list of forwarding services used by the owners of the
   domain's mailboxes.  In such cases, white lists of generally
   recognized forwarding services could be employed.

Some other Identity?  Generally recognized forwarding services?  Would
this open the door for abusing the typical alma mater?  This is a mess.

As noted in Section 9, this is non-normative.  If you don't agree
with what is said, you are free to do anything you want.  This is just
for your information.

In essence, don't expect forwarding to function.  May I comment that
this is disruptive and a mess without the comment being called
misinformation, or myself being called a troll, for taking the time to
read the drafts presented on this reflector.

Regarding the debate about who uses SPF, spammers register more domains
than legitimate users, but at any instant, individual spammer domains
will not likely be used once consistently blocked by filters.  Looking
at the numbers from the perspective of nominal traffic, versus published
domains with a "bad" record, these ratios _should_ be different.

This draft at long last recognizes that wildcard labels are a problem.
Why not also recognize a need to _change_ revisions when algorithms
change, and the need for a standardized prefix rather than a record
tag?  These are rather significant changes being made; why suggest
otherwise?

The section on wildcards was added as part of the MARID process and
remained in the draft-schlitt-spf-00 version because it is useful.
The changes to the algorithms that were added as part of the MARID
process have been removed.  Changes to the algorithms from the
pre-MARID SPF specs have all been implemented by at least one system
and have been checked, via wide surveys, to ensure that they do not
conflict with the installed base of legitimate SPF records.

I find it very amusing that you are now complaining about the process
limits being a change, since you have long complained about the lack
of process limits in previous versions of the SPF spec.

I have not complained about a reduction in the limits, but rather a
change that imperils the publishers.  I was complaining that this draft
still does not allow a reasonable method to change processing algorithms
without potentially creating sizeable disruption.  This is due, in no
small part, to a false expectation that a wildcard label provided some
utility, which is the reason for usurping the use of the TXT RR.  This
was wrong and remains wrong.  Using a prefix on the record remains a
viable method to ensure SPF offers less disruption to users as well as
other protocols.  If I was right once...

-Doug