On Tue, 2005-03-22 at 12:46 -0500, Radu Hociung wrote:
> I like the idea of weights, but it is a purely academic exercise,
> because at run-time it is difficult or impossible to calculate the real
> expensiveness of a record. The checker can try estimating it, but it
> will probably not be nearly accurate enough to be useful. This is
> because of DNS caching.
I was not suggesting that SPF evaluators determine weights at runtime. I
was suggesting that the weights be fixed, relative to each other, as
part of the spec. The weight of a record would then be easily calculable
without needing to actually evaluate it. Resolving MXs to IPs takes some
fixed amount more work than resolving As, so count resolving MXs,
however complex they may be (an acceptable average is what would need to
be determined), as more expensive than resolving As.
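The fixed, spec-defined weighting could be sketched roughly like this. The weight values here are illustrative assumptions I'm making up for the example, not numbers from any spec; the point is only that the weight is computable from the record's text alone, without any DNS queries:

```python
# Illustrative static weights per SPF mechanism (assumed values, not part
# of any spec). MX is weighted higher than A because each MX host must
# itself be resolved to addresses.
MECHANISM_WEIGHTS = {
    "ip4": 0,      # literal address, no DNS query
    "ip6": 0,
    "a": 1,        # one address lookup
    "mx": 4,       # MX lookup plus an assumed average of three address lookups
    "ptr": 5,      # reverse lookup plus forward confirmation lookups
    "exists": 3,   # weighted higher: typically macro-driven, poorly cacheable
    "include": 2,  # fetching the included record, before evaluating it
}

def record_weight(mechanisms):
    """Sum the fixed weights of a record's mechanisms without evaluating it."""
    return sum(MECHANISM_WEIGHTS.get(m, 1) for m in mechanisms)

# "v=spf1 mx a ip4:192.0.2.0/24 -all" uses mechanisms mx, a, ip4:
print(record_weight(["mx", "a", "ip4"]))  # 4 + 1 + 0 = 5
```

A checker could then reject any record whose static weight exceeds the spec's limit, before doing any resolution at all.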
But the actual final load is largely influenced by the number of
queries, so using anything other than a query count as the measure (a
point Frank's email finally got through my thick skull) is not actually
any more useful.
> It is therefore difficult if not impossible for the checking code
> running on the MTA machines to estimate if the query it is about to do
> will result in a packet sent to the backbone or if it will be served
> from the cache.
Load is the reason that DNS caching and expirations exist. You've just
shot down your own position of needing a significantly lower limit. If
you're going to be processing a lot of email, figure out how to make
your DNS cache larger to avoid forced expirations.
> The average cost of the backbone traffic is proportional to the
> complexity we allow SPF to have. It can be estimated using weights for
> the different queries, but this is a design-time estimate, as at
> run-time it cannot be done reliably.
Again, I was not suggesting that the "estimate" happen at run-time.
> I still think the macros can cause far more backbone traffic than the
> mechanisms themselves, because macros are much less likely to be
> cacheable.
This is why I gave exists: a higher weight. There is greater potential,
as you've already demonstrated, that macros will explode into a large
number of (one-shot) queries that might force expiration of more useful
queries from your DNS cache. But what I forgot is that someone who
wants to do harm can use macros in any of the mechanisms, thereby
causing the same load. That is one of the reasons I've changed my mind
about using weights that are anything other than a count of queries.
Although, I suppose if you counted the use of a macro in a mechanism as
an addition to its weight, that would help.
But this makes the limiting formula more complex, and we've already
reached consensus that we don't want that :)
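For what it's worth, the additive macro penalty would be a small tweak to the static calculation. Again, the numbers below are assumptions for illustration only:

```python
# Assumed base weights and a flat macro penalty (illustrative values only).
BASE_WEIGHTS = {"a": 1, "mx": 4, "exists": 3}
MACRO_PENALTY = 3  # assumed extra cost: macro expansions are rarely cacheable

def mechanism_weight(mechanism, uses_macro):
    """Weight of one mechanism, counting macro use as an additive penalty."""
    return BASE_WEIGHTS.get(mechanism, 1) + (MACRO_PENALTY if uses_macro else 0)

# "exists:%{i}.example.com" expands per connecting IP, defeating the cache:
print(mechanism_weight("exists", uses_macro=True))   # 3 + 3 = 6
print(mechanism_weight("a", uses_macro=False))       # 1
```

Simple to compute, but it is exactly the kind of extra knob in the limiting formula that we agreed to avoid.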
--
Andy Bakun <spf(_at_)leave-it-to-grace(_dot_)com>