spf-discuss

RE: Re: Draft amendments on DNS lookup limits

2005-03-21 19:36:52
-----Original Message-----
From: owner-spf-discuss(_at_)v2(_dot_)listbox(_dot_)com
[mailto:owner-spf-discuss(_at_)v2(_dot_)listbox(_dot_)com]On Behalf Of Radu 
Hociung
Sent: Monday, March 21, 2005 6:13 PM
To: spf-discuss(_at_)v2(_dot_)listbox(_dot_)com
Subject: Re: [spf-discuss] Re: Draft amendments on DNS lookup limits




>>> FWIW I believe that Radu Hociung has made it abundantly clear that
>>> indiscriminate inclusion of policy from outside one's sphere of
>>> influence is largely a fool's errand.  Flattening has little to do
>>> with the folly.


>> And yet he insists that I must do that to conform to his version of a
>> reasonable number of DNS queries.

> No, I never did that... Please find where I said it's ok to flatten
> across administrative boundaries. I showed that it _could_ be done to
> temporarily alleviate an expensive record outside your control. The
> other reason I showed it is to make the point that the spfcompiler
> SHOULD and DOES distinguish between compiling and flattening, and that
> it does respect the boundaries of administrative control if you don't
> force it to -flatten.
>
> I also said that there are still bugs in it, so its current output may
> not be 100% correct.

> And I did give you the compiled record (without -flatten) as well.

> I did say that it should be strongly recommended (I believe I used the
> term "smacked") that your ISP reduce their 11-lookup list of A
> mechanisms down to a list of 10 IPs.
>
> Radu.
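Flattening, as discussed above, means freezing the addresses a mechanism currently resolves to directly into the record as ip4: literals. A minimal sketch of the idea, with a stub lookup table standing in for live DNS (the hostname and addresses below are invented for illustration, and qualifiers and other mechanism types are ignored to keep it short):

```python
# Sketch of "flattening": rewrite each a:<host> mechanism as the ip4:
# literals the host currently resolves to. STUB_DNS is hypothetical data
# standing in for a live DNS query.
STUB_DNS = {
    "mail.example.net": ["192.0.2.10", "192.0.2.11"],  # invented addresses
}

def flatten(record: str, resolve=STUB_DNS.get) -> str:
    out = []
    for term in record.split():
        if term.startswith("a:"):
            host = term[2:]
            # Freeze the host's current addresses into the record. If the
            # host is outside your administrative control, these literals
            # silently go stale when its addresses change.
            out.extend(f"ip4:{ip}" for ip in resolve(host, []))
        else:
            out.append(term)
    return " ".join(out)

print(flatten("v=spf1 a:mail.example.net -all"))
# v=spf1 ip4:192.0.2.10 ip4:192.0.11 -all is wrong; the actual output is
# v=spf1 ip4:192.0.2.10 ip4:192.0.2.11 -all
```

The staleness noted in the comment is exactly the administrative-boundary problem: the rewrite is cheap at query time but breaks silently when the other domain renumbers.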


>> When I say that 10 is too few because of ... and your answer is that
>> 10 is fine because I can publish a record that turns the included
>> record into ip addresses, how else am I to interpret that.

> This is how you should interpret it. Has anyone else interpreted what I
> said like Scott did? I apologize if my language is unclear at times.

No, I don't think it's unclear at all.  I say 10 is not enough.  You say yes
it is because you can do it this way.  Very clear.  I fail to understand how
you think you can have it both ways.

> A limit of 10 is fine, because you should smack your ISP
> (megapathdsl.net) to publish a less expensive record, and then, your
> include:megapathdsl.net mechanism will only cost 1 TXT query, instead of
> the 11 or so queries it requires now.

SHOULD is the correct word.  They should and I have asked, but in the
meantime, 10 is not enough.
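The arithmetic both sides are using can be made concrete. Under the draft's processing limit, the mechanisms a, mx, ptr, include and exists each cost at least one DNS query, while ip4, ip6 and all cost none. A rough counter (a simplification: it ignores macros, the extra address lookups mx triggers, and the recursive cost of the included record itself):

```python
# Count the DNS-query-costing terms in a single SPF record, per the
# processing limit under discussion: "a", "mx", "ptr", "include" and
# "exists" each trigger at least one lookup; "ip4", "ip6" and "all" are
# free. Nested include costs are deliberately not followed here.
COSTING = ("a", "mx", "ptr", "include", "exists")

def lookup_cost(record: str) -> int:
    cost = 0
    for term in record.split():
        # Strip any qualifier (+ - ~ ?), then drop the argument and any
        # CIDR suffix to get the bare mechanism name.
        name = term.lstrip("+-~?").split(":", 1)[0].split("/", 1)[0].lower()
        if name in COSTING:
            cost += 1
    return cost

# Eleven A mechanisms, as in the ISP record discussed above, blow a
# limit of 10; ten ip4 literals cost nothing.
expensive = "v=spf1 " + " ".join(f"a:mx{i}.example.net" for i in range(11)) + " -all"
cheap = "v=spf1 " + " ".join(f"ip4:192.0.2.{i}" for i in range(10)) + " -all"
print(lookup_cost(expensive))  # 11
print(lookup_cost(cheap))      # 0
```

On this counting, "smack the ISP" turns an 11-query include into a single TXT fetch, which is the trade being argued about.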

> I have stipulated that ISPs should always publish cheap records, because
> they HAVE CONTROL OF THEIR SERVERS. They rarely rely on the services of
> other domains. That's why they are ISPs. Just the way aol.com and
> earthlink.net have figured it out, maybe your ISP can too. I'm willing
> to bet that they will see the light. Eventually.

Yes, since it's also their DNS that gets more load from more expensive
records, I expect that they will see the light eventually.

> It boggles my mind that you provide help on spf-help, yet you cannot
> help your own ISP understand what SPF is about. I understand that _you_
> would have to offer the help, instead of waiting for them to ask.

Then I guess you are easily boggled.  Until Frank pointed out that my
current record is broken per the processing limits in the latest IETF draft,
I really hadn't paid that much attention to the processing limits (my
mistake).  I am now.  And I can help my own ISP.  I have in the past
(although I haven't gotten very far, I'm at least getting them to document
their oddities).

I think your goal of encouraging organizations to publish less expensive
records is a good one.  I think this discussion has had value (I discovered
that my record is broken, if nothing else).  I just think you are too focused
on one aspect of the overall design of SPF.

>> As I've said in another post, we COULD deprecate all the mechanisms
>> except ip4 and ip6 and then give RMX another try, but I don't think it
>> would be a good idea.

> This idea is entirely yours, I never alluded to anything like that, with
> the exception of suggesting that maybe the mx mechanism is not as useful
> as it first appeared. I don't think I even suggested that it be removed.
> I said I might support such an initiative.

You are correct, you didn't say that, but it does seem to be the logical
point towards which you are driving.

>> You can't have it both ways, either the limit has to be high enough to
>> support including complex policies that currently exist because
>> flattening across administrative boundaries is a bad idea, or your
>> limit is OK and flattening across administrative boundaries is OK too.
>>
>> Which do you want?

> I prefer if you read what I say a second time before replying. Perhaps
> try to understand what I mean.

I don't think there's any mystery about that.

Look, this is devolving into a flame war that I really hadn't intended.

I believe that processing limits need another look.  At this point, I like
neither what's in the current draft nor your proposal.

I'd like to see your stats run on a larger sample size (Wayne has a big list
of domains that he's used to estimate SPF publishing rates; perhaps use that).

I'd like to see an approach to limiting the DNS cost of records that doesn't
cross administrative boundaries (separate limits for the include).
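One way to read "separate limits for the include" (purely my sketch of the idea; nothing like this is in the draft) is to charge each record, top-level or included, against its own budget, so a publisher's compliance never depends on the cost of a record outside their control. The domains, records, and limit below are all hypothetical:

```python
# Hypothetical sketch of per-record processing limits: each record
# (top-level or included) gets its own lookup budget instead of sharing
# one global counter. RECORDS is stub zone data, not live DNS.
RECORDS = {
    "sender.example": ["a", "mx", "include:isp.example"],
    "isp.example": ["a", "a", "mx"],             # cost 3: within its budget
    "bloated.example": ["a", "a", "mx", "ptr"],  # cost 4: over budget
}

PER_RECORD_LIMIT = 3            # invented number, for illustration only
COSTING = ("a", "mx", "ptr", "exists")

def within_limits(domain: str) -> bool:
    terms = RECORDS.get(domain, [])
    cost = sum(1 for t in terms if t.split(":")[0] in COSTING)
    if cost > PER_RECORD_LIMIT:
        return False            # this record alone is too expensive
    # Each include is charged against the included domain's own budget,
    # so a cheap sender record can't be broken by someone else's record.
    return all(within_limits(t.split(":", 1)[1])
               for t in terms if t.startswith("include:"))

print(within_limits("sender.example"))   # True
print(within_limits("bloated.example"))  # False
```

The point of the design is in the recursion: the sender's own cost (2) is evaluated separately from the ISP's cost (3), so neither party has to flatten across the boundary to stay compliant.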

Limiting the impact on DNS load is good.  It's one of the major criticisms
of SPF.

If you focus too much on limiting DNS load, you are going to over-optimize
for that at the expense of administrative simplicity and record accuracy.

Can we move on?

Scott Kitterman