spf-discuss

Re: Re: draft-ietf-marid-protocol-03 - scope questions and comments

2004-09-29 23:23:14
Hello all. Some of this is review from older stuff on this list and on MARID, but there's nothing really wrong with that. Here's my 2 cents :)

On Sun, Sep 26, 2004 at 02:52:53PM -0600, Commerco WebMaster wrote:
> Should we look at employing an _spfX.domain.tld standard (where X =
> major version - e.g., 1, 2, etc),

Koen Martens:
The use of prefixes does not work nicely with wildcards (in my opinion,
that is; others have different opinions, varying from 'it works ok' to
'you shouldn't use wildcards anyway').

--Commerco WebMaster <Webmaster(_at_)Commerco(_dot_)Net> wrote:
Agree.  Wildcards require more planning and work to implement.  I am
guessing that some different DNS server implementations might address
wildcards differently or perhaps not support them at all.


The prefixing supporters were pretty vocal on MARID, but in the end the group consensus was to not use them. Prefixing is a cheap way to reuse TXT records without conflicting with the other, more legitimate uses of TXT. But I didn't like it, for a few reasons:

1. There aren't really any "more legitimate" uses of TXT. SPF is currently the #1 consumer of TXT records and other uses pale in comparison. TXT was originally defined as a place for human-readable text, but no real applications other than SPF have emerged.

2. If the eventual goal is to get an IN SPF type allocated for us, that negates the need for a prefix. If we go with no-prefix TXT, there's less conversion work and confusion later... the records will just work the same, and the name we look up is the same.

3. Underscore is legal in domain names but not in host names (A records). So underscore-prefixed names are fair game, and they have the clever feature of never conflicting with existing host names. But the underscore tends to throw people, because people are used to underscore being illegal. Some people still have old DNS servers that think underscore is illegal and won't let them load the zone file.

4. Wildcards. Prefixing doesn't really conflict with wildcards; it's just that each sort of cancels out the gains of the other. Wildcards let you cover large areas with the same data (splat a value for any name people can come up with), while prefixes let you be more specific and separate (fine-tune which TXT records to give out in which situation). If you really need wildcards, you can use them, and the answer will be the same for x.com and _spf.x.com.

Wildcards don't really do what most people want anyway... if you have www IN A 10.2.3.4 and * IN TXT "blah" then the * doesn't return any TXT for www, and you still have to have TXT for any A or MX records you have. Really all a wildcard TXT is good for is if you already have a wildcard A or MX ;)
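To make the wildcard behavior above concrete, here is a tiny sketch of the DNS wildcard synthesis rule (RFC 1034, section 4.3.3). The zone data mirrors the www/TXT example; names and the lookup helper are illustrative, not a real resolver:

```python
# Minimal model of DNS wildcard synthesis: a wildcard only answers for
# names that do NOT exist in the zone at all. Because "www" exists (it
# has an A record), "* IN TXT" contributes nothing for "www".

zone = {
    "www.example.com": {"A": ["10.2.3.4"]},
    "*.example.com": {"TXT": ['"blah"']},
}

def lookup(name, rrtype):
    if name in zone:                       # name exists: wildcard never applies,
        return zone[name].get(rrtype, [])  # even if this type is absent here
    wild = "*." + name.split(".", 1)[1]
    return zone.get(wild, {}).get(rrtype, [])

print(lookup("www.example.com", "TXT"))       # [] -- no TXT synthesized for www
print(lookup("anything.example.com", "TXT"))  # ['"blah"'] -- wildcard applies
```

So a wildcard TXT only covers names that have no records of any kind, which is exactly why you still need explicit TXT next to every A or MX.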

And for me the most powerful reason prefixes are not that great is:
5. System Admins are grownups. If a site really wants to publish both SPF TXT records and some other kind of TXT, that site can easily take responsibility for making sure the combined answer is small enough to be served correctly. Requiring a prefix makes more work for everyone, while requiring people who have conflicting TXT records to work it out on their own somehow makes more work only for them, and acceptable workarounds do exist. Most SPF records are small enough to share space in the packet with other TXT records, so SPF is not really consuming all of a scarce resource.
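A rough back-of-the-envelope check of that "small enough to share space" claim, assuming the classic 512-byte DNS/UDP limit. The per-record overhead and question-section size here are my approximations, not exact wire-format accounting:

```python
# Rough budget check: does a set of TXT strings plausibly fit in a classic
# 512-byte DNS/UDP response? Overheads are approximate (no name compression
# modeled; fixed per-record overhead assumed).

UDP_LIMIT = 512
HEADER_AND_QUESTION = 12 + 30      # 12-byte header + assumed ~30-byte question

def fits(txt_strings, per_record_overhead=30):
    size = HEADER_AND_QUESTION
    for s in txt_strings:
        size += per_record_overhead + len(s) + 1  # +1 length byte per string
    return size <= UDP_LIMIT

records = [
    "v=spf1 mx ptr ip4:192.0.2.0/24 -all",    # typical SPF record
    "some-other-service verification token",  # hypothetical co-resident TXT
]
print(fits(records))  # True -- both fit with plenty of room to spare
```

Even with generous padding, a typical SPF record plus one or two unrelated TXT strings stays well under the limit, which is the point: sharing is workable for the sites that need it.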


I think if there were more real-world applications for TXT records occurring in nature, I would probably be more supportive of prefixes, but the way things are now, I see prefixes as way overkill... it's a strategic answer to a tactical problem we don't quite have yet.


> Should we maintain code and SPF TXT records that presume upward
> compatibility in the specification, such that an spf1 aware application
> will simply look at the spf part of the v= and presume that it can
> always use spf1 syntax as was understood in spf1?  While I like this
> approach, I
..snip..
The problem, I think, is that at some point you end up publishing v=spf1,
v=spf2.0, v=spf2.1, v=spf3, etc., because you don't want to drop support
for older record types as long as people are requesting them. At some
point it becomes too much for the DNS protocol to handle...

Agree.  Which is why establishing a good answer to your original question
seems rather important.


We should probably decide how many record types people should be expected to support at once... if the revisions are coming slowly enough, maybe all the requests will be from 2.0 receivers by the time 3.0 comes out, and we can drop spf1 before starting to use spf3.

There are ways of redirecting the query to a second lookup, so you could have "v=spf1 redirect=_spf1.%{d}" if things start to get crowded.
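For clarity, here is what that redirect evaluates to. This sketch expands only the %{d} (current domain) macro; real SPF macro syntax defines quite a few more, and the _spf1 label is just the example name from above:

```python
# Sketch of SPF macro expansion for the redirect example.
# Only %{d} (the current domain) is handled here; real SPF macros
# include %{i}, %{s}, and others.

def expand(target, domain):
    return target.replace("%{d}", domain)

# "v=spf1 redirect=_spf1.%{d}" evaluated for example.com sends the
# client on to a second TXT lookup at:
print(expand("_spf1.%{d}", "example.com"))  # _spf1.example.com
```

So the crowded primary record just hands off to a second name, and the client does one more TXT query there.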

I don't think we want to be incremental, mostly because 1. new specs *might* actually come from different sources, and 2. new features might cause you to choose different defaults. For example, you could have "v=spf1 +mx +ptr ?all" because some of your mail doesn't come from mx or ptr hosts... but when 2.0 comes out you might have "v=spf2 +mx +ptr +signed:_key.%{d} -all" - which means that if it comes from outside the mx or ptr areas, it must be signed somehow or be rejected. New features might cause people to re-think their defaults; making the spf2 record incremental to the old spf1 record might save a little space, but it would prevent people from doing what they want in some cases.


Still, if the group considers the way versions are to roll, forward
compatible semantic design might become more of a consideration.  I am
still not clear as to the best way to implement in an spf1 / spf2.0
world... even less so as spfx.x versions come down the road.  My concern
stems from avoiding being painted into a corner because of implementation
choices.

Given that it's pretty easy to redirect to another TXT record, I'm not worried about conflicts popping up... we can probably deal with that when the time comes. Having multiple versions of SPF TXT with multiple scopes presents a possible problem later, whereas forcing things to be incremental presents an actual limitation now :)


> imagination, rather than technology, limits future SPF growth potential.
> It seems to me that after a point, asking DNS to deliver too much data is
> going to be a problem for some or all of today's DNS server
> implementations.

This has been discussed already for spf1, and was rejected because there
is a lot of overhead attached to making an HTTP connection (setting up a
TCP session requires all these handshake packets, whereas a DNS query is
just a one-shot UDP exchange: one request packet, one reply packet).
Perhaps we could think about some kind of UDP server that serves policy
information, but this greatly complicates the work to be done by SPF
publishers. The beauty at this point is that it's relatively simple to
publish SPF records, which allows for rapid deployment (at the sending
side at least).

I certainly agree that DNS was and is entirely the right choice for
delivering spf1 data.

In my original message, I had hoped to show by example that spf1 should
continue to be served in normal DNS UDP packets as always.  My thought
was that as the complexity of future versions necessitated, having
possible alternate paths to data via a provision for pointers in the
syntax might make sense to allow for getting data from other sources when
it would be too much to expect DNS to deliver such a volume of data.
Basically allowing the flexibility to extend via alternate paths if
needed.  A side benefit to doing this might also be to allow TXT records
to migrate users into a formal SPF service down the road.  e.g.,

IN TXT "v=spf1 ip4:1.2.3.4 -all"
IN TXT "spfx.x/need:service spf=1.2.3.123:nnn"

Perhaps this might be a transitional stepping stone to a more formal SPF
service specification down the road when it gets its own RR.


There are a few different ways to deal with data getting larger, and we haven't exactly exhausted (or even fully tapped) all of them. For example, you can use include/redirect to have multiple records chained together. Or you can use either exists or redirect to get more information from the client (like the actual IP address of the client attempting to send) and just give them a Yes/No answer.
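The "get more information from the client" trick deserves a concrete sketch: encode the connecting IP into the name being queried, so the yes/no answer lives in whether that name exists. This mirrors SPF's exists mechanism with a reversed-IP macro; the _spf label and simplified macro handling are illustrative:

```python
# Sketch of the "move data into the query" trick: encode the sending IP
# into the name being looked up, so the DNS server can answer yes/no
# per-IP instead of shipping the whole policy back in one big record.
# Mirrors SPF's "exists:%{ir}.%{d}" style, done by hand.

def exists_query_name(client_ip, domain):
    reversed_ip = ".".join(reversed(client_ip.split(".")))  # the %{ir} macro
    return f"{reversed_ip}._spf.{domain}"                   # _spf label is illustrative

# A mail from 192.0.2.25 checking example.com triggers a lookup for:
print(exists_query_name("192.0.2.25", "example.com"))
# 25.2.0.192._spf.example.com
```

The publisher's DNS server then only has to make that name resolve (or not), which keeps every individual response tiny no matter how large the underlying policy is.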

If it doesn't fit in a UDP packet, there is also DNS over TCP. I think it's not well supported now because for the most part nobody cares enough about it, but if it becomes actually needed, that's a possibility. (Not that it's better than HTTP for efficiency, but it's probably already built into your DNS server and just not working due to some firewall settings :) This is probably another case where "assume grown-ups run the server" can help: people who need more creative solutions can pay the cost of them without making everything more complicated for everyone else.
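For reference, the only framing difference on TCP is a two-byte length prefix in front of the same message format (RFC 1035, section 4.2.2). A sketch with a dummy message:

```python
import struct

# DNS over TCP sends the same message bytes as UDP, just prefixed with a
# two-byte big-endian length field (RFC 1035, section 4.2.2), so responses
# are no longer capped at 512 bytes.

def frame_for_tcp(dns_message: bytes) -> bytes:
    return struct.pack("!H", len(dns_message)) + dns_message

dummy = b"\x00" * 40                  # stand-in for a real DNS message
framed = frame_for_tcp(dummy)
print(framed[:2])                     # b'\x00(' -- length 40 (0x0028), big-endian
```

So the protocol side is trivial; the practical obstacles are firewalls and server configs, not the wire format.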

Some other UDP service besides DNS would be possible, but it would have mostly the same size limitations. Send more than one packet and you have to worry about ordering and retransmits and stuff like that, at which point you could just go to TCP.

A mechanism that redirects people to HTTP or custom TCP is possible. I don't really object to it... I am just pointing out that there are other options and we may not want to go down that road yet.



This all still gets back to your excellent original question about how
one should implement the next version(s)... Should it be concurrent,
incremental or something else?  As we understand, overburdening DNS with
large UDP or many UDP packets is going to cause its own problems (e.g.,
fallback to TCP, etc).  Deciding how future versions are to be
implemented for existing and possible future environments seems an
important issue to get resolved (or at least clear) and something I hope
will be discussed further by others on this list.


I think separate v=spf1 and v=spf2 records, handled independently, is probably OK for our needs. There is nothing really to stop clients from getting back both and just picking the one they need. 1.0 clients will just ignore the 2.0 info, and 2.0 clients can choose to act on the 2.0 info, or if only a 1.0 record is there, they can fall back into 1.0-compatible processing mode and use it. I think it's something to revisit if we ever end up with like 5 versions and variants at once and the records also get bigger.
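That selection rule is simple enough to sketch. This models a 2.0-capable client preferring a v=spf2 record and falling back to v=spf1; the record strings (including the hypothetical signed: syntax from earlier) are examples, not real spf2:

```python
# Sketch of the version-selection rule: a 2.0-capable client prefers a
# v=spf2 record and falls back to v=spf1; a 1.0 client simply never
# matches (and so ignores) records it doesn't recognize.

def pick_record(txt_records, understands=("v=spf2", "v=spf1")):
    for version in understands:            # in preference order
        for rec in txt_records:
            if rec.startswith(version + " "):
                return rec
    return None

answer = [
    "v=spf1 mx -all",
    "v=spf2 mx +signed:_key.%{d} -all",    # hypothetical 2.0 syntax from above
]
print(pick_record(answer))                 # 2.0 client picks the v=spf2 record
print(pick_record(answer, ("v=spf1",)))    # 1.0 client sees only v=spf1
```

Both generations of client query the same name and get the same answer; the selection happens entirely on the client side, which is what makes independent records workable.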

--
Greg Connor <gconnor(_at_)nekodojo(_dot_)org>