spf-discuss

Re: DNS matters & Wildcards

2005-05-08 07:56:54

On Sat, 7 May 2005, David MacQuigg wrote:

At 11:49 PM 5/6/2005 -0700, william(at)elan.net wrote:
On Fri, 6 May 2005, David MacQuigg wrote:

Original context:
Wildcards aren't incompatible with SPF, they just make SPF more complicated and prone to error. The other methods (CSV and DomainKeys) actually are incompatible, because they use the _namehack.

That statement is not correct. Neither CSV nor DK is any more or less
incompatible with DNS. And it would also be true that use of the _namehack
does not make the problem of wildcards any bigger or smaller than it is
with SPF. I had a long post on this subject during MARID already; I don't
have time to search the archives, but perhaps somebody else can.

As I understand it, a domain name like _client._smtp.*.mydomain.com is illegal in a zone file. This is what you would have to do to make CSV records apply to any name under mydomain.com. That's what I meant when I said wildcards were incompatible with CSV. Same for DomainKeys. SPF avoids the problem by putting its records under mydomain.com. Then *.mydomain.com is legal, and a query for SPF records on any name under mydomain.com should work.

Please insert AFAIK in front of every sentence above. I'm not pretending to be an expert. Just being brief. Corrections are welcome.

It seems significant to me that both CSV and DomainKeys are not worried about using _namehacks.
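(For concreteness, here is a rough sketch of the two lookup locations being
compared, using the dnspython library; the host name is made up and is only
meant to show where each scheme would look:)

  # Rough sketch (dnspython; the host name below is hypothetical).
  # SPF publishes a TXT record at the name itself, so a wildcard like
  # *.mydomain.com can cover it; CSV publishes an SRV record under a
  # _prefixed name below the host being checked.
  import dns.resolver

  def records(name, rrtype):
      """Return the records at name/rrtype, or [] if none exist."""
      try:
          return [r.to_text() for r in dns.resolver.resolve(name, rrtype)]
      except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
          return []

  host = "mail.mydomain.com"                      # hypothetical HELO name
  print(records(host, "TXT"))                     # where SPF looks
  print(records("_client._smtp." + host, "SRV"))  # where CSV looks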

Next excerpt:
That statement is not correct. Neither CSV nor DK is any more or less
incompatible with DNS. And it would also be true that use of the _namehack
does not make the problem of wildcards any bigger or smaller than it is
with SPF. I had a long post on this subject during MARID already; I don't
have time to search the archives, but perhaps somebody else can.

As I understand it, a domain name like _client._smtp.*.mydomain.com is illegal in a zone file.

No, that is not true. This has in fact already been discussed on this list
before; the misunderstanding comes from not knowing the difference between
hostnames and domain names. I'll refer you to the real source, however:
 http://ops.ietf.org/lists/namedroppers/namedroppers.2002/msg00591.html

Then *.mydomain.com is legal, and a query for SPF records on any name under mydomain.com should work.

*.mydomain.com is a legal hostname in which '*' carries no special meaning;
'*' applied to a domain name specification in a zone has special meaning as a wildcard.

In other words, you can query directly for '*' and you get the answer from
that specific record, but within the DNS protocol the wildcard is used as
the default record that is expanded for unknown names in the zone that are
not specifically delegated from the parent.

We're losing some important context here, so I copied it above. I think what you are saying is that a domain name like _client._smtp.*.mydomain.com is actually legal, but the '*' is treated as an ordinary character, not a wildcard.

Correct!

Then what I'm saying about the incompatibility of CSV with wildcards is still true. We don't get the desired effect. We can't
make the CSV records in _client._smtp apply to any name under mydomain.com.

That is true. But you can still enter *.mydomain.com and it would apply to
_client._smtp.1.mydomain.com and _client._smtp.2.mydomain.com. In other
words, if you use the _namehack and you want to use wildcards, you end up
having to drop the namehack and just put the record at the domain root.
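(To make that concrete, here is a rough toy sketch, not a real resolver and
with all names and records made up, of how the wildcard record gets handed
out for names that do not otherwise exist in the zone, including the
_prefixed ones:)

  # Toy model of wildcard synthesis in the spirit of RFC 1034 (simplified;
  # real resolution also handles delegations, closest enclosers, etc.).
  # All names and records below are made up.
  zone = {
      "mydomain.com.":     {"TXT": ['"v=spf1 mx -all"']},
      "*.mydomain.com.":   {"TXT": ['"v=spf1 mx -all"']},
      "www.mydomain.com.": {"A": ["192.0.2.1"]},
  }

  def lookup(name, rrtype):
      # Exact match first; '*' is an ordinary label here, so querying
      # "*.mydomain.com." directly returns that specific record.
      if name in zone:
          return zone[name].get(rrtype, [])
      # Otherwise fall back to the nearest enclosing wildcard, if any.
      labels = name.split(".")
      for i in range(1, len(labels) - 2):
          wildcard = "*." + ".".join(labels[i:])
          if wildcard in zone:
              return zone[wildcard].get(rrtype, [])
      return []

  print(lookup("host1.mydomain.com.", "TXT"))                # wildcard answers
  print(lookup("_client._smtp.host1.mydomain.com.", "TXT"))  # ...and also here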

Use of wildcards is rare, so while with wildcards you do not get any extra
benefit out of the _namehack, without wildcards you do, because you do not
impinge on the entire namespace of that record type (TXT).

To me, this is an acceptable limitation. It would not stop me from using a _namehack to provide a unique location for a TXT record.

Right. At some point I considered this limitation to be important, thought
it meant wildcards are a problem with _prefix, and gave long technical
arguments about it. Then David Blacka came along and quite simply pointed
out that while my technical understanding was correct, I was logically
wrong to conclude that using _prefix makes anything worse because of it.

Basically:

1. Without _prefix, with SPF/TXT:
 a. the 99% of users who do not use wildcards have a serious chance of collisions
 b. the 1% of users who do use wildcards have a serious chance of collisions

2. With _prefix, with SPF/TXT:
 a. the 99% of users who do not use wildcards have a low chance of collisions
 b. the 1% of users who do use wildcards have a serious chance of collisions

So which do you think is better - 1 or 2? Yes, the use of _prefix definitely
does not help those in b. who want to use wildcards, but it does not make
things any worse for them either, whereas for the other users the use of
_prefix is better!

Again, please let me know if I say something that's not true. I can't preface every sentence with AFAIK. By that I don't mean you need to find every literal mis-statement and send me off to hours of unproductive reading. Try to be more constructive: "I think what you mean is ..." and then re-phrase the statement so it is technically correct and helps us more quickly reach a conclusion. Don't assume the worst possible interpretation of everything I say.

I don't, but I'm worried about other readers. My quick recommendation to you is to avoid making statements and make posts that pose questions for discussion instead.

Let's compare to HTTP: redundancy, caching, mature technology. OK, maybe it's not UDP, but you know there are limitations to UDP as well...

Are you serious?  What would be the advantage of HTTP?

I did not say that my opinion is that using HTTP is better in this case;
I simply pointed out that in many cases it may not be worth it, whereas
it does offer some advantages in the case of having to create records on
the fly (i.e. using CGI) or make requests with multiple questions. If you
look at the evolution of Hadmut Danisch's RMX work, he changed from an RMX
DNS record to suggesting an RMX HTTP service.

Remember that the reason I answered you in the first place is that you made this remarkably bold statement:

| On Fri, 6 May 2005, David MacQuigg wrote in message
| <5.2.1.1.0.20050506134827.0432cea0@pop.mail.yahoo.com>:
|
| > DNS has worked remarkably well, and it really is the best way to
| > provide an email authentication service.

And I'm pointing out that it is wrong to make such a statement, because the
issue is complex, and even for the location of email policy records it is
not at all clear which kind of service would be best (and as no alternatives
to SPF have been implemented that do not use DNS, we cannot make even simple
comparisons).

Plus, the above statement was even more general, since you actually said
"email authentication service", whereas email authentication may not involve
DNS at all - for example, S/MIME and PGP are email authentication services
that do not use DNS; they rely on crypto certificate and key distribution
and authorization, which might be done through ODMR or PGP keyservers or in
other ways.

And the above is especially true of DNS because of its importance to the
stability of the Internet; there are a lot of other protocols that depend
on it, and overloading it may be more dangerous than overloading HTTP.
That said, again, nothing stops you from building a new protocol that is
UDP-based and works similarly to DNS. In fact SES chose to do that, and so
did the SIQ proposal.

If you were to design a new system like DNS, but specifically for email authentication, what would you do different?

It might involve specialized binary data that provides answers to specific
questions (i.e. a specific scope, a specific policy type, a specific email
user, etc.), might be able to do recursive sub-lookups on its own to other
service locations (i.e. automatically followed includes), and would not have
to suffer from the 512-byte limitation on the size of records.

Would the advantages of the new system overcome the cost of development, deployment, documentation and training, and correction of problems that are inevitable in any new system?

These are all valid points, and they are the reason why people often at
first propose to reuse an existing system. The question then becomes whether
faster deployment is worth providing a service that is limited in its design
and capabilities and may interfere with other operational aspects of the
Internet. The IETF effectively says it is not, but then for the IETF fast
deployment is almost never one of its goals.

I'll also note that using new types of records will in any case require
documentation and training - using a new service would just require more
documentation and more training.

You mentioned the problem of overloading DNS. Is there a way we can remove that load entirely, with less cost than developing a whole new system? Such a solution could put a nice upper limit on cost if our worst nightmare scenario actually happens.

It would be nice to ask the DNS protocol designers to think about this
question and see if they can come up with an answer. But unfortunately, as
you said, they are not being very active in wanting to work with us.

It's not a good idea to work out technical problems under political pressure;
it ends up biting you in the xxx later, and the result is a solution that is
not the best you could have for the problem at hand and may even cause more
problems. I'm happy that politicians express interest in anti-spam matters,
but I think they should stand back for a little bit, especially from
technical matters - the issue is known and is being worked on more seriously now.

I agree that development under pressure (political, commercial, whatever) is not the best technically. In the real world, we have to compromise with these pressures.

Typically not political pressures, although there are a few examples
otherwise (i.e. the Apollo program). But I've not seen them actively giving
money to us either; if they want to exercise their pressure, they'd be
expected to fund email authentication research and related programming &
development.

We may not have another year to work out all the details.

The latest statistics & polls show that the spam problem has been contained
and that the amounts seen by users this year are smaller than last year's.
It may not stay this way, but it does seem that filtering methods may have
bought us another year or two.

I believe the best solution to this problem (rapid deployment vs technical perfection) is a simple neutral standard that does not favor one or another authentication method but will allow the whole email industry to move forward.

It's possible that you misunderstand the meaning of a standard and how standardization on the Internet works. Plus, what we are often talking about are different authentication methods that involve different identities, and mix-ups are dangerous (although they can also be beneficial if one is careful; I'm writing a paper on this topic, so you'll hear more about it later).

Reputation services, spam-filter companies, forwarding services, ISPs - all should be working now on their part of the problem. We just need to let them know what an authentication header will look like. No, it won't be Received-SPF:. Think of another, more neutral word.

See 
http://www.ietf.org/internet-drafts/draft-kucherawy-sender-auth-header-02.txt

It's one of the least controversial aspects of email authentication and the
one we're most likely to agree on. As you can see, even now, of the email
authentication proposals, SID, DK, and META (and possibly one or two others)
all use this header, and SPF is the only one that does not!
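(For what it's worth, here is a rough sketch, with purely illustrative
values, of what recording a result in such a neutral header per that draft
could look like:)

  # Rough sketch (all values are illustrative) of recording a verification
  # result in the neutral header proposed in draft-kucherawy-sender-auth-header,
  # using Python's standard email library.
  from email.message import EmailMessage

  msg = EmailMessage()
  msg["From"] = "sender@example.net"
  msg["To"] = "recipient@example.com"
  msg.set_content("hello")

  # The verifying host records its result; the method could just as well
  # be dk, sender-id, etc. instead of spf.
  msg["Authentication-Results"] = (
      "mx.example.com; spf=pass smtp.mailfrom=sender@example.net")
  print(msg["Authentication-Results"])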

Previous attempts by me and Julian to advance this in the SPF community have failed. And I have already asked that it be mentioned in the SPF draft (or at the very least that the current "SHOULD" for the Received-SPF header be changed into a more general "SHOULD" for recording results in an email header, with a "MAY" for using Received-SPF for that).

I don't think the politicians are all bad.  They can get things done.
That problem with the web interface for DNS hosting services? Here's a million dollars. Fix it now.

So far they are not giving us any money (Meng asked and did not get it, for
example, and I know a couple of others who went after grants and it's not
going anywhere because the NSF grant budget has been cut almost 50%).

I also have some hope that you might see SPF forwarding and a couple of
other major problems resolved with new proposals by the end of the year for
sure; in fact it might even happen a lot sooner.

The problem with forwarders goes away when forwarders do their own authentication. That will be a requirement for compliance with the standard. Think big. Think outside the box.

That is how Meng views it, but it is wrong and actually not how standards
work. You can't impose a standard on an existing protocol that breaks
existing infrastructure and expect everyone to follow; you can only do it
as an update of the existing standard, giving everyone proper time to
transition (which would take years).

--
William Leibzon
Elan Networks
william@elan.net

