ietf-mxcomp

Re: Reuse of TXT : draft-ymbk-dns-choices-00.txt

2004-05-18 08:45:25

In <C6DDA43B91BFDA49AA2F1E473732113E5DBC9E@mou1wnexm05.vcorp.ad.vrsn.com>
"Hallam-Baker, Phillip" <pbaker@verisign.com> writes:

 They also may be of the opinion that MARID 
lacks the urgency or necessity to require such an architectural 
violation.

That would be a politically tone-deaf move. The external perception
is that the spam problem has been known about for ten years, has
been unacceptable for at least five and has only received attention
from the IETF when two competing solutions with strong external
                     ^^^  Three?
support bases are proposed.

Yeah, it kind of says something when even the US government is more
nimble and quicker to take action on a technology issue than the IETF.



3) Reuse an existing RR and rationally explain our decision 
against the stated concerns.

4) Define a new RR.

I hope that most find either 3 or 4 to be the proper path forward.

I would really like some evidence that option 4) is more than a
"non-starter" as someone else called it.  But, I agree that either 3)
or 4) is the proper path forward.
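
Just to make the difference concrete, here is a rough sketch (Python 3,
using a recent dnspython; the domain and the private-use TYPE65534 code
are placeholders I made up, since no type code has been assigned) of what
a verifier's lookup looks like under each option:

    import dns.resolver

    DOMAIN = "example.com"   # hypothetical sender domain

    # Option 3: reuse TXT.  The verifier gets back *every* TXT record
    # published at the name and has to pick out the SPF one itself.
    txt_rrset = dns.resolver.resolve(DOMAIN, "TXT")
    spf_records = [
        b"".join(rr.strings).decode()
        for rr in txt_rrset
        if b"".join(rr.strings).startswith(b"v=spf1")
    ]

    # Option 4: a dedicated RR type.  The query returns only the records
    # we asked for.  TYPE65534 is a private-use placeholder; the lookup
    # yields nothing unless the zone actually publishes such a record.
    try:
        new_rrset = list(dns.resolver.resolve(DOMAIN, "TYPE65534"))
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        new_rrset = []

The point is only that with TXT the verifier pulls down everything else
published at that name and filters, while a dedicated type returns exactly
what was asked for -- assuming the zones and tools out there can publish
it at all, which is the operational question below.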


There are two tactical deployment issues here:

1) Would option 3 lead to a delay due to IETF objections?

I guess I don't see a huge delay here.  De jure standards have
advantages over de facto standards, but there is always the
petition-Congress route to getting official blessings.  :-/  (For folks
in other countries, feel free to petition your own governments, but the
US is where most of the spammers are.  Maybe someone could just nuke
Florida.)


However, on the subject of de facto standards: two weeks ago in the
Jabber session, Pete Resnick raised concerns not just about the amount
of traffic that things like the SPF "mx:" mechanism create, but also
about issues such as congestion control.  Pete mentioned that he would
try to get some more info on this subject, but so far, I haven't heard
anyone explain this issue.
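
To put a rough number on the traffic side, here is a back-of-the-envelope
sketch (Python 3, recent dnspython; example.com is just a placeholder and
error handling is omitted) of the fan-out behind a single "mx:" mechanism:
one MX lookup plus one address lookup per listed MX host, on top of the
TXT lookup for the SPF record itself:

    import dns.resolver

    def count_mx_mechanism_queries(domain: str) -> int:
        queries = 1                       # the MX lookup itself
        mx_rrset = dns.resolver.resolve(domain, "MX")
        for mx in mx_rrset:
            queries += 1                  # one A lookup per MX host
            dns.resolver.resolve(mx.exchange, "A")  # (plus AAAA in practice)
        return queries

    print(count_mx_mechanism_queries("example.com"))

Whether that fan-out, multiplied across every inbound message at a busy
site, actually hurts caches and authoritative servers is exactly the kind
of thing I'd like to see explained.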

I have looked at this issue, and I don't see a problem, but I am far
from a DNS expert.  SPF continues to be deployed and, given its
momentum, is likely to keep being deployed for at least another six
months or so even if this WG creates a standard.  So, I think this is
a serious issue.  What problems, if any, is SPF causing for the DNS
infrastructure/operations?



2) Would option 4 lead to a delay due to operational constraints?

This, I think, is key.  If strong evidence can be shown that creating
a new RR would not cause any significant delays, that would be the way
to go.  So far, all the evidence that I've seen points in the wrong
direction. :-<
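
For what it's worth, here is one small piece of the kind of evidence I
mean: a sketch (Python 3, recent dnspython; the resolver addresses and
the private-use TYPE65534 code are placeholders) that probes whether a
given recursive resolver cleanly passes through a query for an RR type it
has never heard of.  NOERROR back is the good case; FORMERR, SERVFAIL, or
a timeout is the sort of breakage that would delay option 4.

    import dns.exception
    import dns.message
    import dns.query
    import dns.rcode

    def probe(resolver_ip: str, domain: str = "example.com") -> str:
        # Ask the resolver for an RR type it almost certainly doesn't know.
        q = dns.message.make_query(domain, "TYPE65534")
        try:
            r = dns.query.udp(q, resolver_ip, timeout=5)
            return dns.rcode.to_text(r.rcode())
        except dns.exception.Timeout:
            return "TIMEOUT"

    for ip in ("192.0.2.1", "192.0.2.2"):   # substitute real resolver IPs
        print(ip, probe(ip))

Of course, resolvers are only one link in the chain; provisioning systems
and zone-management tools are the parts I have no good way to measure
from here.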


Ok, this is my third post for the day, so no more until the meeting
tomorrow. 


-wayne