ietf-mxcomp

RE: On Extensibility in MARID Records

2004-06-18 14:03:31

On Fri, 2004-06-18 at 10:31, Jim Lyon wrote:
In discussing record sizes, Doug Otis constructs an argument about how
many address ranges or domain name references one could fit into a
512-byte DNS packet.  Summarizing his numbers, and filling in the blanks
from our actual proposals, we get:

               theoretical max   SPF today   XML as proposed
               ---------------   ---------   ---------------
Address Ranges              21          20                17
Domain Names                18          17                14

As I said, not all names are the same size, but the table does show the
mischief a single record could cause by invoking this number of
redirects.
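The counts in the table above can be reproduced with a back-of-envelope calculation.  The sketch below assumes illustrative sizes for the DNS header, question section, and per-mechanism text; none of these figures come from the thread itself, only the 512-byte ceiling does.

```python
# Back-of-envelope estimate of how many SPF "ip4:" mechanisms fit in a
# 512-byte UDP DNS response.  The overhead sizes are assumptions for
# illustration, not figures taken from the thread.

PACKET_LIMIT = 512          # classic UDP DNS payload ceiling
HEADER = 12                 # fixed DNS header
QUESTION = 30               # assumed question section for a typical name
RECORD_OVERHEAD = 12        # assumed name pointer + type/class/TTL/rdlength

def max_mechanisms(per_entry: int, prefix: int = len("v=spf1 ")) -> int:
    """How many fixed-size mechanisms fit in the remaining TXT space."""
    budget = PACKET_LIMIT - HEADER - QUESTION - RECORD_OVERHEAD - prefix
    return budget // per_entry

# "ip4:203.0.113.0/24 " is 19 bytes; assume ~21 with separators/quoting.
ranges = max_mechanisms(21)
# "include:example.net " style name references, assumed ~25 bytes each.
names = max_mechanisms(25)
print(ranges, names)        # lands near the table's theoretical maxima
```

With these assumed overheads the arithmetic comes out at 21 address ranges and 18 name references, matching the "theoretical max" column; XML namespace declarations eat into the same budget, which is the point under dispute.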

While I think that Doug believes this damns XML syntax, it actually
shows how reasonable XML syntax is.  There are probably only a handful
of domains on the face of the earth that send mail from more than 17
different address ranges.  Since that handful constitutes the largest
ISPs, any effort spent fetching their records (possibly through a couple
of indirections) can be amortized over a large number of mail messages.
(Doug's claims about bandwidth spent fetching records exceeding
bandwidth spent receiving email fail on this point.)

This fails to consider the number of queries required, and it assumes
that only addresses would be referenced in these smaller organizations.
Just as the large mail sites will require sequential queries (which do
get cached), the many smaller sites will have equally many reasons to
list more than a single record to be retrieved.  I use several mail
addresses, but have the ability to send from only a few.  If forced by
closed lists, it may become the norm for each site to reference more
than one domain in its record.  Do you see a potential problem yet?
Now add the "report mechanism", the "spy on remote users mechanism",
the "where to mail your problem" mechanism, etc.  These quickly become
more records linked into this nearly endless DNS chain.  No one forced
them to add this stuff, but it is the consumer who must swallow this
now serialized stone soup.

Doug then goes on with a bunch of calculations that show that, for
typical sized organizations, they're nowhere close to the limit.

This concept clearly goes beyond a reasonable limit.  How can this
mechanism be employed in transit when there is no consideration of the
workload generated or the problems created?  With many of these lists
likely to remain open, to avoid the endless problems a closed list
would cause, there will be no benefit whatsoever for those that provide
mail service in implementing this expensive mechanism.

He then says:
The space that is claimed to be available will be consumed by
perhaps a 60 byte XML namespace declaration. This is to allow
vendors the ability to "innovate" and there would be no review
of these declarations or associated payload. (A very bad idea
in my view.)
and later
Okay, now you pick two and then the next vendor picks a different
two.  These great innovations don't fit without expecting these
records to chain and chain and chain and chain and chain and
chain and chain and chain...

This shows several misconceptions:

1. Vendors don't force anyone to publish any extensions.  If there
   are vendor-promulgated extensions, presumably a domain will only
   publish a record that uses them if that domain sees some value
   in it.

Consuming this serialized stuff will be where the problems occur.  Now
that you have placed it well beyond any standards process to control,
what is left but to abandon the entire mechanism?  Might as well; it
will never fly.

2. The current spec is carefully written to allow IETF-standardized
   extensions with no penalty.  This sounds like the right balance
   of burdens to me:  the standard way is cheap, and the non-
   standard way costs.

So the standards process can dog-pile on as well as anyone else?

3. Regardless of what happens in this debate, there *will* be
   extensions.  A domain that decides it's useful to publish
   the night-shift janitor's Hilbert number will stick
       night-shift-janitors-Hilbert-number=7
   onto the back of their SPF record.  No standard can keep
   this from happening.  Indeed, domains experimenting with
   doing this is exactly what leads to follow-on standards.
   A major point of XML is that it provides a way for
   independent organizations to do this, *without needing
   to first coordinate with each other*.

Use an SRV record and then no one will be "sticking" in anything that
was not specified.  How is this dog-pile a good thing?

If it becomes the norm for domains to include other domains in the
description of outbound mail, to accommodate the desire of users to
send from their preferred addresses, then even if these lists are
expressed as closed, the query process may never converge.  It becomes
an endless search for the next record.  This in turn implies a need for
search algorithms that handle great depths of recursion and
redirection.  Of course, loops must be detected, and some limit must be
assigned to the depth this process may extend.  If I point to domain X
because I have a mail address there, then find they have pointed to six
other domains because of their relationships, and each of these points
to yet more domains, where does it end?
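The record-chasing behavior described above can be sketched as a depth-limited walk with loop detection.  The zone data and the fetch step below are hypothetical stand-ins for real DNS lookups; the point is that every newly referenced domain costs another query, and that both a visited set and a hard depth limit are required for the search to terminate.

```python
# Sketch of the record-chasing problem: a depth-limited walk over
# redirecting records with loop detection.  RECORDS is hypothetical
# zone data (domain -> referenced domains), not real DNS.

RECORDS = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example", "d.example"],
    "c.example": ["a.example"],  # loops back to the start
    "d.example": [],
}

def chase(domain, depth_limit=10, seen=None, queries=None):
    """Return (total queries issued, True if the depth limit was hit)."""
    if seen is None:
        seen, queries = set(), [0]
    if domain in seen:            # loop detection: already fetched
        return queries[0], False
    if depth_limit == 0:          # hard stop, as the text says must exist
        return queries[0], True
    seen.add(domain)
    queries[0] += 1               # one DNS query per newly seen domain
    truncated = False
    for ref in RECORDS.get(domain, []):
        _, hit = chase(ref, depth_limit - 1, seen, queries)
        truncated = truncated or hit
    return queries[0], truncated

total, truncated = chase("a.example")
print(total, truncated)           # every reachable record cost a query
```

Even in this tiny four-domain example, one sending domain triggers a query for every record reachable through the chain; without the visited set, the a→c→a loop would never terminate.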

Now you want to add to this by suggesting it is okay to grow these
records to experiment and innovate without coordinating with any other
vendor?  This process accomplishes nothing if it is only somewhat
possible at the MUA.  It does not deter any of the common abuses, it
does not allow any follow-up should there be a problem, and it does not
mitigate any of the damage.

Again, the Fenton proposal for ensuring the identity of the user solves
this problem in a much cleaner fashion.  To prevent the abuse,
authenticate the MTA using a simple SRV record: one well-defined and
controlled query, once per session.
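The single-query alternative can be sketched as follows.  The service label `_smtp._tcp` and the in-memory zone table are illustrative assumptions (not taken from any proposal in this thread); the point is that the receiving MTA forms one fixed owner name and resolves it once, with no chaining.

```python
# Minimal sketch of "one well-defined query per session": the receiver
# builds a single SRV-style owner name from the sending domain and
# looks it up once.  The "_smtp._tcp" label and ZONE data are
# hypothetical stand-ins, not part of any actual specification here.

def srv_name(domain: str, service: str = "_smtp", proto: str = "_tcp") -> str:
    """Build the SRV owner name, e.g. _smtp._tcp.example.com."""
    return f"{service}.{proto}.{domain}"

# Hypothetical resolver table: (priority, weight, port, target) tuples,
# the standard SRV RDATA shape.
ZONE = {
    "_smtp._tcp.example.com": [(0, 1, 25, "mta.example.com")],
}

def authorized_hosts(domain: str) -> list[str]:
    """One lookup, no redirection: targets authorized to send mail."""
    return [target for *_, target in ZONE.get(srv_name(domain), [])]

print(authorized_hosts("example.com"))
```

Contrast this with the chained-record search: the query count here is exactly one per sending domain, regardless of how many other domains that domain has relationships with.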

-Doug




