On Thu, 2004-07-08 at 15:16, Greg Connor wrote:
--Douglas Otis <dotis@mail-abuse.org> wrote:
Could you explain the premise used for this assumption?
No. I am not making an assumption, just dismissing yours ;)
Based upon what? You should at least attempt to support this conjecture
as to the number of queries. If this is a valid position, then the
draft should reflect these constraints. You may notice that it does not.
Hmm, in that case I would be interested to see any information that
leads a reasonable person to believe that the relationship is not
linear. (Hint: y=20x is a linear relationship.)
In reference to the information added to the core document
http://www.ietf.org/internet-drafts/draft-ietf-marid-core-01.txt
Pg. 18:
5.4 Recursion Limitations
Evaluation of many of the mechanisms in section 5.1 will require
additional DNS lookups. To avoid infinite recursion, and to avoid
certain denial of service attacks, an MTA or other processor SHOULD
limit the total number of DNS lookups that it is willing to perform
in the course of a single authentication. Such a limit SHOULD allow
for at least 20 lookups. If such a limit is exceeded, the result of
authentication MUST be "hardError".
MTAs or other processors MAY also impose a limit on the maximum
amount of elapsed time to perform an authentication. Such a limit
SHOULD allow at least 10 seconds. If such a limit is exceeded, the
result of authentication SHOULD be "transientError".
Domains publishing records SHOULD keep the number of DNS lookups to
less than 20. Domains publishing records that are intended for use
as the target of "indirect" elements SHOULD keep the number of DNS
lookups to less than 10.
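The lookup limit quoted above amounts to little more than a counter wrapped around the resolver. A minimal sketch of how a processor might enforce it (here `resolve_fn` is a hypothetical stand-in for the actual DNS query function, and `HardError` corresponds to the draft's "hardError" result):

```python
class HardError(Exception):
    """Corresponds to the draft's "hardError" authentication result."""

class CountingResolver:
    """Wraps a resolver callable and enforces the draft's per-authentication
    lookup limit (SHOULD allow for at least 20 lookups)."""

    def __init__(self, resolve_fn, limit=20):
        self.resolve_fn = resolve_fn  # hypothetical DNS query callable
        self.limit = limit
        self.count = 0

    def lookup(self, name, rrtype):
        self.count += 1
        if self.count > self.limit:
            # Draft: if the limit is exceeded, the result MUST be "hardError".
            raise HardError(f"exceeded {self.limit} DNS lookups")
        return self.resolve_fn(name, rrtype)
```

This is only a sketch of the counting rule; a real implementation would also track the 10-lookup budget for records used as "indirect" targets and the elapsed-time limit.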
Assuming that Sender-ID records are able to limit the required queries
to 10, and that the "transientError" timeout is set at 10 seconds, what
will be the impact on network integrity?
The transport for SMTP uses TCP, but Sender-ID now interjects 10 UDP
queries per message that "must" occur, or the connection suffers a
temporary error event requiring a repeat of the message transfer. The
default for a resolver lookup timeout is typically set to 5 seconds (a
minimum of 2 seconds) as RFC 1035 recommends, where 10 seconds is
considered the worst case. Should there be another timeout, this limit
often doubles. The connection is nevertheless tasked with making 10 UDP
queries within 10 seconds.
Should the network suffer a 5% packet loss rate, then 1 packet will be
lost on average within these 10 lookups (each lookup being a query and
a response, 20 packets in all). This will invoke the 5-second timeout,
which then reduces the time available for the remaining queries
(possibly leaving half a second apiece to resolve each new query).
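The arithmetic here can be checked directly. Assuming each lookup consists of one UDP query and one response (20 packets for 10 lookups) and an independent 5% per-packet loss rate:

```python
# Expected packet loss across 10 DNS lookups at a 5% per-packet loss rate.
# Assumption: each lookup is one UDP query plus one response (2 packets).
p_loss = 0.05
lookups = 10
packets = lookups * 2

# On average, one packet is lost among the 20 exchanged.
expected_lost = packets * p_loss              # 1.0

# A lookup stalls on the 5 s resolver timeout if either its query or
# its response packet is lost.
p_lookup_fails = 1 - (1 - p_loss) ** 2        # 0.0975

# Probability that at least one of the 10 lookups hits the timeout.
p_any_timeout = 1 - (1 - p_lookup_fails) ** lookups   # ~0.64

# One 5 s timeout leaves 5 s of the 10 s budget for the other 9 queries.
remaining_per_query = (10 - 5) / (lookups - 1)        # ~0.56 s apiece

print(expected_lost, round(p_any_timeout, 2), round(remaining_per_query, 2))
```

Under these assumptions, roughly two out of three authentications would spend at least one 5-second timeout, and a single timeout leaves only about half a second per remaining query, consistent with the figures above.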
Depending upon the distribution of lost packets, the connection could be
lost after reception of the first message, only to be retried again
later; this could make a large transfer of messages impractical. TCP as
used for SMTP could cope with this packet loss rate, but because
Sender-ID adds these many queries with a short timeout, the integrity of
SMTP is significantly reduced.
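A rough Monte Carlo sketch makes the failure rate concrete. The model below assumes an independent 5% per-packet loss rate, a fixed 5-second resolver timeout with immediate retry, a nominal 0.1-second round trip for a successful lookup, and 10 sequential lookups against the 10-second "transientError" budget; these parameter values are illustrative assumptions, not figures from the draft.

```python
import random

random.seed(2004)

P_LOSS = 0.05    # assumed per-packet loss rate under congestion
RTT = 0.1        # assumed round-trip time for a successful lookup (s)
TIMEOUT = 5.0    # RFC 1035-style resolver timeout (s)
BUDGET = 10.0    # Sender-ID "transientError" limit (s)
LOOKUPS = 10

def lookup_time():
    """Time for one lookup: wait out a 5 s timeout and retry whenever
    either the query or the response packet is lost."""
    t = 0.0
    while random.random() < 1 - (1 - P_LOSS) ** 2:
        t += TIMEOUT
    return t + RTT

trials = 50_000
failures = sum(
    sum(lookup_time() for _ in range(LOOKUPS)) > BUDGET
    for _ in range(trials)
)
print(f"authentications exceeding 10 s: {failures / trials:.1%}")
```

Under these simplified assumptions, nearly a third of authentications blow the 10-second budget (two timeouts suffice), return "transientError", and force the message to be transferred again, which is exactly the amplification under congestion being described.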
If the timeouts for these DNS queries are restored to normal limits,
Sender-ID may dramatically reduce the performance of the SMTP server,
which is presumably the reason for imposing the hazardous time
constraint. The trade-off becomes either a possibly dramatic reduction
in the performance of the SMTP server, or a dramatic reduction in
network integrity for the SMTP server during times of network
congestion. The use of RFC 2822 identities for Sender-ID means messages
are transferred regardless of whether they are ultimately rejected. A
loss of integrity will then add to network congestion, possibly leading
to a collapse of the network due to the non-linear effects of
congestion, redundant transfers caused by Sender-ID, and the reduced
network integrity.
I interpret the above as "Given sufficiently bad networks and servers, DNS
queries might take a long time." Of course SenderID would be affected by
this. So would CSV.
I did not say sufficiently bad networks and servers. I said congestion
at a 5% loss rate. This rate does not change the behavior of congestion
algorithms significantly. As you may know, UDP traffic has no
congestion avoidance, and yet there could be a blizzard of UDP traffic
with Sender-ID. Add to this the repeated transfers of messages seen
during times of congestion, exacerbated by the reduced integrity of
SMTP when using Sender-ID. Breakdown of network throughput would be a
complex non-linear function in this scenario.
CSV would reject a connection without affecting the timeout on the DNS
query and compensate for DNS overhead by blocking transfer of all
unwanted messages. Nor would CSV require information to be repeated.
Nor would there be such a predominance of UDP traffic with CSV. But
this is not about CSV. This is about Sender-ID.
This does not show the relationship is non-linear. Do you need more
information as to what linear means, or are you ignoring the point and
spreading FUD on purpose?
You have not offered traffic-flow simulations to counter this claim.
You have not offered a premise for your assumptions regarding the
number of queries needed. An effective proponent would not resort to
personal berating to dissuade critics.
"The rare happiness of times, when we may think what we please, and
express what we think." Cornelius Tacitus
-Doug