On Oct 31, 2011, at 1:23 PM, Murray S. Kucherawy wrote:
> From: [mailto:owner-ietf-smtp@mail.imc.org] On Behalf Of Steve
> Sent: Monday, October 31, 2011 9:24 AM
> To: SMTP Interest Group
> Subject: Re: draft-atkins-smtp-traffic-control
>
>> Yahoo! is currently handling somewhat over 300,000 delivery attempts a
>> second. You don't need to increase the average cost of a mail delivery
>> (either by increasing the cost of a delivery attempt or by increasing
>> the number of delivery attempts per message) by much for it to add
>> up to significant costs.
>
> I believe it, but to be fair, they also have not come forward to indicate
> that this is hurting them. Thus, I have to conclude they can handle it just
> fine.
I'm not entirely sure they can, but that's not the issue I was discussing. I
was simply using delivery rates to one particular ISP to show that even if the
cost of a single delivery attempt is small, increasing that cost can add up to
significant system resource costs when multiplied across tens of billions of
messages a day.
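To make that concrete, here's a back-of-the-envelope sketch. The 300,000
attempts/second figure is from the quoted message; the extra 1 ms of work per
attempt is purely a hypothetical number I picked for illustration:

```python
# Back-of-the-envelope: a small per-attempt cost increase, multiplied out.
# Only the 300,000 attempts/sec figure comes from the thread; the 1 ms of
# extra work per attempt is a hypothetical illustration.
attempts_per_sec = 300_000
attempts_per_day = attempts_per_sec * 86_400          # ~26 billion attempts/day

extra_ms_per_attempt = 1                              # hypothetical added cost
extra_cpu_seconds = attempts_per_day * extra_ms_per_attempt / 1000
extra_cpu_days = extra_cpu_seconds / 86_400

print(f"{attempts_per_day:,} attempts/day")           # 25,920,000,000 attempts/day
print(f"~{extra_cpu_days:.0f} extra CPU-days per day of traffic")  # ~300
```

A single added millisecond per attempt works out to roughly 300 machine-days of extra work for every day of traffic at that volume, which is the "adds up to significant costs" point in a nutshell.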
Reducing delivery delay might be a good thing, if it can be done cheaply.
Nothing huge, but short delays and messages in order are better than long
delays and messages out of order.
Tony suggested that modifying SMTP clients to always retry 4xx responses at a
much higher rate than is currently done would be a solution to that, and one
that would be effectively free on the modern Internet. That's a decent
observation, for some SMTP servers.
But I think there are quite a few servers, particularly the ones we're
concerned about here, that are already close enough to the performance cliff
that they sometimes need to shed load, and for those, increasing retry rates
by an order of magnitude or two over what's currently done would not be
inconsequential.
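To put a rough number on "an order of magnitude or two", compare attempt
counts per deferred message under two retry schedules over a four-hour
deferral window. Both schedules are hypothetical examples, not any particular
MTA's defaults:

```python
# Attempts per deferred message while a server keeps answering 4xx for 4 hours.
# Both schedules below are illustrative assumptions, not real MTA defaults.
WINDOW = 4 * 3600  # seconds the server stays in a deferring state

def attempts(interval_for_attempt):
    """Count delivery attempts made within WINDOW, given a schedule function
    mapping attempt number -> seconds to wait before the next attempt."""
    t, n = 0, 0
    while t < WINDOW:
        n += 1
        t += interval_for_attempt(n)
    return n

# Conventional-style exponential backoff: 15 min doubling, capped at 1 hour.
conventional = attempts(lambda n: min(15 * 60 * 2 ** (n - 1), 3600))
# Aggressive fixed schedule: retry every minute.
aggressive = attempts(lambda n: 60)

print(conventional, aggressive)  # 6 240
```

Under these assumptions the aggressive client makes 240 attempts where the backoff client makes 6, a 40x increase per deferred message. Multiplied by the attempt volumes above, that's exactly the load a server already shedding 4xx responses can least afford.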