ietf-smtp

RE: Mail Data termination

2011-08-18 17:21:32


-----Original Message-----
From: owner-ietf-smtp@mail.imc.org
[mailto:owner-ietf-smtp@mail.imc.org] On Behalf Of Hector
Santos
Sent: Thursday, August 18, 2011 12:32 PM
To: ietf-smtp@imc.org
Subject: Re: Mail Data termination
Subject: Re: Mail Data termination

First, a server may impose receiver loading limits, and a client with a
tenacity for taking up more CPU time than it needs can get itself flagged.
Using connection caching to reduce the client's own redundancy without
considering the load it places on receivers is a very selfish engineering
consideration. Chewing up a channel unnecessarily for 5 minutes, denying
other clients connection access, creates system availability and
scalability problems.

Second, it rests on the dangerous presumption that a server is FIXED on
using 5 minutes for its idle timeout. 5 minutes is a SHOULD, not a MUST,
and for a client to DEPEND on a SERVER using 5 minutes is short-sighted
engineering.
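The point above suggests a defensive pattern: a client reusing a cached
connection must assume the server may already have closed it, since the
5-minute idle timeout is only a SHOULD. A minimal sketch (names and
structure are mine, not from the thread) is to try the cached connection
first and fall back to a fresh one on any failure:

```python
# Hypothetical defensive-reuse sketch: never treat a cached SMTP
# connection as guaranteed-alive; reconnect once if delivery fails.

class ConnectionClosed(Exception):
    """Raised (in this sketch) when the server has dropped the connection."""
    pass

def send_on(cached_conn, connect, deliver):
    """Try the cached connection; on failure, reconnect once and retry.

    Returns (delivery_result, connection_actually_used).
    """
    if cached_conn is not None:
        try:
            return deliver(cached_conn), cached_conn
        except ConnectionClosed:
            pass  # server dropped us before its "expected" timeout: not an error
    fresh = connect()
    return deliver(fresh), fresh
```

The key design choice is that an early server-side close is handled as a
normal event, not a delivery failure, which is exactly what a client that
does not depend on the 5-minute figure has to do.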

The five-minute timeout is a SHOULD.  If you're resource constrained, I would
argue that issuing a 221 to the most idle open connection and closing it down
so the resources can be re-used is just fine.
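That reaping policy can be sketched in a few lines. This is a hypothetical
illustration (the `Session` class and reply text are my own, not from any
implementation discussed in the thread): when the server is resource
constrained, it picks the connection that has been idle the longest, sends
it a 221, and closes it so the slot can be reused.

```python
# Hypothetical server-side sketch: free a slot by closing the most-idle
# session with a 221 reply instead of refusing new clients.

class Session:
    def __init__(self, peer, last_activity):
        self.peer = peer
        self.last_activity = last_activity  # timestamp of last command
        self.replies = []                   # stand-in for the outbound socket
        self.open = True

    def send_reply(self, line):
        self.replies.append(line)

def reap_most_idle(sessions, now):
    """Close the open session idle the longest; return it, or None."""
    live = [s for s in sessions if s.open]
    if not live:
        return None
    victim = max(live, key=lambda s: now - s.last_activity)
    victim.send_reply("221 2.0.0 Idle connection closed to free resources")
    victim.open = False
    return victim
```

In a real server the 221 would be written to the socket before the close;
the structure of the decision (max over idle time, then reply, then close)
is the part this sketch illustrates.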

Agreed.

But more to the point, you're talking about very different time regimes here.
Caching a connection for anything even close to five minutes is almost
certainly counterproductive - SMTP connection establishment isn't *that*
difficult. But holding on to a connection for just a few seconds can be quite
beneficial for systems handling high volumes of mail.
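A cache built around that few-seconds regime is small. The sketch below is
my own illustration of the idea, with the idle limit, class name, and
injected clock all assumptions rather than anything from the thread: reuse
a connection only if it was last used within a short window, and tear down
anything staler, since re-establishing SMTP is cheap.

```python
import time

# Hypothetical client-side cache: hold a connection open only for a few
# seconds between messages, never for minutes.

class ConnectionCache:
    def __init__(self, max_idle=3.0, clock=time.monotonic):
        self.max_idle = max_idle   # seconds a cached connection may sit idle
        self.clock = clock         # injectable for testing
        self._cache = {}           # host -> (connection, last_used)

    def get(self, host, connect):
        """Return a fresh-enough cached connection, else call connect(host)."""
        entry = self._cache.pop(host, None)
        if entry is not None:
            conn, last_used = entry
            if self.clock() - last_used <= self.max_idle:
                return conn
            conn.close()           # too stale: tear it down, don't hold the slot
        return connect(host)

    def put(self, host, conn):
        """Return a connection to the cache after a successful delivery."""
        self._cache[host] = (conn, self.clock())
```

With `max_idle` on the order of seconds, a high-volume sender gets the
pipelining benefit of reuse while the receiver's slot is released almost
as soon as the burst of mail ends.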

I don't think the practice of connection caching is particularly selfish
compared to the cost of tearing the connection down and re-establishing it
with some frequency, when it's generally much cheaper for both the sender
and the receiver to just leave it open.

Exactly. Although it is necessarily up to the client to decide, the server also
benefits as long as the connection isn't cached for very long.

                                Ned
