I don't think one can tell whether a particular client is doing connection
caching by observing protocol activity such as the time between commands or
before disconnect. Nor do I think one can tell from those same data whether any
delays are caused by deliberate software action, CPU scheduling, network
latency, or anything else. I might draw that conclusion if I saw a client
sending periodic RSET or NOOP commands, but I haven't seen any evidence of
that.
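To illustrate the point, here is a minimal sketch of the kind of heuristic that would be needed: only a keepalive command (NOOP or RSET) sent after an idle gap suggests a connection-caching client, while idle time alone proves nothing. The trace format and the 30-second threshold are assumptions for illustration, not taken from any real server.

```python
from datetime import datetime, timedelta

# Hypothetical session trace: (timestamp, SMTP command) pairs.
# Both the format and the 30-second idle threshold are assumed here.
IDLE_THRESHOLD = timedelta(seconds=30)
KEEPALIVE_CMDS = {"NOOP", "RSET"}

def looks_like_connection_caching(trace):
    """Flag a session as a *possible* connection-caching client only if it
    sends a NOOP/RSET keepalive after an idle gap; mere idle time could be
    CPU scheduling, network latency, or anything else."""
    for (prev_ts, _), (ts, cmd) in zip(trace, trace[1:]):
        if ts - prev_ts >= IDLE_THRESHOLD and cmd.upper() in KEEPALIVE_CMDS:
            return True
    return False

trace = [
    (datetime(2011, 8, 27, 9, 0, 0), "MAIL FROM:<a@example.com>"),
    (datetime(2011, 8, 27, 9, 0, 1), "DATA"),
    (datetime(2011, 8, 27, 9, 1, 30), "NOOP"),  # keepalive after a long idle
    (datetime(2011, 8, 27, 9, 1, 31), "MAIL FROM:<b@example.com>"),
]
print(looks_like_connection_caching(trace))  # True
```

Even this heuristic only shows that a client *might* be caching; absent such keepalives, the data underdetermine the cause of the idle time.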
If someone thinks (for example) that Facebook might be connection caching and
wants to find out for sure, they could always go and ask.
I also have my doubts that the bad guys are implementing connection caching.
Spammers have such enormous compute and network power at their disposal (when
using botnets, at least) that they are probably not worried about the cost of
establishing new connections; caching them only adds complexity and makes their
code bigger. And, as far as I'm aware, bots typically try direct connections
rather than relaying, and they tend to carry their own MTAs and even TCP stacks
with them to avoid detection. That means they're not using the open source MTAs
that tend to implement connection caching, so the connections that do show some
idle time are probably not the bad guys.
So far, I haven't seen anything that warrants updating a standard. I suggest
that there are too many assumptions being made about the data to justify much
action from the IETF, even something informational.
This is starting to sound like a conversation the ASRG might enjoy rather than
this forum.
From: owner-ietf-smtp@mail.imc.org
[mailto:owner-ietf-smtp@mail.imc.org] On Behalf Of Keith Moore
Sent: Saturday, August 27, 2011 7:31 AM
To: Keith Moore
Cc: Hector Santos; ietf-smtp@imc.org
Subject: Re: FWIW Connection Sharing Stats
On Aug 27, 2011, at 9:53 AM, Keith Moore wrote:
On Aug 26, 2011, at 8:18 PM, Hector Santos wrote:
Among the spammers rejected with delays, many do try a new transaction with the
same failed result. Is this connection-sharing (CS) client behavior? It appears
that way.
Maybe the spammers are trying to deal with greylisting?
For that matter, I wonder if legitimate senders are trying to deal with
greylisting also. Maybe there are poor implementations of greylisting that
block legitimate traffic too often. Maybe there are some large-volume senders
that don't want to deal with having greylisted mail in their queues any longer
than necessary. If retrying after a few seconds works on some greylists, it's
not surprising that some senders would start doing it.
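The incentive described above can be sketched with a toy greylist. The minimum retry delay below is deliberately tiny (a few seconds rather than the minutes typical of real deployments) to show why a sender that retries almost immediately gets through a poorly tuned greylist; the data structures and thresholds are hypothetical, not from any real implementation.

```python
import time

# Toy greylisting sketch. The 5-second minimum retry delay is an assumed,
# deliberately short value: a greylist tuned this way rewards senders
# that retry after only a few seconds.
MIN_RETRY_DELAY = 5.0  # seconds; real deployments commonly use minutes

seen = {}  # (client_ip, sender, recipient) -> time of first attempt

def greylist_check(client_ip, sender, recipient, now=None):
    """Return an SMTP-style reply: 451 (try again later) on the first
    attempt from a tuple, 250 once the minimum delay has elapsed."""
    now = time.time() if now is None else now
    key = (client_ip, sender, recipient)
    first = seen.setdefault(key, now)
    if now - first < MIN_RETRY_DELAY:
        return "451 4.7.1 Greylisted, please retry later"
    return "250 OK"

# A sender that retries after a few seconds gets through:
print(greylist_check("192.0.2.1", "a@example.com", "b@example.org", now=0.0))  # 451 ...
print(greylist_check("192.0.2.1", "a@example.com", "b@example.org", now=6.0))  # 250 OK
```

If enough receivers run with delays this short, large-volume senders learn that a quick retry clears their queues, which would explain the retry patterns in the stats.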
Keith