ietf-smtp

Re: productivity?

2011-08-24 08:23:47

Paul Smith wrote:

Just because something isn't seen as an immediate problem doesn't mean it may not become so in the future. Hopefully we've learned from the previous things that have become abusive that it's better to strike early rather than leave it until it's too late.

+1.

We don't even know if the bad guys have already taken the idea a step further - you don't need to wait only after the 1st transaction; just wait at each state, perhaps 5 seconds at each one.

Just consider the maximum SMTP session time allowed by the SMTP standards using the minimum set of commands:

    EHLO   5 mins
    MAIL   5 mins
    RCPT   5 mins * N  where N is total recipients (default 1)
    DATA   5 mins
    EOD   10 mins
    QUIT   5 mins

  SMTP Standard Allowed Time = 30 + 5*N  (minutes)

With N=1, a client is allowed by the technical standard to use 35 minutes and still be 100% compliant!
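
To make that arithmetic concrete, here is a rough Python sketch using only the per-reply timeouts listed in the table above (the function name and the N=10 example are just for illustration):

    # Per-reply timeouts in minutes, as listed in the table above
    TIMEOUTS = {"EHLO": 5, "MAIL": 5, "RCPT": 5, "DATA": 5, "EOD": 10, "QUIT": 5}

    def max_session_minutes(recipients=1):
        # RCPT gets 5 minutes per recipient; every other wait happens once
        fixed = sum(v for cmd, v in TIMEOUTS.items() if cmd != "RCPT")
        return fixed + TIMEOUTS["RCPT"] * recipients

    print(max_session_minutes(1))   # 35
    print(max_session_minutes(10))  # 80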

The times were set for an era when things were slower and network-related issues were more frequent. My first TCP/IP programming book had an idiom (related to built-in error correction):

          There is no guarantee a packet will reach its end point,
          but if it does, it is guaranteed to be correct.

Another major point: early on there was more hubbing, hub-to-hub relaying, more path routing, UUCP, etc., and not everyone had SMTP servers, so we couldn't all go direct.

Quite frankly, it's easy to see why the timeouts are high when they were set in an age where telnetting to console-based client/server protocols was par for the course. It would be extremely irritating to be developing this stuff with short timeouts closing your interactive session! Even today, I can't imagine an internet C/S console-based protocol implementer not using telnet to test their wares. Can you imagine getting knocked out after even 1 minute?

So spammers have probably already exploited this, but without going overboard - they just use seconds. I have noticed the delays between rejected RCPT commands in my logs, with patterns like:

    C: RCPT TO: <FOOEY1>
    S: 550 Sorry Charlie! Not here!
    C: RCPT TO: <FOOEY2>            <<--- 2-5 seconds
    S: 550 Sorry Charlie! Not here!
    C: RCPT TO: <FOOEY3>            <<--- 2-5 seconds
    S: 550 Sorry Charlie! Not here!
    C: RCPT TO: <FOOEY4>            <<--- 2-5 seconds
    S: 550 Sorry Charlie! Not here!
    C: QUIT                         <<--- microsecs!

So this Connection Holding For Mail concept need not apply only after the 1st transaction. It can be spread out across the session.
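
To show what the server side of this could look like, here is a minimal Python sketch - not code from any real MTA - that enforces a per-command timeout far shorter than the standard allows. The 60-second value, port 2525, and the bare-bones replies are assumptions made purely for illustration:

    import socket

    PER_COMMAND_TIMEOUT = 60  # seconds; illustrative, far below the standard's 5 minutes

    def serve_once(listen_port=2525):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", listen_port))
        srv.listen(1)
        conn, addr = srv.accept()
        conn.settimeout(PER_COMMAND_TIMEOUT)  # applies to every recv() below
        conn.sendall(b"220 example.invalid ESMTP\r\n")
        try:
            while True:
                line = conn.recv(1024)        # raises socket.timeout if the client
                if not line:                  # idles too long between commands
                    break
                if line.upper().startswith(b"QUIT"):
                    conn.sendall(b"221 Bye\r\n")
                    break
                conn.sendall(b"250 OK\r\n")   # placeholder reply for the sketch
        except socket.timeout:
            conn.sendall(b"421 Timeout, closing connection\r\n")
        finally:
            conn.close()
            srv.close()

A real implementation would parse commands and track state properly; the point is only that settimeout() puts the server, not the client, in control of how long each wait can last.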

Caching connections to or from a small company's mail server is going to be a waste of their resources 99.99% of the time, as it is extremely unlikely that there'll be two messages soon after each other between the same two MTAs. It's a different matter if you're talking about Gmail sending to Hotmail where connection caching could be a huge benefit, but most mail servers aren't in that situation. In my view connection caching should always be disabled by default, and allow the admins to set up cached connections to other specific domains (or automate it based on how many times a cached connection would have been useful). Having a default of always caching connections is just wrong.

Facebook is currently the #1 sender of mail on our server. Each session has a near-perfect 5-second delay before QUIT, and I have yet to see one with a 2nd transaction. It has definitely contributed to the increased average, and this month we have had 4-5 DoS attacks and a serious amount of sparse 421s. Who knows whether Facebook, along with the others holding idle for long periods, has contributed to the 421s. Without a detailed analysis it's hard to tell. But what I thought was par for the course in dealing with this Spam World, now I am wondering how much of it is due to this Connection Sharing software.

I do realise that there are people here who think connection caching is good 100% of the time, so I'm probably flogging a dead horse. Maybe because I'm coming at it from the viewpoint of companies with 5-50 users, rather than from ISPs or big companies, I have a different perspective.

Paul, don't sell yourself short. There would be no internet (as we know it today) if it weren't for the millions of smaller systems combined, compared to the relatively far fewer large mail handlers. So we are all in this together and we are all important. It's everyone's problem; some will just be more sensitive to it than others. For companies of all sizes, TCO is always important. More computers mean more resources, and the direction is to scale up, with fewer computers being a very big way to lower your TCO. A "What If" analysis will show whether this is a cost factor and a potential cost reduction, and once it is highlighted, most CEOs/CTOs and operations managers will look into it. I wouldn't be surprised if it equates to a cost factor measured in CPU+RAM simply by lowering the timeouts allowed by SMTP.

(PS - for the discussion about ephemeral ports, reusing connections is only useful if you can. If you hold a connection open for 5 seconds longer than necessary and don't reuse it, then that's 5 seconds longer before that port number can be reused - so ephemeral port limits can be an argument against connection caching as well)

For socket operations that cannot (or do not) use the SO_REUSEADDR socket option to reuse a port sitting in the TIME_WAIT state, or where closing the socket first is what puts it into TIME_WAIT, yes, the extra delays increase the potential for port exhaustion.
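
For the listener side, a minimal sketch of the option in question (port 2525 is just an example value):

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set before bind(): lets a restarted listener rebind the port even if
    # old connections to it are still sitting in TIME_WAIT
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 2525))
    srv.listen(5)
    srv.close()

For outbound connections the ephemeral port is chosen by the stack, so this option doesn't really help there; holding a connection open longer simply keeps that port tied up longer, which is the point of the PS above.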

--
Sincerely

Hector Santos
http://www.santronics.com

