From: "Harald Tveit Alvestrand" <harald(_at_)alvestrand(_dot_)no>
hmm.... "scaling is the ultimate problem", says Mike O'Dell.
If you take 30 seconds to accept a message, and have the default Postfix
process limit of 100, you can accept up to 200 messages a minute - or an
average of roughly 3.3 per second.
If you do this, and others follow your example, you're imposing a limit on
the SENDING MTA of about 3.3 messages per second (if it's Postfix).
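Spelling out the arithmetic as a sketch (the 100-process limit and 30-second delay are the figures under discussion; the function name is mine):

```python
# Throughput ceiling when every receiving slot is held for the full delay.
# Figures from the discussion: Postfix default process limit of 100,
# 30 seconds spent accepting each message.
def max_messages_per_minute(process_limit, seconds_per_message):
    # Each process completes 60 / seconds_per_message messages per minute,
    # and process_limit of them run concurrently.
    return process_limit * (60 / seconds_per_message)

per_minute = max_messages_per_minute(100, 30)
print(per_minute)        # 200.0 messages per minute
print(per_minute / 60)   # about 3.3 messages per second
```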
If you have large resources on the sending side, you can open up
connections until you run out of process descriptors, port numbers, or
something else - but it's a heavy demand on a busy mail system to increase
the average transaction time from sub-seconds to more than 30 seconds.
I hope most of the recipient MTAs for IETF mail don't do this.
This is my opinion (of course).
Server load requirements are not dependent on client loading requirements.
Whether it's 5 MPS, 10 MPS, or even 1 MPS, the ultimate load rate is what
the server is designed or configured to handle, not what the client
defines. A system may not want to receive 5 MPS from just anyone.
Yet they need to work together. We need to accept that server
functionality is growing. Whether it is because of integration in the name
of improved client/server handshaking, security, authentication, payload
analysis, or what have you, it is a reality. It is happening; that's a given,
so as engineers we need to start thinking about it, not ignore it or
tell people "don't do it." That is simply not going to work. Keep in mind
that not everyone runs an SMTP operation where all transactions are
accepted for delayed analysis or verification in the name of scalability for
their particular setup. Improved hardware and OSes do allow for very
efficient job delegation.
I think that if a client is "slowed down" because of server delays, at the
very least we need to begin to train servers to provide feedback to the
client. That feedback is the SMTP response codes.
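A minimal sketch of that idea. The reply codes are standard SMTP (250 = OK, 450 = temporary failure, retry later; 421 = service unavailable, closing channel), but the thresholds, enhanced-status texts, and function name here are illustrative, not any particular MTA's behavior:

```python
# Hypothetical load-feedback policy: pick the SMTP reply to a MAIL FROM
# command based on how busy the server currently is, so the client learns
# to back off and retry instead of silently stalling.
def reply_to_mail_from(active_sessions, session_limit):
    if active_sessions >= session_limit:
        # Hard overload: refuse service and let the client retry later.
        return "421 4.3.2 Too busy; closing connection, try again later"
    if active_sessions >= 0.8 * session_limit:
        # Soft overload: defer this transaction; a well-behaved client
        # queues the message and retries after a delay.
        return "450 4.7.1 Server busy; please retry shortly"
    return "250 OK"

print(reply_to_mail_from(100, 100))  # 421 ...
print(reply_to_mail_from(85, 100))   # 450 ...
print(reply_to_mail_from(10, 100))   # 250 OK
```

The point of the sketch is only that the feedback channel already exists in the protocol: a loaded server does not have to stall the client, it can say so explicitly.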
Hector Santos, Santronics Software, Inc.