Am taking further followups to ietf-smtp@imc.org where it belongs...
On Mon, 06 Aug 2001 10:11:33 +1200, Franck Martin said:
> The first principle is not to store a mail part indefinitely, but to
> time it out in the queue. The parameter should be left configurable. Let's
> say that the mail sender has 5 minutes to resume the session, or it has to
> start from the beginning again. This would keep the queue from growing out
> of proportion because of errors.
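For concreteness, that timeout idea could be sketched roughly as below. This is a hypothetical Python sketch, not any real MTA's code; the session-id keying, the in-memory dict, and the helper names are all my own assumptions, with only the 5-minute default taken from the proposal.

```python
import time

# Hypothetical sketch: expire partial messages whose sender has not
# resumed within a configurable window (5 minutes in the proposal).
RESUME_TIMEOUT = 5 * 60  # seconds; should be left configurable

partial_queue = {}  # session_id -> (last_activity_timestamp, bytes_so_far)

def store_partial(session_id, data):
    """Record received bytes and refresh the session's activity time."""
    _, buf = partial_queue.get(session_id, (0, b""))
    partial_queue[session_id] = (time.time(), buf + data)

def expire_stale(now=None):
    """Drop partial messages whose resume window has elapsed."""
    now = time.time() if now is None else now
    stale = [sid for sid, (ts, _) in partial_queue.items()
             if now - ts > RESUME_TIMEOUT]
    for sid in stale:
        del partial_queue[sid]
    return stale
```

A periodic sweep calling `expire_stale()` would be what keeps the queue from growing without bound.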
Is there any evidence that it is common enough to have a network outage severe
enough that the TCP connection gets broken, but that within 5 minutes it's
back up again?
> The second principle is that most servers do a content analysis based on
> the line received. The mail server needs to know where it is in the
> session: are we still in the headers, or already in the content? Based on
> content analysis of the mail being received, the server may send an error
> back, like "Error: content rejected", "Malformed message", "Message too big".
> The server trashes the message and clears the session. This would take care
> of the SirCam virus, as on my system where I block attachments with .exe or
> .lnk extensions. This is done easily on the fly while receiving the e-mails.
> Here the e-mail does not need to be fully received; as soon as a
> condition is met, the session is closed.
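The line-by-line check being described could look something like the following. This is a minimal sketch under my own assumptions (a regex over `Content-Disposition`-style filename parameters, and a made-up reply string); real filters have to cope with MIME encoding tricks that a per-line regex misses.

```python
import re

# Hypothetical sketch: scan each received DATA line for attachment
# filenames with blocked extensions (.exe, .lnk), as the post describes.
BLOCKED = re.compile(r'filename="?[^"\r\n]+\.(exe|lnk)"?', re.IGNORECASE)

def check_line(line):
    """Return an SMTP error string if the line trips a filter, else None."""
    if BLOCKED.search(line):
        return "552 Error: content rejected"
    return None
```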
Closing the session on the fly is *very* anti-standard. Also, note that if
you close it on the fly, a standards-conforming MTA will *retry* it. Over and
over, as you keep closing the session on the fly. A much better solution is
to wait until the final '.' is received, and THEN do something interesting.
See RFC 2821, section 4.2.5 for details.
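The "wait for the final '.'" approach can be sketched like this. It's a hypothetical illustration, not RFC text: the placeholder policy check and reply strings are my own, but the key point is real, namely that a permanent 5xx reply issued *after* end-of-data tells a conforming client not to retry, whereas dropping the connection mid-transfer invites endless retries.

```python
def handle_data(lines):
    """Consume DATA lines; reply only after the terminating '.' line.

    A permanent 5xx issued here, after the whole message is in, tells a
    conforming client not to retry -- unlike closing the connection
    partway through the transfer.
    """
    rejected = False
    for line in lines:
        if line == ".":
            # End of data -- only NOW do we issue a reply.
            return "554 Transaction failed" if rejected else "250 OK"
        if ".exe" in line.lower():  # placeholder policy check
            rejected = True          # remember the verdict, keep reading
    return "451 Requested action aborted"  # connection ended early
```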
> I understand that a mail server with RESUME capability may need a bigger
> queue than one without, but HD space is cheap, and it saves a huge amount of
The problem is that it may be cheap to say "add another 500M to the spool"
for a *small* system. However, when you start talking about *large* mail
systems, scaling becomes an important factor. I spent some time at a recent
Sendmail workshop, and some of the people there were working on systems that
had to do a million deliveries an hour. At these levels, even saying "just move
it to another queue for later" becomes problematic - it takes a lot of I/O to
sync everything to disk...
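Some back-of-the-envelope arithmetic on that point. The two-syncs-per-message figure is my own assumption (say, queue file plus envelope metadata), just to show the order of magnitude:

```python
# Rough arithmetic for the scaling point above (assumed numbers).
deliveries_per_hour = 1_000_000
syncs_per_message = 2          # assumption: e.g. queue file + envelope
syncs_per_second = deliveries_per_hour * syncs_per_message / 3600
print(round(syncs_per_second))  # on the order of 556 synchronous writes/sec
```

Sustaining hundreds of fsync()s per second is exactly the kind of I/O load where "just move it to another queue" stops being free.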
And remember - these "big systems" are the ones you're probably having the
trouble talking to.
> So what do you think? Anybody that supports the idea and wants to develop it
Well.. I still think that the *proper* solution is to fix the TCP infrastructure
so that there isn't a NEED for a RESUME....
Operating Systems Analyst