The idea of running concurrent filters was never a consideration and,
in my view, would add loading, processing, timing and synchronization
overhead that isn't necessary. This is especially true given that the
current SMTP design does not lend itself to partial data buffering
and processing anyway.
Our DATA filter implementation evolved to include events: drops,
timeouts and what we call 'final' events.
Drop event filters handle the cases where the client dropped the line.
Timeout events were added to help with filters that were written
badly, i.e. taking too long; a global timer handles these to help
prevent DATA timeouts. Final events are always processed.
In short, the original sequentially ordered loading of filters was
not sufficient to handle the different kinds of filter dependency,
independence and sharing scenarios that arise when locally written
and 3rd-party filters are part of the mix.
Example:
[Filter List]
Filter #1 - Check Header Policy
Filter #2 - Check Body Policy
Filter #3 - Add some trace record
Filter #4 - Check DKIM POLICY
Filter #5 - Check other Header/body Policy
....
Filter #N
[Final List] # always run this
Filter #F1 - Always create a backup
Filter #F2 - Act on trace record, if any
Filter #F3 - Check for some special message.
[Drop List]
Filter #D2 - Record this drop in some special trace
All filters can return ACCEPT, PASS, SKIP, DISCARD or KEEP results.
PASS and SKIP allow filter processing to continue. All filters run
with a default result of PASS. The last filter returning PASS results
in ACCEPT.
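As a minimal sketch of the dispatch just described (Python, with
hypothetical names; our actual implementation differs), the three
lists and result codes could work like this:

```python
from enum import Enum, auto

class Result(Enum):
    """Hypothetical result codes, per the description above."""
    PASS = auto()     # default; allows the chain to continue
    SKIP = auto()     # also allows the chain to continue
    ACCEPT = auto()   # terminal: accept the message now
    DISCARD = auto()  # terminal: silently drop the message
    KEEP = auto()     # terminal: hold/keep the message

def run_filters(main_list, final_list, msg):
    result = Result.PASS                  # filters run with a default of PASS
    for f in main_list:
        result = f(msg)
        if result not in (Result.PASS, Result.SKIP):
            break                         # ACCEPT/DISCARD/KEEP end the chain
    if result is Result.PASS:
        result = Result.ACCEPT            # last filter PASSing => ACCEPT
    for f in final_list:                  # the Final List always runs
        f(msg)
    return result
```

The Drop List would be dispatched the same way as the Final List, but
only from the disconnect path.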
--
Sincerely
Hector Santos
http://www.santronics.com
Dave CROCKER wrote:
seems like the topic is a long way from the original and therefore
worthy of a subject change.
Murray S. Kucherawy wrote:
I don't understand how running the filters in series requires that
the entire message be in a buffer.
MTA buffers the entire message, sends it to filter #1. Filter #1
changes the
body. MTA sends the modified message to #2, including the new body.
This
can only happen if they're in series, and I can't see how it would be
possible if there's not a buffer involved.
Here's an even better example: MTA buffers the entire message, sends
it to
filter #1. Filter #1 orders the message to be rejected (or discarded).
Filter #2 is told "nevermind", and never has to go through the
processing of
the body. For a very large message, this can be a big performance
win, and
again can only happen if they're in series.
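A minimal sketch of that serial, fully buffered hand-off (Python,
hypothetical names, not any particular MTA's API):

```python
def pipeline(filters, message):
    """Hand the complete buffered message to each filter in series.
    Each filter returns a verdict plus a (possibly rewritten) message;
    a 'reject' verdict short-circuits the chain, so later filters never
    have to process the (possibly very large) body at all."""
    for f in filters:
        verdict, message = f(message)
        if verdict == "reject":
            return "reject", message   # remaining filters are told "nevermind"
    return "accept", message
```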
Strictly speaking, either full or partial buffering allows processes to
be staged in sequence just fine. The only issue is whether the filters
are staged in sequence, with early filters feeding later ones.
What full buffering does is allow the current filter to change an
earlier part of the message, based on a later part. You can't do that
in a partial buffering (hot potato) model, where the processing is done
only on the current chunk, which is then passed on.
A simple example would be wanting to add a header field to the message,
based on something in the body.
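For instance (a Python sketch; the header field name is made up), a
full-buffer filter could do:

```python
def tag_by_body(message):
    """Full-buffer filter: because the entire message is in hand, it can
    prepend a header field based on something found later, in the body.
    A chunk-at-a-time (hot potato) filter has already passed the headers
    on by the time it sees the body, so it cannot do this."""
    headers, sep, body = message.partition("\r\n\r\n")
    if "invoice" in body.lower():              # decision driven by the body
        headers = "X-Content-Class: invoice\r\n" + headers
    return headers + sep + body
```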
d/