spf-discuss

Re: Performance issues

2004-02-17 13:03:58

----- Original Message ----- 
From: "Theo Schlossnagle" <jesus(_at_)omniti(_dot_)com>
To: <spf-discuss(_at_)v2(_dot_)listbox(_dot_)com>
Cc: "Theo Schlossnagle" <jesus(_at_)omniti(_dot_)com>
Sent: Monday, February 16, 2004 8:01 PM
Subject: Re: [SPF-Discuss] Performance issues


Yes and no.  The faster you "perform" the transaction, the better the
scalability.  The truth is that while the transaction may take 1, 10 or
60 seconds, you are really only "performing" for about 10ms -- the rest
of the time you can be performing other tasks.  The total transaction
time is (almost) irrelevant.  Let's analyze:

Theo, I don't wish to dive into much of this detailed discussion, which I don't
think is relevant, but just to say: unless your multi-threaded application is
finely tuned -- I mean completely optimized for async I/O, making extensive use
of kernel wait events and minimal exclusive locking (critical sections,
reader/writer locks, etc.) -- your default thread time slicing is at the mercy
of the OS scheduler, and the default minimum quantum time is 10ms (13-15ms on
multi-processors).  That quantum time is only improved by finely tuned
asynchronous I/O combined with kernel-synchronized threading.
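To make the async I/O point concrete, here is a minimal sketch (in Python, purely illustrative) of one thread multiplexing several connections with readiness events instead of parking one blocked thread -- and one scheduler quantum -- per connection:

```python
import selectors
import socket

# One thread, one selector, two "connections" (socketpairs stand in
# for real client sockets).  The thread only wakes when I/O is ready.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(2)]

for i, (srv, _cli) in enumerate(pairs):
    srv.setblocking(False)                      # never block per-socket
    sel.register(srv, selectors.EVENT_READ, data=i)

for i, (_srv, cli) in enumerate(pairs):
    cli.sendall(b"HELO %d" % i)                 # simulate client traffic

received = {}
while len(received) < len(pairs):
    for key, _mask in sel.select(timeout=1):    # wait on ALL sockets at once
        received[key.data] = key.fileobj.recv(64)

for srv, cli in pairs:
    srv.close()
    cli.close()

print(sorted(received.values()))
```

The same single thread drains whichever socket becomes readable, rather than dedicating a blocked worker (and its quantum) to each session.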


It is not just about how efficiently an individual thread can handle a session;
there are other details not considered in your message: client/server reaction
time, among other issues.  Just consider worker-thread queuing designs: if X
worker threads are designated for receiving, then when one thread is in a
session, that is one less thread available, regardless of how long or little
the session takes, i.e., there is no unlimited number of threads.
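The worker-pool limit above can be sketched in a few lines (the pool size and names are illustrative, not from any particular server):

```python
import threading

MAX_RECEIVERS = 2                      # "X" designated receiving threads
available = threading.Semaphore(MAX_RECEIVERS)

def try_accept_session():
    # A session holds a worker slot for its whole duration, fast or
    # slow; if no slot is free, the connection must wait or be deferred.
    return available.acquire(blocking=False)

accepted = [try_accept_session() for _ in range(3)]
print(accepted)   # the third attempt finds no free worker
```

Whether the occupied slot is held for 1 second or 60, the effect on the pool is the same: one less receiver for everyone else.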

So yes, there will be lots of idle time, but that is only because of the total
round-trip time between the client and server.   No matter how fast a thread
can run, it is still limited by how fast the client is talking and reacting to
the server.   Understand?

The total transaction time in short is:

    ttt =   CT + ST

where

CT is the client time to issue request (including reacting to responses),
and

ST is the server request processing time

from start to finish.
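A quick worked example with hypothetical numbers shows why the client side usually dominates:

```python
# Assumed figures for one SMTP transaction (illustrative only):
CT = 8.0   # seconds the client spends issuing requests and reacting
ST = 0.4   # seconds of server-side processing

ttt = CT + ST
print(ttt)
```

Even cutting ST in half would shave only 0.2 seconds off an 8.4-second transaction; the slow talker sets the pace.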

Now, what I am more interested in improving are the possible delay factors
within ST which are "external" to the SMTP processing itself or to required
server-side dependencies (i.e., user lookups, etc.).

This external processing is mostly related to DNS, and by empirical data, it is
a large part of the overhead time, largely due to failed lookups.  DNS client
caching helps tremendously when there are many hits from the same system within
a relatively short period.    If you have any tips you can provide on how best
to "optimize" a DNS server or DNS client for faster failed responses, I would
love to hear them.
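One common way to blunt the cost of failed lookups is negative caching on the client side: remember the names that failed so repeat offenders don't re-pay the DNS timeout. A minimal sketch (class name and TTL are assumptions, not any real resolver's API):

```python
import time

class NegativeDNSCache:
    """Sketch: cache failed lookups so repeated hits from the same
    sender within the TTL skip the expensive DNS timeout entirely."""

    def __init__(self, neg_ttl=300.0):
        self.neg_ttl = neg_ttl
        self._failed = {}                 # name -> expiry timestamp

    def remember_failure(self, name, now=None):
        now = time.time() if now is None else now
        self._failed[name] = now + self.neg_ttl

    def known_bad(self, name, now=None):
        now = time.time() if now is None else now
        expiry = self._failed.get(name)
        if expiry is None:
            return False
        if now >= expiry:
            del self._failed[name]        # entry expired; re-query allowed
            return False
        return True

cache = NegativeDNSCache(neg_ttl=300.0)
cache.remember_failure("bogus.example", now=1000.0)
print(cache.known_bad("bogus.example", now=1100.0))   # still cached
print(cache.known_bad("bogus.example", now=1400.0))   # TTL expired
```

Real resolvers do much the same per RFC 2308 (negative caching), honoring the SOA minimum TTL; the sketch just makes the time-saving mechanism visible.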

Yes, much of this ST time can be eliminated by removing other testing methods,
but I don't think we are there yet with SPF, and compared to RBL, both serve by
providing different sets of information.  Since SPF does not resolve the
SPF-compliant spammer, an RBL will still need to be available to report the
SPF-compliant spammer as an abusive system.

Finally, there is no way on earth we will remove the final test of our suite --
CBV, which does the call-back verification.  This has proven, without a doubt,
to be a great defense against spoofed return-path addresses, not just the
spoofed return-path domains that LMAP-based solutions can only address.  But
the goal of even including the other first-level tests is to remove the
overhead, redundancy, or need to perform a CBV test at all.
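For readers unfamiliar with CBV, the idea is to open an SMTP session back to the sender domain's MX and probe the full return-path address with RCPT TO, without ever sending mail. A sketch of the dialogue (hostnames and helper names here are illustrative, not wcSAP's actual implementation):

```python
def cbv_commands(return_path, probe_sender="<>"):
    # The null sender "<>" is the conventional probe address and
    # avoids verification loops between two CBV-capable servers.
    return [
        "HELO verifier.example",           # our identity (assumed name)
        "MAIL FROM:%s" % probe_sender,
        "RCPT TO:<%s>" % return_path,      # the address under test
        "QUIT",                            # probe only -- never DATA
    ]

def accepts(rcpt_reply_code):
    # A 2xx reply to RCPT means the mailbox is deliverable;
    # a 5xx reply means the return path is bad or spoofed.
    return 200 <= rcpt_reply_code < 300

print(cbv_commands("user@example.com")[2])
print(accepts(250), accepts(550))
```

This is why CBV catches a spoofed mailbox even when the domain itself is legitimate: the probe tests the whole address, not just the domain, which is exactly the gap LMAP-style checks leave open.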

In any case, we made our wcSAP suite of anti-spam methods flexible enough
for admins to choose and decide for themselves.

Thanks for your comments.

-- 
Hector Santos, Santronics Software, Inc.
http://www.santronics.com





