procmail

Re: Uh oh - procmail forcing lock?

1999-11-02 13:40:39
"W. Mark Herrick, Jr." <markh(_at_)va(_dot_)rr(_dot_)com> writes:
> I'm thinking this means that I had my cache set too low, and it barfed?
>
> I had about 300 messages stuck in limbo, making my hard drive spin like
> mad. I was forced to kill all 300-odd instances of procmail
> running....Where'd all the messages go?

What signal did you kill them with?  The procmail(1) manpage describes
how it handles the common signals.


> This is all I have relating to the cache in my .procmailrc file...
>
> DUPFILE=$BACKUP/dups
> CACHEFILE=$BACKUP/.cache
> CACHESIZE=500000
> GARBAGE=/dev/null
>
>
> # BEGIN RECIPES
>
> :0 cW: $CACHEFILE.lock
> | formail -D $CACHESIZE $CACHEFILE

That recipe should have the 'h' flag, so that procmail doesn't try
to feed the entire message into formail (that could eat some time,
depending on the system and message size).
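In other words, something like this (a sketch, using the variable names
from the quoted .procmailrc):

        :0 chW: $CACHEFILE.lock
        | formail -D $CACHESIZE $CACHEFILE

With 'h', formail sees only the header, which is all -D needs: the
duplicate check is keyed on the Message-ID header field.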


> ...
> procmail: Forcing lock on "/export/home/twabuse/Mail_Backup/.cache.lock"
> procmail: Forcing lock on "/export/home/twabuse/Mail_Backup/.cache.lock"
> ...

First of all, a larger duplicate cache will be _slower_ because procmail
has to scan the entire thing -- the caches are scanned linearly (hashing
is not a particularly viable solution because we want the cache to have
LRU replacement).  If you need a large cache, I would suggest you
single-thread the processing of the e-mail at a higher level.  For example,
you could deliver the messages to a normal file and then periodically
process the file with

        formail -D 50000 $HOME/Mail_Backup/.cache \
                -s procmail $HOME/.procmailrc.real <filename

(Yes, -D works with -s)

Then you don't need the lockfile on the cache, just on the file containing
the messages (or rather, that lockfile will serve for both).
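A sketch of that setup (paths and names here are illustrative, not from
the original post): the delivering .procmailrc just appends everything to
a spool file, and a periodic job processes the spool single-threaded.

        # In the delivering ~/.procmailrc: append to a spool file.
        # ':0:' takes a local lockfile, which defaults to spool.lock.
        :0:
        $HOME/Mail_Backup/spool

        # Periodic script, e.g. from cron.  lockfile(1) ships with
        # procmail; holding spool.lock blocks deliveries meanwhile.
        #!/bin/sh
        SPOOL=$HOME/Mail_Backup/spool
        lockfile $SPOOL.lock || exit 1
        formail -D 50000 $HOME/Mail_Backup/.cache \
                -s procmail $HOME/.procmailrc.real <$SPOOL
        : >$SPOOL                       # truncate the processed spool
        rm -f $SPOOL.lock

Because procmail's delivery recipe and the script contend for the same
spool.lock, no message can be appended between the formail run and the
truncation.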


Philip Guenther