On Mon, 27 Apr 1998, Earl Hood wrote:
> On April 25, 1998 at 18:46, Dataweaver wrote:
> > I run a majordomo mailing list that has built up an extensive archive
> > over the years (there are currently four volumes of digests, each
> > containing approximately a thousand digests); obviously, the archives
> > have started to become unusable due to their sheer bulk, so I started
> > looking around for a web-based archive interface to better organize
> > everything and found MHonArc.
> > Unfortunately, MHonArc can't handle the size involved; I keep getting
> > out-of-memory errors...
> The common practice is to create multiple archives for a given list,
> broken up by month. This way archive sizes stay reasonable.
OK; I'll try that. BTW, does anyone have a script that I could use in a
procmail recipe to add individual mails to the current month's archive,
and to start a new archive each month?
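For anyone else wondering, a minimal procmail sketch along those lines
might look like the following. The archive path and the list address are
assumptions to adjust for your own setup; MHonArc's -add reads the
message from standard input when no mail folder is given, and creates
the archive if it does not yet exist, so a fresh monthly directory is
enough to start a new archive:

```shell
# ~/.procmailrc -- hypothetical paths; adjust for your own setup.
# One archive directory per month (e.g. ~/archive/1998-04); a new
# directory, and thus a new archive, starts automatically each month.
ARCHIVE=$HOME/archive/`date +%Y-%m`

# Deliver a copy of each list message to the current month's archive.
# With -add and no folder argument, mhonarc reads the message from
# standard input; -add also creates the archive on first use.
:0 c
* ^TO_mylist@example\.com
| mkdir -p $ARCHIVE && mhonarc -add -quiet -outdir $ARCHIVE
```

The `c` flag keeps a copy flowing to your normal delivery, so archiving
does not swallow the message.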
>  o Try the -savemem option. Execution is slower (due to more
>    file I/O), but it may be helpful. -savemem causes MHonArc
>    to dump converted message data to disk instead of keeping
>    it all in memory. Note, data associated with index, threads,
>    and navigation are still kept in memory.
I'll look into that.
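For reference, using -savemem is just a matter of adding the flag to the
usual invocation (the archive path here is hypothetical):

```shell
# Spool converted message bodies to disk instead of holding them all
# in memory; slower, but much lighter on RAM for large archives.
mhonarc -add -savemem -outdir $HOME/archive/current
```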
>  o Break up input data into smaller chunks and run mhonarc
>    on the chunks in sequence (using -add after the first chunk).
That's what I did the first time.
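Concretely, that sequence might be sketched like this; the chunk file
names and the output path are assumptions, and the chunks themselves
would have been split out of the digests beforehand:

```shell
# Feed pre-split mail folders (chunk.00, chunk.01, ...) to MHonArc one
# at a time. Paths and file names are hypothetical.
OUTDIR=$HOME/archive/current

# The first chunk creates the archive...
mhonarc -outdir $OUTDIR chunk.00

# ...and each subsequent chunk is appended with -add.
for f in chunk.01 chunk.02 chunk.03; do
    mhonarc -add -outdir $OUTDIR "$f"
done
```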
> > Also, when I first started setting this up, I tried using a resource file
> > to customize the look of my pages, and got an out-of-memory error when I
> > tried running it (on a single e-mail message).
> In your resource file, make sure you properly close all non-empty
> resource elements (review the RCFILE resource page on file syntax).
> If a resource element is not properly closed, it may contain some
> huge value. And depending on the element, the value could cause
> MHonArc to do unpredictable, or undesirable, things.
I double-checked, and the close tags were all there. Mind you, some of
the elements were rather large anyway (I have an extensive page header
that I prefer to use)...
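For the record, here is a hypothetical fragment of what a properly closed
resource file looks like; the header markup is made up, but IDXPGBEGIN and
TITLE are standard MHonArc resource elements, and a missing end tag on a
non-empty element like IDXPGBEGIN would swallow everything after it as
the element's value:

```
<IDXPGBEGIN>
<html>
<head><title>List Archive</title></head>
<body>
<h1>Hypothetical extensive page header goes here</h1>
</IDXPGBEGIN>

<TITLE>
Mailing List Archive
</TITLE>
```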
As far as that goes, though, I'd prefer to have the index and thread pages
generated on the fly by a separate CGI script - the savings in disk space
are more than enough to justify the extra processing time. Could someone
point me to the parts of MHonArc that deal with the creation of the index
and thread pages?
> Earl Hood | University of California: Irvine
> ehood@medusa.acs.uci.edu | Electronic
> http://www.oac.uci.edu/indiv/ehood/ | Dabbler of SGML/WWW/Perl/MIME
---- Jonathan Lang <traveler@io.com> ---- x ------- alias:
Webpage: http://www.io.com/~traveler /@\ The Dogma of Otherness insists
GURPSnet's Benevolent Tyrant for Life ~~~ that all voices deserve a hearing,
FAQ: http://www.io.com/~ftp/GURPSnet/www | that all points of view have
Archive: http://www.io.com/~ftp/GURPSnet | something of value to offer.
submit new files to gurpsnet-files@io.com | --David Brin,