
Re: [Nmh-workers] A useless but interesting exercise: Design MH from scratch in the 2014 context

2014-02-19 13:34:31

On Feb 19, 2014, at 8:22 AM, norm@dad.org wrote:

> Suppose you weren't designing a system to run on a time-shared PDP-11/45,
> but on a single-user, multi-core system.  Suppose that you had
> multi-gigabyte disks available.

MH is not resource-constrained by CPU, or by memory for that matter, so I would 
change nothing.

As for disk space, how much you use is a function of the types of email you 
receive and how much of a pack rat you are.  This is not a function of your 
MUA.  (Well, it might be, indirectly.  If you are forced to use an MUA that 
really really sucks, odds are you won't be saving many messages.)

> But also suppose you had to worry about distributed data and processing.

This I don't understand.  How would reading/storing/replying/searching e-mail 
require distributed processing?

And what do you mean by distributed data?  You can already store your Mail 
hierarchy on (say) NFS-mounted filesystems.  Are you talking about supporting 
multiple storage back-ends (e.g. IMAP)?  I'm not convinced this would work.  
You are always going to run into back-ends that don't support some part of the 
required semantics (think message annotations with IMAP), so ultimately you end 
up crippling MH's functionality.
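
(To make the annotation point concrete: the sketch below assumes a stock nmh 
setup with the usual "Path: Mail" profile entry, and the folder and message 
number are invented.  anno works by editing the message file in place, which 
is trivial when each message is an ordinary file, but an IMAP store treats a 
message's content as immutable once it has been appended.)

    # MH keeps one message per file; +inbox message 42 lives at ~/Mail/inbox/42.
    # Annotating it prepends header fields to that file:
    anno +inbox 42 -component X-Example -text "flagged for follow-up"

    # The annotation is now part of the message itself:
    grep '^X-Example:' ~/Mail/inbox/42

    # An IMAP server can't edit a stored message in place; the closest it gets
    # is deleting the message and APPENDing a modified copy, with a new UID.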

If I were going to do it all over again, most of the changes I would make would 
be to streamline the commands and their interfaces, to make them a bit more 
amenable to scripted processing of the messages.  I would also design it with 
the hindsight that MIME should not be an add-on.  Perhaps the most intrusive 
change I would inflict would be to make the whole environment UTF-8 from end to 
end.
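
(For what it's worth, MH's shell-pipeline style already handles this kind of 
scripting; the rough sketch below shows the existing idiom, with invented 
folder names and search strings.)

    # Gather every message from norm into a named sequence, then file it away:
    pick +inbox -from norm -sequence picked
    refile picked +filed

    # Or hand the matching message numbers straight to another command:
    scan `pick +inbox -subject "design MH from scratch"`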

--lyndon

