nmh-workers

Re: locking rcvstore?

2002-07-12 09:46:45
-----BEGIN PGP SIGNED MESSAGE-----


"Neil" == Neil W Rickert <rickert+nmh@cs.niu.edu> writes:
    Neil> Michael Richardson <mcr@sandelman.ottawa.on.ca> wrote:


    >> Aside from trashing sequences (which I've experienced on occasion, no
    >> idea why), I've run into situations where I wind up doing an "inc" from
    >> two different sources into the same folder. Usually due to impatience
    >> on my part.

    Neil> As far as I know, "inc" does do file locking, at least when writing
    Neil> to the standard unix mailbox.

  That's the source of the data.
  I'm talking about putting stuff into +inbox.

    >> The result was a mess of two processes using the same message numbers!

    Neil> That should not be possible.  I haven't looked at the code.  But it
    Neil> should be opening the file with "O_CREAT|O_EXCL", which makes the
    Neil> open fail if the message file already exists.

    Neil> Unless there are major code deficiencies, creating new messages
    Neil> should depend on the atomicity of unix file creation.  No additional
    Neil> locking should be required.  That is supposed to be one of the
    Neil> benefits of one message per file.

  Hmm. That sounds reasonable. 
  Perhaps I'll run a test and see....

]       ON HUMILITY: to err is human. To moo, bovine.           |  firewalls  [
]   Michael Richardson, Sandelman Software Works, Ottawa, ON    |net architect[
] mcr@sandelman.ottawa.on.ca  http://www.sandelman.ottawa.on.ca/ |device driver[
] panic("Just another NetBSD/notebook using, kernel hacking, security guy");  [




-----BEGIN PGP SIGNATURE-----
Version: 2.6.3ia
Charset: latin1
Comment: Finger me for keys

iQCVAwUBPS8Gu4qHRg3pndX9AQGh7QP/Tmc4vMvDn8PaFMb7gicABimtNh+qFyEJ
fenFVV/fY35aC2oPZBUv+X7T4jiG0qD1QZFbXt8zsmeNVFDKtEbPgtliXywSrmzu
esFFqRmtqHb0WWsUQIkNVtnbVWn9z4p8qVDh2FRG3ZU4R1yk/TkLnpChzbIL/902
TctPhWEYY9M=
=fSd0
-----END PGP SIGNATURE-----

