At 22:16 2002-01-22 -0700, Joey Jablonski wrote:
> Attached is my .procmailrc file... what I'm trying to do is have procmail
> take a message received with a specific subject, parse certain
> information from it, and assign that information to variables.  The message
> will have the following format; I want the info after the colons.
Yeah, you posted this request just yesterday.
> I would then like procmail to execute a shell command, pager-ack, with the
> arguments being the values of the variables.
Which, if I recall your previous message, was a perl program - which means
you have perl at your disposal.  Rather than piping through egrep (which
might well return multiple lines if the input is malformed) and awk, why not
just take the following (tweak the path as necessary), dump it into a file
named pager-check.pl, and chmod +x it for yourself:
------cut below here
#!/usr/local/bin/perl -w
# Trivial perl script to extract variables without complicating things
# with a pipe series.  This is provided not as THE solution to your problem,
# but as an example of a different approach.
if ( $ARGV[0] )
{
    while ( <STDIN> )
    {
        if ( m/^$ARGV[0]:(.+[a-z0-9].*)/ )
        {
            print $1;
            exit 1;
        }
    }
}
# nothing spotted
exit 0;
------cut above here
Now, in your .procmailrc:
:0 b
NAME=|./pager-check.pl ack

:0 b
SITE=|./pager-check.pl site

:0 b
NOTE=|./pager-check.pl note
# Note the leading whitespace checker - your filter, as provided, didn't have
# so much as a SPACE between the colon and the following text, which was
# probably an error.  VERBOSE logging would catch this stuff...
# The trailing * on the regexp was removed - it is meaningless in the
# context you provided it, and I presume you EXPECT a trailing 'a', with
# optional "stuff" following it.
# In your original message you indicated that the message always came from
# the same address - if that is so, you probably want to add that as
# a condition (see the example after this recipe).
:0i
* ^Subject:[ ]*message from a
* ! $NAME ?? ^^^^
* ! $NOTE ?? ^^^^
* ! $SITE ?? ^^^^
| pager-ack $NAME $SITE $NOTE
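For example - purely as a sketch, with pager-daemon@example.com standing in
for whatever address the alerts really come from - the sender check is just
one more condition in the same recipe:

:0i
* ^From:.*pager-daemon@example\.com
* ^Subject:[ ]*message from a
* ! $NAME ?? ^^^^
* ! $SITE ?? ^^^^
* ! $NOTE ?? ^^^^
| pager-ack $NAME $SITE $NOTE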
That should work for you, right off. Of course, you could accomplish the
variable extraction right within procmail, using $MATCH - and doing so will
require fewer cycles because we're not running any external processes (as
well as for other reasons, explained below). Try this recipe:
:0
* ^Subject:[ ]*message from a
{
  :0B
  * ^note:\/.+[a-z0-9]
  {
    NOTE=$MATCH

    :0B
    * ^ack:\/.+[a-z0-9]
    {
      NAME=$MATCH

      :0B
      * ^site:\/.+[a-z0-9]
      {
        SITE=$MATCH

        # your script probably doesn't actually read
        # the STDIN, so 'i'gnore write errors.
        :0i
        | pager-ack $NAME $SITE $NOTE
      }
    }
  }
}
(At first glance, the 'a' flag might seem like the way to avoid nesting the
conditions, but it doesn't quite work, because it is based on the last
recipe WITHOUT the 'a').
Okay, so what's with the nesting?  Basically, since delivery to your pager
requires all three variables to match, we can stop extracting as soon as one
of them DOESN'T match, and not waste cycles once we know we're never going
to run the pager script for this message.  Also, because the variable
extraction occurs only after we've first matched the subject, we're not
wasting time on every OTHER message, which isn't going to be of interest to
us anyway.
If there is optional whitespace between the field names and their values,
which you don't want to include, you need to say so - the expressions are
easily modified to account for that.
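For instance, the note condition would become something like this (a sketch
only - the bracketed class below holds a space and a tab, and because the \/
token comes after it, the whitespace is skipped but never lands in $MATCH):

* ^note:[ 	]*\/.+[a-z0-9]

The ack: and site: conditions would get the same treatment.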
> #variables for pager-ack
> NAME=`cat | egrep "^ack:" | awk '{ print $2 }'`
> SITE=`cat | egrep "^site:" | awk '{ print $2 }'`
> NOTE=`cat | egrep "^note:" | awk '{ print $2 }'`
Ick. Lose the cat - where the heck do you figure cat is getting its
STDIN? Have you considered even trying this concoction with "VERBOSE=ON"
set in your .procmailrc and then checking the log?
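If not, something along these lines near the top of your .procmailrc (the
log path here is only an example) will show you exactly what procmail makes
of each condition:

VERBOSE=ON
LOGFILE=$HOME/procmail.log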
> :0
> * ^Subject:message from a*
> * ^ack:[ ].+[a-z0-9]
> * ^site:[ ].+[a-z0-9]
> * ^note:[ ].+[a-z0-9]
> | pager-ack ${NAME} ${SITE} ${NOTE}
Q: What is your intended purpose in using the empty []?  Is this an
(incorrect) attempt to deal with leading space?
---
Sean B. Straw / Professional Software Engineering
Procmail disclaimer: <http://www.professional.org/procmail/disclaimer.html>
Please DO NOT carbon me on list replies. I'll get my copy from the list.