Is there any reason to use a regex for *static* keywords from the shorter list?
Wouldn't you simply hash that using xsl:key, then
use the key() function to determine whether regex-group(1) is a key? Presumably
you'd lower-case everything, etc. In fact, putting
all the keywords in a regex is likely to work badly. I don't know whether a regex
engine will optimize an alternation of 300 words; it's
probably written on the assumption that you wouldn't be using a regex to check for
equality with 1 of 300 words!
________________________________________
From: Dave Pawson [davep@dpawson.co.uk]
Sent: Thursday, April 07, 2011 10:57 AM
To: xsl-list@lists.mulberrytech.com
Cc: mike@saxonica.com
Subject: Re: [xsl] Processing two documents, which order?
On Thu, 07 Apr 2011 15:25:55 +0100
Michael Kay <mike@saxonica.com> wrote:
On 07/04/2011 14:25, Dave Pawson wrote:
I have two xml documents.
The first is a list of marked up words (1),
the second a 'normal' xml document (2)
For each occurrence in 2 of a word from 1
I need to mark up the word with <property> </property>
Which order is anywhere near optimum?
Document 1 has about 300 words,
Document 2 is 33,000 lines.
I'm having trouble seeing how this description of the problem relates
to the code given below.
From first principles, if you do a nested loop then you're doing
either 300*33000 operations or 33000*300 - it's not a big difference
either way. On the other hand if you use keys, then you are basically
doing 300+33000 operations either way - but the key will be smaller
if you build it on the smaller document, so that's what I would do.
Using regex matching with a dynamically computed regex looks like bad
news - or is it really a regex in the source document? Saxon
precompiles the regex if it's known statically, but if not there's no
caching or anything - it gets compiled on each use. From this
viewpoint, using each regex once (in a single analyze-string call) is
going to be better.
Michael Kay
Saxonica
The regex is required, as I see it, to determine the starting and ending
conditions for the 300 'words'? I don't see how one...
Could I build and hold 300 regexen for later use, is that what
you were thinking Mike?
I'm still unsure of the approach though.
1. Build the keys on the smaller list of words
2. ??? build the sequence of regexen?
3. then....
AFAICT I'm still going to have to process the entire long document
with each regex in the sequence?
Confused of Chorley.
--
regards
--
Dave Pawson
XSLT XSL-FO FAQ.
http://www.dpawson.co.uk
--~------------------------------------------------------------------
XSL-List info and archive: http://www.mulberrytech.com/xsl/xsl-list
To unsubscribe, go to: http://lists.mulberrytech.com/xsl-list/
or e-mail: <mailto:xsl-list-unsubscribe@lists.mulberrytech.com>
--~--