Hi Dimitre,
Sorry for being unclear; I did find the concept a bit hard to
explain. Let me try again, more simply and in different wording:
IN SHORT:
Take any input file, filter out several kinds of data (filter upon
filter), tokenize it, and serialize the result as XML.
LONGER:
1. Take an input file containing some structured data, for example CSV
(one line):
Field 1, "quoted field", /* comments, ignored */ "field with ""quotes""
in it", unquoted field // end of line comment
2. By applying a chain of filters, this format (and many others) can
be turned into a node set. In this example:
a) replace all comments with nothing
b) replace double quotes with special char DQUOT
c) replace commas between quotes with special char COMMA
d) remove all quotes
e) tokenize the string by normal comma
f) on serialization, replace special chars DQUOT and COMMA with their
normal counterparts
3. The order of a-f is very important. It is defined in an xsl:variable
like in the original example.
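To make the chain concrete, here is a minimal sketch in XSLT 2.0. It is
only an illustration of steps a-f, not the variable-driven mechanism I am
asking about: I assume the raw CSV line is in a variable $line, I picked
the private-use characters &#xE000;/&#xE001; to play the roles of DQUOT
and COMMA (any characters guaranteed absent from the data would do), and
the order a-f is hard-coded here rather than read from an xsl:variable.

```xml
<!-- a) strip /* ... */ and // comments -->
<xsl:variable name="a" select="replace($line, '/\*.*?\*/|//.*$', '')"/>
<!-- b) doubled quotes -> DQUOT placeholder -->
<xsl:variable name="b" select="replace($a, '&quot;&quot;', '&#xE000;')"/>
<!-- c) commas inside quoted fields -> COMMA placeholder -->
<xsl:variable name="c">
  <xsl:analyze-string select="$b" regex='"[^"]*"'>
    <xsl:matching-substring>
      <xsl:value-of select="replace(., ',', '&#xE001;')"/>
    </xsl:matching-substring>
    <xsl:non-matching-substring>
      <xsl:value-of select="."/>
    </xsl:non-matching-substring>
  </xsl:analyze-string>
</xsl:variable>
<!-- d) drop the remaining quotes -->
<xsl:variable name="d" select="replace($c, '&quot;', '')"/>
<!-- e) tokenize on the real commas; f) restore the placeholders -->
<xsl:for-each select="tokenize($d, ',')">
  <field>
    <xsl:value-of
      select="translate(normalize-space(.), '&#xE000;&#xE001;', '&quot;,')"/>
  </field>
</xsl:for-each>
```

On the example line above this should yield four field elements:
"Field 1", "quoted field", "field with "quotes" in it", and
"unquoted field".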
If you take this to a higher level, you get a very powerful
structured-text-to-XML extractor, which is what I am after.
You say that you can define the order of execution, can you shed some
more light on how to do so?
Cheers,
Abel
Dimitre Novatchev wrote:
Hi Abel,
Could you, please, explain the problem you're trying to solve?
It is not very clear from the original post.
And yes, it is possible to serialize "computations" in XSLT to every
extent desirable.
I have unpublished work from 2003 (actually just an
implementation) on implementing monads, and I even demoed it to Jeni at
XML Europe 2003.
A simpler example is the "XSLT Calculator"
--~------------------------------------------------------------------
XSL-List info and archive: http://www.mulberrytech.com/xsl/xsl-list
To unsubscribe, go to: http://lists.mulberrytech.com/xsl-list/
or e-mail: <mailto:xsl-list-unsubscribe(_at_)lists(_dot_)mulberrytech(_dot_)com>
--~--