And since we are talking about it, some additional thoughts:
My point of view is that a traditional pipeline is not always sufficient.
Specifically, I think that a static pipeline with a fixed sequence of
processing steps is not adequate for some cases. The processing order
should probably depend on the document structure as well.
E.g. suppose we have a transformation that handles an <import> element
(with functionality similar to XML Inclusions), and we have other
transformations that might generate additional <import> elements. In that
case a simple static pipeline is not sufficient.
E.g. think of a document
<document>
<someData>Data1</someData>
<import ref="url1"/>
</document>
where "url1" is an XML document with the following information
<someData>Data2</someData>
and suppose there is a different transformation that handles the <someData>
tags and can itself generate more import statements (let's say the first
<someData> tag will be transformed to an <import ref="url2"/>).
If we use the following pipeline (which is the reasonable way to do it,
methinks):
Import transformation
SomeData transformation
the final result will be something like:
<document>
<import ref="url2"/>
<data2FinalRepresentation/>
</document>
which is not what we want.
If we run the SomeData transformation first instead, the result will still
not be fully transformed: the <import ref="url1"/> is only expanded
afterwards, so the <someData> it pulls in is never processed:
<document>
<!-- content of "url2" -->
<someData>Data2</someData>
</document>
However, if the pipeline were automatically constructed according to the
document structure, the import transformation could be applied as many
times as needed to get rid of all the <import> elements.
I think that the only problem with such an approach is that it is
extremely slow, since the whole document structure has to be inspected in
order to construct the proper transformation pipeline.
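The "apply until nothing changes" idea can be sketched outside of XSLT. Below is a rough Python illustration, not a real implementation: the DOCUMENTS dict stands in for fetching the referenced URLs, url2's content is invented here purely for illustration (the original example doesn't say what it contains), and the <someData> rule is the toy one from the example. The only point is the driver loop at the bottom, which reapplies both transformations until the tree stops changing, instead of running each one once in a fixed order.

```python
import xml.etree.ElementTree as ET

# Stand-in for fetching the referenced documents (an assumption).
DOCUMENTS = {
    "url1": "<someData>Data2</someData>",
    "url2": "<someData>Data3</someData>",  # invented; not given in the example
}

def expand_imports(root):
    """Replace each <import ref='...'/> with the referenced document."""
    changed = False
    for parent in list(root.iter()):
        for i, child in enumerate(list(parent)):
            if child.tag == "import":
                parent.remove(child)
                parent.insert(i, ET.fromstring(DOCUMENTS[child.get("ref")]))
                changed = True
    return changed

def transform_some_data(root):
    """Toy <someData> handler from the example: 'Data1' turns into a new
    <import>, any other value becomes its final representation."""
    changed = False
    for parent in list(root.iter()):
        for i, child in enumerate(list(parent)):
            if child.tag == "someData":
                parent.remove(child)
                if child.text == "Data1":
                    parent.insert(i, ET.Element("import", ref="url2"))
                else:
                    name = child.text[0].lower() + child.text[1:]
                    parent.insert(i, ET.Element(name + "FinalRepresentation"))
                changed = True
    return changed

doc = ET.fromstring(
    '<document><someData>Data1</someData><import ref="url1"/></document>'
)
# Instead of one fixed Import -> SomeData pass, keep applying both
# transformations until neither changes the tree any more.
while any([t(doc) for t in (expand_imports, transform_some_data)]):
    pass
print(ET.tostring(doc, encoding="unicode"))
```

With this input the loop converges after a couple of rounds and leaves no <import> or <someData> elements behind, whereas a single Import-then-SomeData pass stops after the first round with an unexpanded <import ref="url2"/> still in the tree.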
Mike (sharing his thoughts, which might be quite crap as well :)
P.S. Sorry for the crappy XML in this post!
XSL-List info and archive: http://www.mulberrytech.com/xsl/xsl-list