Most C/C++/FORTRAN optimizations do not change computational
complexity, due to the nature of these languages.... With XSLT,
however, it is common.
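For example, a cross-reference lookup written as a nested scan over the
whole document is quadratic, whereas the same lookup written with
xsl:key (or rewritten that way by an optimizer that recognizes the join
pattern) is roughly linear, assuming the processor builds an index for
the key. A minimal sketch, with made-up element and attribute names:

  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

    <!-- Naive cross-reference: for each order, scan every customer,
         so the cost is O(orders * customers). Commented out so the
         keyed version below is the one in effect. -->
    <!--
    <xsl:template match="order">
      <xsl:value-of select="//customer[@id = current()/@custref]/name"/>
    </xsl:template>
    -->

    <!-- Keyed version: one pass to build the index (if the processor
         chooses to build one), then cheap lookups, so roughly
         O(orders + customers). -->
    <xsl:key name="cust" match="customer" use="@id"/>
    <xsl:template match="order">
      <xsl:value-of select="key('cust', @custref)/name"/>
    </xsl:template>

  </xsl:stylesheet>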
A better analogy is SQL. Many SQL optimizations do change computational
complexity, often dramatically. And yes, this does mean that programmers
looking at performance have difficulties because it's not obvious what's
happening underneath the bonnet. Implementors have tackled this by
providing tools and utilities that reveal what's going on and allow you
to tune it.
I do remember that when SQL first came out we had great trouble because
we were used to sizing the workload on a mainframe to an accuracy of
+/-5%, and this just wasn't possible any more. We did find that we had
to switch from analytic methods to a more empirical approach.
What I am asking for is rules I can follow to benefit
from techniques already available in good implementations;
why should this knowledge have to be obtained empirically or through
reverse-engineering? I am good at reading others' source code
-- I've learned a lot that way.
Is it normal that, to use XSLT with consistent results, one
should have to read source code in Java or C?
Inevitably, I think that implementors tend to put optimization at a
higher priority than instrumentation. But you are right, users also need
instrumentation, and sometimes they need it more than they need
optimization.
Michael Kay