
RE: Omnimark vs. XSL (Saxon) Challenge

2004-03-17 03:41:45
On 16.03.2004 (23:43 +0000), Michael Kay wrote:

Like others, I don't feel I have enough information about the problem to
offer you a solution, but I can attempt a critique of your code.

Which is very welcome!

I am going to incorporate all of your suggestions, and am commenting only on 
stuff that requires information from me.

If you prefer XSLT 1.0, try:

Currently I prefer XSLT 1.1 using Saxon 6.5.3  :-)

<xsl:variable name="empty-cells">
  <xsl:variable name="random-nodes-r" select="document('')//*"/>
  <xsl:variable name="random-nodes-c" select="document('')//*"/>
  <xsl:for-each select="$random-nodes-r[position() &lt;= $rows]">
    <xsl:variable name="row-num" select="position()"/>
    <xsl:for-each select="$random-nodes-c[position() &lt;= $cols]">
      <xsl:variable name="col-num" select="position()"/>
      <cell row="{$row-num}" col="{$col-num}"/>
    </xsl:for-each>
  </xsl:for-each>
</xsl:variable><!-- empty-cells -->

This is using the "1 to n" iteration trick. Using the stylesheet as a place
to get the nodes for this seems a bad choice: document('') involves an extra
parse, though it will only be done once. I would do this one by recursion.
(Again, it's much easier in 2.0.) But then I would want to take a careful
look at your algorithm to see if it really has to be done this way.
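
Noting for myself what "much easier in 2.0" would look like -- a minimal
sketch (untested, and it assumes $rows and $cols are xs:integer values),
using the 2.0 range expression instead of document(''):

<!-- XSLT 2.0 sketch: "1 to $rows" generates the sequence directly,
     so no stylesheet nodes and no extra parse are needed. -->
<xsl:variable name="empty-cells">
  <xsl:for-each select="1 to $rows">
    <xsl:variable name="row-num" select="."/>
    <xsl:for-each select="1 to $cols">
      <cell row="{$row-num}" col="{.}"/>
    </xsl:for-each>
  </xsl:for-each>
</xsl:variable>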

I thought that if I already know the number of iterations, a for-each loop 
might be more efficient than recursion. But now I remember your note about 
possibly very efficient implementations of "tail recursion". I guess Saxon 
does this well?
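
To make sure I get the shape right when I try it, here is a minimal sketch
of what I understand by tail recursion in XSLT 1.0 (template and parameter
names are mine, not from the actual stylesheet): the recursive call is the
very last instruction, so a processor that optimizes tail calls can turn
it into a loop instead of growing the stack.

<xsl:template name="make-rows">
  <xsl:param name="row" select="1"/>
  <xsl:param name="rows"/>
  <xsl:param name="cols"/>
  <xsl:if test="$row &lt;= $rows">
    <!-- emit the cells of this row first ... -->
    <xsl:call-template name="make-cols">
      <xsl:with-param name="row" select="$row"/>
      <xsl:with-param name="cols" select="$cols"/>
    </xsl:call-template>
    <!-- ... then recurse as the last instruction (tail position) -->
    <xsl:call-template name="make-rows">
      <xsl:with-param name="row" select="$row + 1"/>
      <xsl:with-param name="rows" select="$rows"/>
      <xsl:with-param name="cols" select="$cols"/>
    </xsl:call-template>
  </xsl:if>
</xsl:template>

<xsl:template name="make-cols">
  <xsl:param name="row"/>
  <xsl:param name="col" select="1"/>
  <xsl:param name="cols"/>
  <xsl:if test="$col &lt;= $cols">
    <cell row="{$row}" col="{$col}"/>
    <xsl:call-template name="make-cols">
      <xsl:with-param name="row" select="$row"/>
      <xsl:with-param name="col" select="$col + 1"/>
      <xsl:with-param name="cols" select="$cols"/>
    </xsl:call-template>
  </xsl:if>
</xsl:template>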

How long does it take, and how does the elapsed time vary as the table size
increases? Plotting that can often deepen your understanding of what you are
asking the XSLT processor to do.

64 rows: 2 secs
96 rows: 5 secs
128 rows: 10 secs
160 rows: 17 secs
192 rows: 30 secs
224 rows: 62 secs

That is growing far faster than linearly... and you are right, none of the 
changes so far has had any measurable impact on processing time.
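
For my own sanity, the growth ratios from the numbers above:

 64 rows ->  2 secs   (baseline)
128 rows -> 10 secs   (2x rows,    5x time)
192 rows -> 30 secs   (3x rows,   15x time)
224 rows -> 62 secs   (3.5x rows, 31x time)

Each ratio falls between the square and the cube of the size factor, so the 
behaviour looks roughly cubic in the number of rows rather than strictly 
exponential -- still clearly too much work per row.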

My next steps will be:

* Do not carry all of the content data in the normalized table structure; 
keep only the attributes that are needed. The structure of the input table 
will not change, so it can easily be handled.
* Do colspan processing in a first pass, rather than as a recursion after the 
rowspan recursion (a rough sketch follows this list).
* Look at tail recursion.
* Also, processing can stop as soon as I have width attributes for all columns.
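
As a first rough sketch of the colspan pre-pass mentioned above (the element
and attribute names are only placeholders, since my real input structure is
not shown here): each entry is copied once per spanned column and only the
rowspan attribute is kept, so the later rowspan pass can assume exactly one
entry per column.

<xsl:template match="entry" mode="expand-colspan">
  <xsl:call-template name="copy-entry">
    <xsl:with-param name="count">
      <xsl:choose>
        <xsl:when test="@colspan"><xsl:value-of select="@colspan"/></xsl:when>
        <xsl:otherwise>1</xsl:otherwise>
      </xsl:choose>
    </xsl:with-param>
  </xsl:call-template>
</xsl:template>

<xsl:template name="copy-entry">
  <xsl:param name="count" select="1"/>
  <xsl:if test="$count &gt; 0">
    <!-- keep only the attribute needed later, not the cell content -->
    <entry>
      <xsl:copy-of select="@rowspan"/>
    </entry>
    <xsl:call-template name="copy-entry">
      <xsl:with-param name="count" select="$count - 1"/>
    </xsl:call-template>
  </xsl:if>
</xsl:template>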

I'll keep everyone updated, hopefully with a more elegant and compact example 
(and solution).

- Michael
-- 
_____________________________________________________________
Dipl.-Ing. Michael Müller-Hillebrand
                                     
"Mehr Effizienz für Wissensarbeiter" --> http://cap-studio.de

 XSL-List info and archive:  http://www.mulberrytech.com/xsl/xsl-list