Frank Boumphrey <bckman@ix.netcom.com> writes:
> Basically if I send an XHTML file as text/html, then a browser
> will send that file off to its HTML engine, and parse it as an HTML file.
> If I send the file as text/xml, then the browser will send it
> off to its XML parser, and just create a parse tree.
It seems to me a mistaken assumption that as the XML processor begins
to parse the incoming file it couldn't *actually read* what is
happening and act accordingly. If the processor is aware of XHTML's
numerous 'cookies', it could process the file correctly. There is no
enforced 'XML blindness' operating here.
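To make the point concrete, here is a rough sketch (mine, not from the
original post) of what such sniffing might look like. The marker strings
are the obvious XHTML 'cookies' -- the DOCTYPE public identifier and the
XHTML namespace URI:

```python
# Sketch: a processor receiving a document as text/xml can still
# recognize XHTML by inspecting the first bytes for its telltale
# markers, before committing to generic XML handling.
def sniff_xhtml(prefix: bytes) -> bool:
    """Return True if the document prefix carries XHTML 'cookies'."""
    text = prefix.decode("utf-8", errors="replace")
    markers = (
        "-//W3C//DTD XHTML",                      # DOCTYPE public identifier
        'xmlns="http://www.w3.org/1999/xhtml"',   # XHTML namespace on the root
    )
    return any(m in text for m in markers)

doc = b'<?xml version="1.0"?>\n' \
      b'<html xmlns="http://www.w3.org/1999/xhtml"><head/></html>'
print(sniff_xhtml(doc))  # True for an XHTML instance
```

Nothing in the XML processing model forbids a dispatcher from doing
exactly this before deciding which engine gets the file.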
> The second problem, as Larry points out, is the file with
> embedded XML from different namespaces. A browser may be quite
> happy to accept an XHTML file and will 'know' how to display basic
> XHTML 1.0, but will not know how to display MathML or MusicXML.
> Because the namespaces can be scoped in the document, and don't
> have to be in the head of the document, the browser may be well
> into rendering the document before it realises that it can't
> display the relevant code. It would be nice from the browser's
> point of view if it had prior knowledge of the namespaces before
> downloading the document, because then it could either
This is one of the (pardon my French) stupidities of embedding what
is essentially prolog information deep within an instance. In
traditional SGML systems the benefit of having a prolog is that the
engine can learn all about the document instance *before* it begins
processing. This is simply poor design on the part of XML Namespaces,
probably brought about by design constraints or 'market pressure'.
I maintain that this is something that should be fixed in the
Namespace spec, not hacked in XML. But of course that would be
asking for the moon, since that spec has somehow become a
Recommendation, inviolate and immutable.
> Others such as Murray (I hope I am not mis-interpreting him!)
> have pointed out that they would like a more general solution,
> because they feel that an XHTML mime type will keep XHTML off
> in its own landscape and not encourage it to join the more
> general XML solution.
Well, yes. And since there is a strong requirement that a solution be
found for XML, if a stop-gap solution is arrived at for XHTML it will
probably be *different* from the one for XML, which would further push
XHTML off into its own unique landscape. I doubt vendors would want to
implement both.
What we need instead is a solution that *integrates* XHTML into the
XML environment, both as a 'family' of valid document types and also
as document types and modules mixed into instances via 'namespaces'
(whatever that really means). While the latter is an enormously more
complicated problem than the former, it's obvious we can't push it
off very far into the future, since the world has already gone off
and created many (IMO) broken solutions.
As Rick has pointed out, XHTML is 'application-specific', but no
more so than MathML or any other XML application that requires
specialized processing not provided in a stylesheet. It also happens
to be potentially the most widely used XML markup language, and a
framework for creation of many others. So we all need to work together
to create a solution that works for both XHTML and XML in general.
Murray Altheim, SGML Grease Monkey <mailto:firstname.lastname@example.org>
Member of Technical Staff, Tools Development & Support
Sun Microsystems, 901 San Antonio Rd., UMPK17-102, Palo Alto, CA 94303-4900
An SGML declaration does not an i18n make.