I think that a lot of the objections made against XML/HTML vs nroff are
ultimately due to the fact that adding end elements as well as start
elements makes for twice the work.
One way around this is to use better editing tools. I remain
consistently disappointed by the XML editing tools I have tried.
Another approach I have been experimenting with is to drop an
alternative lexical analyzer into an existing XML parser. Instead of the
'strict' format required by the specification, the lexer has a set of
intelligent rules to make the process of editing less tedious.
For example, a feature I like about Wikipedia markup is that paragraph
breaks are automatically inferred from a blank separating line (i.e. look
for nl ws* nl).
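To make the idea concrete, here is a rough Python sketch of that one
rule; nothing below comes from any existing tool, it is only meant to
illustrate spotting nl ws* nl and wrapping the separated chunks in
inferred <p> elements before a conventional XML parser ever sees them:

    import re

    # newline, optional whitespace, newline -- the "blank separating line"
    PARA_BREAK = re.compile(r"\n[ \t]*\n+")

    def infer_paragraphs(body):
        """Wrap blank-line-separated chunks of text in <p>...</p>."""
        chunks = [c.strip() for c in PARA_BREAK.split(body) if c.strip()]
        return "\n".join("<p>%s</p>" % c for c in chunks)

    sample = "First paragraph of prose.\n\nSecond paragraph,\nstill the same block.\n"
    print(infer_paragraphs(sample))
    # <p>First paragraph of prose.</p>
    # <p>Second paragraph,
    # still the same block.</p>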
Other obvious changes are to get rid of namespace prefixes unless there
is actual ambiguity, to automatically infer end tags at paragraph breaks,
and to allow /> as a means of closing the current lexical context. In
the rare case that block structure is actually needed beyond this,
explicit blocking can be used.
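Again purely as an illustrative sketch (the function names are mine, not
any tool's): end-tag inference can be thought of as a stack of open
elements, where a bare /> pops the innermost element and a paragraph
break pops everything down to the nearest block-level element:

    def close_current(stack, out):
        """Emit the end tag for the innermost open element (the '/>' shorthand)."""
        if stack:
            out.append("</%s>" % stack.pop())

    def paragraph_break(stack, out, block_level=("p", "li", "section")):
        """Close open elements down to and including the nearest block element."""
        while stack:
            name = stack.pop()
            out.append("</%s>" % name)
            if name in block_level:
                break

    # "<p><em>some text" followed by a blank line would emit "</em></p>"
    # without the author ever typing either end tag.
    out, stack = [], ["p", "em"]
    paragraph_break(stack, out)
    print("".join(out))   # </em></p>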
The feature of Wikipedia markup that I do not like is that the markup
soon becomes unwieldy once you go beyond the most commonly used
features. I don't know many people who can use the wiki table markup,
for example.
I am currently experimenting with a markup in which elements and
attributes have the same consistent syntax: <p color=red> becomes
<p <color red>>. In short, we end up more or less back at S-expressions
with angle brackets instead of round ones.
The main difference is that in document structure it is really not
necessary to throw in all the close tags; they only distract. If
something can be inferred, then infer it.
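For what it is worth, here is a toy reading of the <p <color red>> form
above, just to make the mapping concrete; the grammar is only sketched
and this version handles a single attribute:

    import re

    # <elem <attr value>>  ->  <elem attr="value">   (single attribute only)
    ATTR = re.compile(r"<(\w+)\s+<(\w+)\s+([^<>]+)>\s*>")

    def to_xml(src):
        """Rewrite the nested attribute form as a conventional start tag."""
        return ATTR.sub(lambda m: '<%s %s="%s">' % (m.group(1), m.group(2), m.group(3)), src)

    print(to_xml("<p <color red>>"))   # <p color="red">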
-----Original Message-----
From: ietf-bounces(_at_)ietf(_dot_)org
[mailto:ietf-bounces(_at_)ietf(_dot_)org] On
Behalf Of Dave Crocker
Sent: Friday, January 13, 2006 12:44 AM
To: Bill Fenner
Cc: ietf(_at_)ietf(_dot_)org; paul(_dot_)hoffman(_at_)vpnc(_dot_)org
Subject: Re: Alternative formats for IDs
> I don't think that converting to xml is the same class of work.
> There's a great deal of semantic information that should be encoded in
> the XML that isn't in the submitted text and doesn't have to be in the
> nroff.
Strictly speaking, you are certainly right.
But I lived with nroff for quite a few years and I have had to do quite
a few txt-2-xml2rfc conversions recently. The difference in semantic
encoding, that you cite, is offset by how easily nroff formatting errors
can be made and not readily detected.
Mostly, this sort of conversion work has a small, relatively
standardized "vocabulary" of text to add or change and one gets into a
rhythm. From that perspective, I suspect the work is about the same. The
real difference is that debugging the xml2rfc conversion is probably
MUCH easier.
d/
--
Dave Crocker
Brandenburg InternetWorking
<http://bbiw.net>
_______________________________________________
Ietf mailing list
Ietf(_at_)ietf(_dot_)org
https://www1.ietf.org/mailman/listinfo/ietf