ietf-822

Re: The <cid: ...> URL - who implements it?

2001-02-07 05:05:08
In <p05010411b6a578bf5af0@[130.237.150.141]> Jacob Palme <jpalme@dsv.su.se> writes:


> At 11.55 +0000 01-02-05, Charles Lindsey wrote:

>> So this still leaves the question of how to send a digest (or other
>> multipart) in either mail or news and to include with it some text
>> explaining what it is about and including pointers to the individual
>> digest items (whether in the form of a table of contents, or otherwise).

>> That sounds like a useful thing to do, and it would seem that URLs
>> identifying the items by their Content-ID are the proper way to do it, and
>> the <cid: ...> URL seems to be the proper one for the job.

Actually, I have now read RFC 2110, and I see that Content-Location would
be a more suitable tool than Content-ID. But otherwise my argument stands.
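
For concreteness, the sort of structure I have in mind might look roughly
like this (the boundaries, Content-IDs, addresses and item titles are all
invented for illustration; only the shape matters):

   From: list-owner@example.org
   Subject: Example Digest V1 #42
   MIME-Version: 1.0
   Content-Type: multipart/mixed; boundary="outer"

   --outer
   Content-Type: text/plain

   Today's topics:

     1. Re: Widget polarity      <cid:item1@example.org>
     2. New widget catalogue     <cid:item2@example.org>

   --outer
   Content-Type: multipart/digest; boundary="inner"

   --inner
   Content-ID: <item1@example.org>

   [item 1, a message/rfc822 body part, goes here]

   --inner
   Content-ID: <item2@example.org>

   [item 2 likewise]

   --inner--
   --outer--

A reader which understood cid: URLs could resolve each entry in the table
of contents to the corresponding item elsewhere in the same message; with
Content-Location instead, the table entries would simply carry whatever
URLs the individual items had been labelled with.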

> What you are doing is inventing a new mark-up language for
> mail and news, in addition to the existing richtext and HTML
> mark-up languages.

Yes, a slightly extended text/plain. There is precedent for that in
Gellens' format=flowed parameter to text/plain (I forget the RFC number).

> It seems silly to invent a new rich text format just because you
> dislike HTML. Would it not be better, instead, to analyze the reasons
> you dislike HTML, and specify a subset of HTML, including restrictions
> on where to use it, which satisfies your needs but avoids what you
> dislike in HTML? If such a subset of HTML could be defined that avoids
> your problems with HTML, it would allow reading software to use the
> same code for interpreting full HTML and the subset you allow.
> So only the producing software needs to be rewritten to support
> the new format.

But that is no solution. There would be no way to enforce adherence to
such a subset, and you grossly under-estimate the pathological hatred of
HTML when people try to use it on Usenet.

> If, for example, you dislike HTML because it is difficult to read
> for those who read it as plain text, it might be possible to
> define a way of using HTML to avoid this problem.

> An example of such human-readable HTML coding is shown below:

And your example already introduces more extraneous material than many
Usenet readers would be willing to tolerate.

What I am after is the regularisation of the present practice whereby
software attempts to recognise URLs when it finds them in the midst of
text/plain (usually by recognising anything starting with "www." or
enclosed in <...>).

One can visualise a situation where, with a suitable "format=url"
parameter to text/plain, one could say that conforming software SHOULD
recognise anything of the form "<xxx:.....>" as a URL, and MAY attempt to
recognise other cases. One might provide restrictions on what "xxx" could
be ("url" and "uri" would be the minimal candidates, obviously).

-- 
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Email:     chl@clw.cs.man.ac.uk  Web:   http://www.cs.man.ac.uk/~chl
Voice/Fax: +44 161 436 6131      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9     Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5