
Re: New schemes vs recycling "http:" (Re: Past LC comments on draft-ietf-geopriv-http-location-delivery-08)

2008-08-07 11:39:32
Tim Bray wrote:
> On Thu, Aug 7, 2008 at 10:23 AM, Keith Moore
> <moore@network-heretics.com> wrote:

>>> The TAG is in fact clearly correct when they state that
>>> introduction of new URI schemes is quite expensive.
>>
>> To me it seems that this depends on the extent to which those new
>> URI schemes are to be used in contexts where existing URI schemes
>> are used.  New URI schemes used in new contexts or applications are
>> not overly burdensome.

> Right, but there's a contradiction lurking here. You probably wouldn't
> bother to use URI syntax unless you expected fairly wide utilization,
> or to benefit from the plethora of existing URI-parsing and -resolving
> software.

Disagree.  Neither wide utilization nor the ability to reuse existing
software was ever necessary to make URIs useful.  A compact notation for
naming resources (and in some cases, suggesting how those resources can
be accessed) is useful in its own right to almost any networked
application, even if that application has to implement its own URI
parsing routines, and even if it doesn't utilize the HTTP infrastructure
at all.
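
As an illustration of that point, here is a minimal sketch (in Python) of
a self-contained URI parser built from the generic-syntax regular
expression given in RFC 3986, Appendix B.  The "geoloc:" scheme in the
example is made up purely for illustration; the point is only that a new
scheme costs an application a few lines of parsing code and no web
infrastructure at all.

    import re

    # RFC 3986, Appendix B: one regular expression splits any URI into
    # its five generic components, regardless of scheme.
    URI_RE = re.compile(
        r'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')

    def parse_uri(uri):
        """Split a URI into its generic components."""
        m = URI_RE.match(uri)
        return {
            'scheme':    m.group(2),
            'authority': m.group(4),
            'path':      m.group(5),
            'query':     m.group(7),
            'fragment':  m.group(9),
        }

    # A hypothetical new scheme parses just as easily as "http:";
    # no HTTP library or server is involved.
    print(parse_uri('geoloc://lis.example.com:4443/ref/abc123?fmt=pidf'))
    # -> {'scheme': 'geoloc', 'authority': 'lis.example.com:4443',
    #     'path': '/ref/abc123', 'query': 'fmt=pidf', 'fragment': None}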

Also, URIs have mindshare, the value of which should not be
underestimated.  People all over the world are used to dealing with them
and - at some level - understand their limitations.

> The notion of wanting to use URI syntax but simultaneously requiring
> a new scheme is often a symptom of fuzzy thinking.

If URIs were a good idea in the context of the web, it's hardly
surprising that they might be a good idea in the context of other networked applications.

> And in the specific case of XRI, which seems designed as an extremely
> general-purpose thing, the cost is clearly very high, so the benefits
> need to be compelling.

I haven't followed XRI enough to have an opinion about it.

>> It should also be recognized that overloading URI schemes (as well
>> as overloading HTTP) is also expensive, though in a different way.
>> The consequence of overloading is that functionality is reduced and
>> interoperability suffers.

> Got an example?  I'm having trouble thinking of any problems I've run
> across that could be ascribed to this. -Tim

The most obvious example that comes to mind is that protocols tend to
evolve separately from one another, even when derived from a common
ancestor - and the scope/applicability of each protocol is likely to
change over time.  Whenever any protocol (including HTTP) is adapted for
some application that is sufficiently removed from its "normal" use, and
especially if that new application itself attracts a lot of users or
implementations, there is a tendency to "tweak" the original protocol to
better align it with the new application.  For instance, we can see how
this has happened with email message headers when they were adapted for
NetNews and for HTTP requests and responses.  (It also happened when RFC
822 was adapted for use on BITNET and UUCP networks.)  If the protocol
being adapted is HTTP, and HTTP URIs are used to name resources in the
new application, the interoperability of HTTP URIs will suffer.
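
To make the interoperability concern concrete, here is a minimal sketch
(in Python; the "geoloc:" scheme and both handler functions are made up
for illustration) of scheme-based dispatch.  With a distinct scheme the
identifier itself says which protocol rules apply; if the same
references are minted as plain "http:" URIs, the client needs
out-of-band knowledge to tell them apart, and nothing stops the two
usages from diverging over time.

    from urllib.parse import urlsplit

    # Hypothetical handlers, purely for illustration.
    def fetch_web_page(uri):
        return 'HTML document retrieved from ' + uri

    def dereference_location(uri):
        return 'location object retrieved from ' + uri

    def dispatch(uri):
        scheme = urlsplit(uri).scheme
        if scheme == 'http':
            return fetch_web_page(uri)        # ordinary web semantics
        if scheme == 'geoloc':                # hypothetical new scheme
            return dereference_location(uri)  # location-retrieval rules
        raise ValueError('no handler for scheme: ' + scheme)

    print(dispatch('http://www.example.com/index.html'))
    print(dispatch('geoloc://lis.example.com/ref/abc123'))

    # If location references were instead minted as ordinary "http:"
    # URIs, the two calls above would look identical from the URI
    # alone, and a generic HTTP client would have no way to know that
    # different rules apply to the second one.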

Keith
_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf
