There is an equivalent model between the client and their access
provider, through their terms of service/employment (as the case may
be). However, while CDNs are developing mechanisms to allow content
providers to control how their objects are served, developing such
controls for the client-to-access-provider relationship is much more
difficult.
This is not only because it requires integration into the client --
That's difficult, but this client meta-data could be provided through
different means which don't require a client infrastructure upgrade,
e.g. through a login protocol.
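The login-protocol idea above could be sketched as follows. This is a
minimal illustration, not a real protocol: the names (`on_login`,
`may_adapt`, the service keys) are invented for the example, and it
assumes the access provider can associate a subscriber's stored
adaptation preferences with their address at login time, so no client
software upgrade is needed.

```python
# Hypothetical sketch: client adaptation preferences are registered at
# login time (e.g. via the access provider's AAA step), so intermediaries
# can consult them without any per-request meta-data from the client.
# All names here are illustrative, not from any specification.

from typing import Dict

# Preferences keyed by client address, populated by the login protocol.
prefs_by_client: Dict[str, Dict[str, bool]] = {}

def on_login(client_addr: str, profile: Dict[str, bool]) -> None:
    """Called when the subscriber logs in; stores the adaptation
    preferences they configured out of band (e.g. on a web portal)."""
    prefs_by_client[client_addr] = profile

def may_adapt(client_addr: str, service: str) -> bool:
    """An intermediary checks the stored profile instead of requiring
    the client itself to signal anything; unknown clients or services
    default to no adaptation."""
    return prefs_by_client.get(client_addr, {}).get(service, False)
```

The design choice worth noting is the default: absent any profile, no
adaptation is authorized, which matches the concern that services should
not act without some expression of permission.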
which alone makes it a difficult issue. It also requires insight into
the semantics of messages -- both requests and responses -- to make
reasonable decisions about how to act. Unfortunately, this knowledge
resides at the content provider. URIs are explicitly opaque; current
systems which derive object characteristics from them are extremely
limited, and arguably limit the functionality and extensibility of the
Web.
Obviously, there are some legitimate functions that they provide -
such as caching (which might be thought of as a form of processing),
access control, etc. However, I get nervous when people start talking
about changing the messages themselves in-flight, without any
permission or control from either the end user or content provider.
The content provider rules/proxylets loaded on the OPES box in the
network have full knowledge of the message semantics, plus the URIs for
their domain. The CP's rule/proxylet combination will be the one
authorizing in-flight content modification, or denying it.
Transformation/transcoding of valuable content stays under the CP's
control. Additionally, the CP will be informed of the operation.
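The rule/proxylet arrangement described above might look something like
this sketch. It is not OPES rule-language syntax; the `Message` type,
the rule function, and the notification callback are all assumptions
made for illustration, and the transcoding step is a placeholder.

```python
# Hypothetical sketch of a CP-supplied rule running on an OPES box:
# the rule authorizes or denies in-flight transformation for the CP's
# own domain, and the CP is notified of the operation either way.
# All names and URIs are illustrative.

from dataclasses import dataclass

@dataclass
class Message:
    uri: str
    content_type: str
    body: bytes

def cp_rule(msg: Message) -> bool:
    """Content-provider rule: only content under this CP's domain is in
    scope, valuable (premium) content is never modified in flight, and
    only images are eligible for transcoding."""
    if not msg.uri.startswith("http://example-cp.com/"):
        return False  # not our domain: no authorization possible
    if msg.uri.startswith("http://example-cp.com/premium/"):
        return False  # valuable content: deny in-flight modification
    return msg.content_type.startswith("image/")

def transcode(body: bytes) -> bytes:
    """Placeholder for a real transcoding proxylet."""
    return body[:1000]

def process(msg: Message, notify) -> Message:
    """OPES box dispatch: apply the CP rule, transform only when
    authorized, and inform the CP of what happened."""
    if cp_rule(msg):
        result = Message(msg.uri, msg.content_type, transcode(msg.body))
        notify(msg.uri, "transcoded")
        return result
    notify(msg.uri, "passed-through")
    return msg
```

The point of the sketch is the control split: the decision logic and the
transformation both come from the CP, while the box merely executes them
and reports back.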
Ultimately, I'm concerned that the standardization of processing
intermediaries in HTTP may restrict the usefulness of the Web,
rather than supplement it; application designers and users doing
things that the intermediary service designers didn't foresee will
have to deal with crossing an ill-defined boundary between the
"normal" web and the "adapted" web.
IMHO, processing intermediaries for HTTP or other protocols have the
potential to enable a wide set of content-based services, which should
be under someone's control, mainly the CP, ISP or end-users.