ietf-openproxy

Re: OPES Ownership

2001-02-02 17:41:38
1. A model of the client/access provider relationship is that the
client subscribes to content services and the access provider
exercises them.  There could be a request-by-request mechanism
for the client to specify which services should or shouldn't
be applied to a request, by embedding proxylet content in
his request (a rough sketch of what such a request might carry
appears below, after item 3).  Well, that's cute; I hadn't been
able to think of a reason for request proxylets before.  Getting
browsers to support this is a header of a different color, though.

2. There are business models that induce the content-access-user
chain to operate on behalf of the user, despite the rapacious
practices that we've become accustomed to seeing.  I can't
say that these models will dominate the market, but we see
that it's difficult to predict what tomorrow's web pages will
bring.

3. What's going on here is that the "authority" is getting a
finer grained definition of content and the control over
it.  This gives more opportunities for doing business.
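
To make item 1 concrete, here is a rough sketch of per-request
service selection.  None of this is standardized OPES syntax; the
"Proxy-Services" header and the helper below are invented purely
for illustration, with Python used only as pseudocode that runs.

    # Hypothetical sketch only: no OPES wire format for request
    # proxylets or service preferences exists; the header name and
    # token syntax below are made up to illustrate the idea.

    def parse_service_preferences(headers):
        """Split a hypothetical preference header into allow/deny sets."""
        allowed, denied = set(), set()
        for token in headers.get("Proxy-Services", "").split(","):
            token = token.strip()
            if not token:
                continue
            if token.startswith("-"):
                denied.add(token[1:])
            else:
                allowed.add(token.lstrip("+"))
        return allowed, denied

    # Client asks for virus scanning but opts out of image transcoding.
    request_headers = {"Proxy-Services": "+virus-scan, -image-transcode"}
    allow, deny = parse_service_preferences(request_headers)
    print("apply:", allow)   # {'virus-scan'}
    print("skip:", deny)     # {'image-transcode'}

A real deployment would of course need the browser to emit such a
header, which is exactly the uptake problem noted in item 1.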

I think the model brings opportunities for correct control
of content into what is becoming an increasingly complicated
distributed system.  A richer model for control is important for scalability.

Hilarie

Mark Nottingham <mnot@akamai.com> 02/02/01 05:29PM >>>
On Fri, Feb 02, 2001 at 02:28:19PM -0800, Maciocco, Christian wrote:
Mark, 

There is an equivalent model between the client and their access
provider, through their terms of service/employment (as the case may
be). However, while CDNs are developing mechanisms to allow content
providers to control how their objects are served, developing such
controls for client->access providers is much more difficult.

This is not only because it requires integration into the client --

That's difficult, but this client meta-data could be provided
through different means which don't require a client
infrastructure upgrade, e.g. through a login protocol.

I'm thinking in terms of both service activation and de-activation.
For example, let's say that the access provider provisions services
(either on its own or with the consent of the end user), and applies
a service to a particular resource when it's inappropriate. What
mechanism of control does the client have over the application of
that service? 


Obviously, there are some legitimate functions that they provide -
such as caching (which might be thought of as a form of processing),
access control, etc. However, I get nervous when people start talking
about changing the messages themselves in-flight, without any
permission or control from either the end user or content provider.

The content provider rules/proxylets loaded on the OPES box in the
network have full knowledge of the message semantics, plus the URIs
for their domain. The CP's rule/proxylet combination will be the
ones authorizing in-flight content modification, or denying it.
Transformation/transcoding of valuable content stays under the CP's
control. Additionally, the CP will be informed of the operation.
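
To make that concrete, here is a minimal sketch of the rule/proxylet
idea, assuming a rule table keyed on the CP's own domain.  This is
not any actual OPES rule language; CP_RULES, the service names, and
the notify_provider hook are all invented for illustration.

    from urllib.parse import urlsplit

    # Hypothetical rules a content provider might load onto the OPES box.
    CP_RULES = {
        "www.example.com": {
            "image-transcode": "allow",   # CP permits transcoding its images
            "ad-insertion": "deny",       # CP forbids third-party insertion
        }
    }

    def authorized(url, service):
        """True only if the URI's authority has explicitly allowed the service."""
        host = urlsplit(url).hostname or ""
        return CP_RULES.get(host, {}).get(service) == "allow"

    def notify_provider(url, service):
        # Stand-in for the "CP will be informed of the operation" step.
        print(f"notify {urlsplit(url).hostname}: {service} applied to {url}")

    url = "http://www.example.com/pics/photo.jpg"
    if authorized(url, "image-transcode"):
        notify_provider(url, "image-transcode")   # transform, then report
    # otherwise the response passes through untouched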

Access providers which have a contract with a content provider will,
yes. Frankly, though, most of the service examples that I've seen,
especially those with commercial interest behind them, have nothing
to do with the content provider; they're all about slipping in
services to enhance the access provider's revenue / control over
their users.


Ultimately, I'm concerned that the standardization of processing
intermediaries in the HTTP may restrict the usefulness of the Web,
rather than supplement it; application designers and users doing
things that the intermediary service designers didn't foresee will
have to deal with crossing an ill-defined boundary between the
"normal" web and the "adapted" web.

IMHO, processing intermediaries for HTTP or other protocols have
the potential to enable a wide set of content-based services, which
will be under someone's control: mainly the CP, the ISP, or the
end user.

Yes, the potential is fantastic, especially with control from the
content provider. However, see above. 

ICAP portrays itself as taking care of all of the protocol details
for the service author; nothing could be further from the truth.
Writing such a service is a huge responsibility; the author must
correctly interpret the context of the request from the client's
point of view, the semantics of both messages (including request
methods, status codes, content encodings, etc.) and select the
appropriate action to take. There are no guidelines for making these
decisions. Even with input from the content provider, it is not a
trivial problem.
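
For a sense of what "correctly interpret" means in practice, here is
a rough sketch of the kind of conservative gate a service author would
need before touching a response body.  The safe_to_adapt helper is
hypothetical (it is not part of ICAP or any OPES interface), and the
checks shown are far from exhaustive.

    # A rough sketch of the checks glossed over when a callout API claims
    # to "take care of the protocol details" for the service author.

    def safe_to_adapt(method, status, resp_headers):
        """Conservative gate before rewriting a response body in flight."""
        if method not in ("GET", "POST"):          # unknown methods: hands off
            return False
        if status != 200:                          # 206/304/4xx/5xx need care
            return False
        cc = resp_headers.get("Cache-Control", "")
        if "no-transform" in cc.lower():           # explicit opt-out, RFC 2616
            return False
        if resp_headers.get("Content-Encoding"):   # body is gzip/deflate-coded;
            return False                           # decode first or leave alone
        ctype = resp_headers.get("Content-Type", "")
        return ctype.startswith("text/html")       # adapt only what we understand

    print(safe_to_adapt("GET", 200, {"Content-Type": "text/html"}))      # True
    print(safe_to_adapt("GET", 200, {"Content-Type": "text/html",
                                     "Cache-Control": "no-transform"}))  # False
    print(safe_to_adapt("GET", 206, {"Content-Type": "text/html"}))      # False

Even this toy version has to know about request methods, status codes,
content codings, and the no-transform directive; a real service has to
get all of that right for every message it sees.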

One of the basic tenets of the Web is that a URI points to a resource
under control of its authority, and is opaque; you can't derive
meaning from the extension .jpg, for instance, even though it usually
means that the object is a JPEG image. 
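
As a toy illustration of the opacity point (both helpers below are
invented): the two functions can disagree, and only the declared
media type reflects what the resource's authority actually served.

    def looks_like_jpeg_by_uri(uri):
        # Error-prone: the ".jpg" suffix is a convention, not a contract.
        return uri.lower().endswith((".jpg", ".jpeg"))

    def is_jpeg_by_declaration(resp_headers):
        # The declared media type is what the authority actually meant.
        return resp_headers.get("Content-Type", "").startswith("image/jpeg")

    uri = "http://www.example.com/report.jpg"     # could map to anything
    headers = {"Content-Type": "text/html"}       # the server says HTML
    print(looks_like_jpeg_by_uri(uri))        # True  -- guessed from the name
    print(is_jpeg_by_declaration(headers))    # False -- what was served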

Introducing processing intermediaries which aren't under the control
of either the end user or content provider violates this, in that the
content is no longer under control of the authority, and often that
modification is triggered by trying to interpret the semantics of the
URI and/or messages, which is error-prone. 

People use the Web and HTTP in particular for a wide variety of
purposes, many of which are impossible to anticipate. How will OPES
services interoperate with WebDAV messages? Will P3P and RDF
statements about resources be valid once responses are changed by an
OPES intermediary?

If the scope of this group is indeed limited to where a trust
relationship exists with the content provider, there will still be no
way to prevent the mis-application of the technology, conveniently
already incorporated by proxy vendors into their products for
"legitimate" uses.

I don't know where it goes from there. I can see some use in defining
guidelines for service authors, to avoid some of the problems. I
don't know that it's possible to enforce any kind of trust model in
the protocols. My first inclination is to say it shouldn't be
standardized at all; without a trust model, the potential problems
will do more harm than good, especially since there have been
whisperings about OPES intermediaries for all kinds of protocols, not
just HTTP. Retrofitting a processing intermediary model into one
protocol is ambitious enough, IMHO.

I do know that problems of this nature have cropped up before in the
IETF. Any of the Grey Ones care to comment?


-- 
Mark Nottingham, Research Scientist
Akamai Technologies (San Mateo, CA)
