
Re: OPES Ownership

2001-02-05 14:43:44
Mark,

I think you make some excellent points here. As I read through this, I was wondering if some variant of the P3P pattern might not be applied.

Taking a client-side view:

(a) have a URI identify any intermediate processing that might be performed.

(b) only allow intermediary processing to be applied if the request is accompanied by an appropriate (say) 'allow-processing:' option quoting the corresponding URI.

(c) use a P3P style policy discovery to allow the client to find out what assurances the proxy will give about the nature of processing performed.

(d) (as a possible extension to this kind of scheme, the client might supply session-key information so that the proxy can examine encrypted material.)
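
(Purely to make (a)-(c) concrete, here is a rough Python sketch of the check a proxy might make before touching a request. The 'allow-processing' option name, the processing URI and the policy location are placeholders of mine, not proposed syntax.)

# Sketch only: a proxy applies a processing step only if the client
# explicitly quoted the URI identifying that step.
PROCESSING_URI = "http://proxy.example.net/processing/image-downscale"   # (a)

def processing_allowed(request_headers):
    """(b) True only if the request quotes the processing URI."""
    quoted = request_headers.get("allow-processing", "")
    permitted = {uri.strip() for uri in quoted.split(",") if uri.strip()}
    return PROCESSING_URI in permitted

def policy_location(processing_uri):
    """(c) P3P-style discovery: where the proxy publishes its assurances
    about this processing step (location and format are assumptions)."""
    return processing_uri + "/policy"

# e.g. processing_allowed({"allow-processing": PROCESSING_URI}) -> True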

From the server-side:

(e) Include headers with data indicating what intermediate processing can be performed, OR

(f) allow a proxy to make a metadata request to the origin server to find intermediate processing permissions.
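
(Again purely as illustration of (e)/(f): the 'X-Allow-Processing' response header and the '/processing-policy' metadata location below are invented for the example, not existing conventions.)

import urllib.request

def server_permits(response_headers, origin_base, processing_uri):
    # (e) The origin server lists permitted processing in a response header.
    header = response_headers.get("X-Allow-Processing")
    if header is not None:
        return processing_uri in {u.strip() for u in header.split(",")}
    # (f) Otherwise make a separate metadata request to the origin server
    # to discover its intermediate-processing permissions.
    with urllib.request.urlopen(origin_base + "/processing-policy") as resp:
        return processing_uri in resp.read().decode("utf-8")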

This is all very half-baked... my aim here is to suggest mechanisms that can ensure any processing performed accords with the wishes of the sender, the receiver, or both.

#g
--

At 04:29 PM 2/2/01 -0800, Mark Nottingham wrote:
On Fri, Feb 02, 2001 at 02:28:19PM -0800, Maciocco, Christian wrote:
> Mark,
>
> > There is an equivalent model between the client and their access
> > provider, through their terms of service/employment (as the case may
> > be). However, while CDNs are developing mechanisms to allow content
> > providers to control how their objects are served, developing such
> > controls for client->access providers is much more difficult.
> >
> > This is not only because it requires integration into the client --
>
> That's difficult, but this client meta-data could be provided
> through a different means which doesn't require a client
> infrastructure upgrade, e.g. through a login protocol.

I'm thinking in terms of both service activation and de-activation.
For example, say that the access provider provisions services (either
on its own or with the consent of the end user) and applies a service
to a particular resource when it's inappropriate. What mechanism of
control does the client
have over the application of that service?


> > Obviously, there are some legitimate functions that they provide -
> > such as caching (which might be thought of as a form of processing),
> > access control, etc. However, I get nervous when people start talking
> > about changing the messages themselves in-flight, without any
> > permission or control from either the end user or content provider.
>
> The content provider rules/proxylets loaded on the OPES box in the
> network have full knowledge of the message semantics, plus the URIs
> for their domain. The CP's rule/proxylet combination will be the
> ones authorizing in-flight content modification, or denying it.
> Transformation/transcoding of valuable content stays under the CP's
> control. Additionally, the CP will be informed of the operation.

Access providers which have a contract with a content provider will,
yes. Frankly, though, most of the service examples that I've seen,
especially those with commercial interest behind them, have nothing
to do with the content provider; they're all about slipping in
services to enhance the access provider's revenue / control over
their users.


> > Ultimately, I'm concerned that the standardization of processing
> > intermediaries in the HTTP may restrict the usefulness of the Web,
> > rather than supplement it; application designers and users doing
> things that the intermediary service designers didn't foresee will
> have to deal with crossing an ill-defined boundary between the
> > "normal" web and the "adapted" web.
>
> IMHO I think that processing intermediaries for HTTP or other
> protocols have the potential to enable a wide set of content based
> services, which will be under someone's control, mainly the CP, ISP
> or end-users.

Yes, the potential is fantastic, especially with control from the
content provider. However, see above.

ICAP portrays itself as taking care of all of the protocol details
for the service author; nothing could be further from the truth.
Writing such a service is a huge responsibility; the author must
correctly interpret the context of the request from the client's
point of view, the semantics of both messages (including request
methods, status codes, content encodings, etc.) and select the
appropriate action to take. There are no guidelines for making these
decisions. Even with input from the content provider, it is not a
trivial problem.
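
To see why, consider the gatekeeping logic such a service has to get
right before it touches anything (a rough Python sketch; the field
names are mine, not ICAP's, and the list is far from complete):

# A fraction of the checks an adaptation service must make before acting.
def should_transform(request, response):
    if request.get("method") != "GET":                        # request semantics
        return False
    if response.get("status") != 200:                         # status codes
        return False
    headers = response.get("headers", {})
    if headers.get("Content-Encoding"):                       # content encodings
        return False                                          # (or decode first)
    if "no-transform" in headers.get("Cache-Control", ""):    # explicit opt-out
        return False
    if not headers.get("Content-Type", "").startswith("image/jpeg"):
        return False                                          # only declared JPEGs
    return True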

One of the basic tenets of the Web is that a URI points to a resource
under control of its authority, and is opaque; you can't derive
meaning from the extension .jpg, for instance, even though it usually
means that the object is a JPEG image.

Introducing processing intermediaries which aren't under the control
of either the end user or content provider violates this, in that the
content is no longer under control of the authority, and often that
modification is triggered by trying to interpret the semantics of the
URI and/or messages, which is error-prone.
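
(A contrived fragment, with a made-up URL, of exactly the kind of guess
that goes wrong:)

def looks_like_jpeg(uri):
    # The tempting but error-prone inference: meaning from the URI's spelling.
    return uri.lower().endswith(".jpg")

# ".../photo.jpg" may really be a redirect, an error page, or negotiated
# content; only the authority behind the URI decides what it is.
print(looks_like_jpeg("http://example.org/photo.jpg"))   # True, but proves nothing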

People use the Web and HTTP in particular for a wide variety of
purposes, many of which are impossible to anticipate. How will OPES
services interoperate with WebDAV messages? Will P3P and RDF
statements about resources be valid once responses are changed by an
OPES intermediary?

If the scope of this group is indeed limited to where a trust
relationship exists with the content provider, there will still be no
way to prevent the mis-application of the technology, conveniently
already incorporated by proxy vendors into their products for
"legitimate" uses.

I don't know where it goes from there. I can see some use in defining
guidelines for service authors, to avoid some of the problems. I
don't know that it's possible to enforce any kind of trust model in
the protocols. My first inclination is to say it shouldn't be
standardized at all; without a trust model, the potential problems
will do more harm than good, especially since there have been
whisperings about OPES intermediaries for all kinds of protocols, not
just HTTP. Retrofitting a processing intermediary model into one
protocol is ambitious enough, IMHO.

I do know that problems of this nature have cropped up before in the
IETF. Any of the Grey Ones care to comment?


--
Mark Nottingham, Research Scientist
Akamai Technologies (San Mateo, CA)

------------
Graham Klyne
(GK@ACM.ORG)

