ietf-openproxy

RE: OPES Ownership

2001-02-05 10:09:51
-----Original Message-----
From: Mark Nottingham [mailto:mnot@akamai.com]
Sent: Friday, February 02, 2001 4:29 PM
To: Maciocco, Christian
Cc: ietf-openproxy@imc.org
Subject: Re: OPES Ownership


On Fri, Feb 02, 2001 at 02:28:19PM -0800, Maciocco, Christian wrote:
Mark, 

There is an equivalent model between the client and their access
provider, through their terms of service/employment (as the case may
be). However, while CDNs are developing mechanisms to allow content
providers to control how their objects are served, developing such
controls for client->access providers is much more difficult.

This is not only because it requires integration into the client --

That's difficult, but this client meta-data could be provided
through different means which don't require a client infrastructure
upgrade, e.g. through a login protocol.

I'm thinking in terms of both service activation and de-activation.
For example, let's say that an access provider provisions services
(either on its own or with the consent of the end user), and applies
a service to a particular resource when it's inappropriate. What
mechanism of control does the client have over the application of
that service?

Initially there will be client-based service requests, which will be
provisioned by their ISP/CDN providers and extend across the "usual"
session lifetime until deactivated by the end users. Long term, if
OPES is successful, the service request/characteristics could be
session-based, but this would require client changes.
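A minimal sketch of that lifecycle (Python, names invented for
illustration only): the service is requested once, provisioned by the
ISP/CDN, and stays active across individual sessions until the end
user explicitly turns it off.

  # Rough sketch of the provisioning lifecycle described above.
  # All names are illustrative, not taken from any OPES document.

  import time

  class ProvisionedService:
      def __init__(self, name, subscriber):
          self.name = name
          self.subscriber = subscriber
          self.active = True
          self.activated_at = time.time()

      def deactivate(self):
          # Explicit end-user action; until then the service persists
          # across individual HTTP sessions.
          self.active = False

  svc = ProvisionedService("content-filter", "alice")
  svc.deactivate()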
 

Obviously, there are some legitimate functions that they provide -
such as caching (which might be thought of as a form of processing),
access control, etc. However, I get nervous when people start talking
about changing the messages themselves in-flight, without any
permission or control from either the end user or content provider.

The content provider's rules/proxylets loaded on the OPES box in the
network have full knowledge of the message semantics, plus the URIs
for their domain. The CP's rule/proxylet combination will be the one
authorizing in-flight content modification, or denying it.
Transformation/transcoding of valuable content stays under the CP's
control. Additionally, the CP will be informed of the operation.
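A toy authorization check along those lines (illustrative Python
only -- this is not the OPES rule language, and the domain and
transformation names are made up):

  # Toy illustration: the content provider's rule decides whether a
  # proxylet may modify content for URIs in the CP's own domain.

  from urllib.parse import urlparse

  CP_RULES = {
      # domain -> transformations the content provider has authorized
      "example-cp.com": {"transcode-image", "insert-notice"},
  }

  def modification_allowed(url, transformation):
      """True only if the URI's authority has authorized this change."""
      host = urlparse(url).hostname or ""
      return transformation in CP_RULES.get(host, set())

  modification_allowed("http://example-cp.com/a.jpg", "transcode-image")  # True
  modification_allowed("http://other.org/a.jpg", "transcode-image")       # False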

Access providers which have a contract with a content provider will,
yes. Frankly, though, most of the service examples that I've seen,
especially those with commercial interest behind them, have nothing
to do with the content provider; they're all about stripping in
services to enhance the access provider's revenue / control over
their users.

I view content as falling into different buckets. One bucket is the
one we see today, and I agree the content provider is often out of
the loop. In another bucket, with the (slow) arrival of broadband, we
see content providers who really do care about their content, the
look and feel of their content on the receiving devices, and where
this content goes. To enable valuable content delivery, these
providers will have to be part of some deal somewhere.

Ultimately, I'm concerned that the standardization of processing
intermediaries in HTTP may restrict the usefulness of the Web, rather
than supplement it; application designers and users doing things that
the intermediary service designers didn't foresee will have to deal
with crossing an ill-defined boundary between the "normal" web and
the "adapted" web.

IMHO, processing intermediaries for HTTP or other protocols have the
potential to enable a wide set of content-based services, which will
be under someone's control, mainly the CP's, the ISP's or the end
user's.

Yes, the potential is fantastic, especially with control from the
content provider. However, see above. 

ICAP portrays itself as taking care of all of the protocol details
for the service author; nothing could be further from the truth.
Writing such a service is a huge responsibility; the author must
correctly interpret the context of the request from the client's
point of view, the semantics of both messages (including request
methods, status codes, content encodings, etc.) and select the
appropriate action to take. There are no guidelines for making these
decisions. Even with input from the content provider, it is not a
trivial problem.
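To illustrate how much the service author has to get right, here is
a hedged sketch (plain Python, not ICAP code; the message fields are
represented as dicts purely for illustration) of the kind of checks a
response-modification service would need before touching a message:

  # Sketch of the guard checks a response-modification service needs,
  # per the point above.  Not ICAP; message fields are illustrative.

  def should_transform(request, response):
      """Decide whether it is even safe to consider modifying this response."""
      # Only transform successful responses to plain GETs.
      if request.get("method") != "GET" or response.get("status") != 200:
          return False
      # Don't touch encoded bodies we can't correctly re-encode.
      headers = response.get("headers", {})
      if headers.get("Content-Encoding", "identity") != "identity":
          return False
      # Only handle media types the service actually understands.
      return headers.get("Content-Type", "").startswith("image/jpeg")

  should_transform({"method": "GET"},
                   {"status": 200, "headers": {"Content-Type": "image/jpeg"}})  # True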

One of the basic tenets of the Web is that a URI points to a resource
under control of its authority, and is opaque; you can't derive
meaning from the extension .jpg, for instance, even though it usually
means that the object is a JPEG image. 
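A tiny illustration of that opacity (hypothetical URLs): an
intermediary keying off the ".jpg" extension will sometimes guess
wrong, while the origin server's Content-Type header is the
authoritative signal.

  # URI opacity: the extension is only a hint; the origin's
  # Content-Type header is authoritative.  URLs are made up.

  responses = {
      "http://example.org/photo.jpg":  {"Content-Type": "image/jpeg"},
      "http://example.org/report.jpg": {"Content-Type": "text/html"},
  }

  for url, headers in responses.items():
      guessed_jpeg = url.rsplit(".", 1)[-1] == "jpg"         # extension sniffing
      actual_jpeg = headers["Content-Type"] == "image/jpeg"  # what the authority says
      print(url, "guessed:", guessed_jpeg, "actual:", actual_jpeg)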

Introducing processing intermediaries which aren't under the control
of either the end user or content provider violates this, in that the
content is no longer under control of the authority, and often that
modification is triggered by trying to interpret the semantics of the
URI and/or messages, which is error-prone. 

Proxylets are installed on the OPES box at the request of someone,
e.g. the content provider, and will perform processing/transformations
known to the proxylet owner. Someone must make sure these services are
doing only what they're allowed to do (TBD), e.g. restrict actions to
their domain. Maybe there will be a set of compliance/conformance
service tools.
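As one possible shape for such an enforcement hook (names invented,
Python, complementing the authorization sketch above), the OPES box
could refuse to run a proxylet on URIs outside its owner's registered
domain:

  # Sketch of domain-scoped enforcement: before a proxylet runs, check
  # that the target URI falls inside its owner's registered domain.

  from urllib.parse import urlparse

  class ScopeViolation(Exception):
      pass

  def run_scoped(proxylet, owner_domain, url, response_body):
      host = urlparse(url).hostname or ""
      if host != owner_domain and not host.endswith("." + owner_domain):
          # Out of scope: refuse to run rather than silently modify content.
          raise ScopeViolation(host + " is outside " + owner_domain)
      return proxylet(response_body)

  shrink = lambda body: body[:100]   # stand-in for a real transformation
  run_scoped(shrink, "example-cp.com", "http://www.example-cp.com/x", b"data")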


People use the Web and HTTP in particular for a wide variety of
purposes, many of which are impossible to anticipate. How will OPES
services interoperate with WEBDAV messages? Will P3P and RDF
statements about resources be valid once responses are changed by an
OPES intermediary?

If the scope of this group is indeed limited to where a trust
relationship exists with the content provider, there will still be no
way to prevent the mis-application of the technology, conveniently
already incorporated by proxy vendors into their products for
"legitimate" uses.

I don't know where it goes from there. I can see some use in defining
guidelines for service authors, to avoid some of the problems. I
don't know that it's possible to enforce any kind of trust model in
the protocols. My first inclination is to say it shouldn't be
standardized at all; without a trust model, the potential problems
will do more harm than good, especially since there have been
whisperings about OPES intermediaries for all kinds of protocols, not
just HTTP. Retrofitting a processing intermediary model into one
protocol is ambitious enough, IMHO.

I do know that problems of this nature have cropped up before in the
IETF. Any of the Grey Ones care to comment?


-- 
Mark Nottingham, Research Scientist
Akamai Technologies (San Mateo, CA)


