I think whether or not a CDN is involved does make a difference in terms of
helping us understand the issues involved in how the rule modules and
proxylets are loaded into the OPES box targets in a secure and manageable
manner.
When a CDN is used by the content provider, the issue becomes easier
precisely because there is an explicit business contract in place -- a
one-to-one trust relationship between the CDN and the CP.
However, when a CDN is not in the end-to-end picture, any content provider
would have to deal with MANY access providers to authorize the loading of
rule modules and proxylets. On the other hand, any access provider would
have to deal with MANY content providers. Such a MANY-to-MANY relationship
raises the following questions:
1) Would the access provider allow its OPES box to accept rule modules from
any CP without an explicit business arrangement made in advance?
IMHO: No. I think security concerns would override anything else -- so an
explicit business relationship (i.e. trust) needs to be in place before any
such loading can take place. That means no automatic, on-the-fly loading
of rule modules and proxylets onto the OPES boxes.
2) Isn't it a management/deployment nightmare for a CP to deal with MANY
access providers in order to take advantage of the OPES services those access
providers offer?
IMHO: Yes. So a CDN is a better solution from the content provider's point of
view for deploying services offered by the OPES framework, assuming that the
CDN itself would establish some kind of trust relationship with its access
providers on a one-to-one basis.
In summary, rule modules and proxylets should only flow along a path with an
established trust relationship already in place.
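To make the policy concrete, here is a minimal sketch of an OPES box
consulting an explicit trust list before accepting a rule module. OPES does
not define this API -- every name below is illustrative -- it only
demonstrates the rule above: modules load only over a pre-established
business relationship, never on the fly from an unknown CP.

```python
# Illustrative sketch only; OPES specifies no such interface.
# The trust set stands in for "explicit business arrangement
# made in advance" between the access provider and each CP/CDN.
TRUSTED_CONTENT_PROVIDERS = {"cdn.example.net", "cp.example.com"}

loaded_modules = []  # modules the OPES box has actually installed

def accept_rule_module(origin, module):
    """Install a rule module only if its origin is already trusted."""
    if origin not in TRUSTED_CONTENT_PROVIDERS:
        return False  # no automatic loading from unknown parties
    loaded_modules.append((origin, module))
    return True
```

The point of the sketch is that the trust set is provisioned out of band
(by contract), so the loading step itself needs no on-the-fly negotiation.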
From: Mark Nottingham [mailto:mnot(_at_)akamai(_dot_)com]
Sent: Thursday, February 01, 2001 1:05 PM
To: Erickson, Rob
Subject: Re: OPES Ownership
Nice writeup. I've been interested in these issues for a while, but I'm just
not sure what direction they should go in.
I tend to want to simplify the list of involved parties down to three:
- content provider
- access provider
- end user
Although a CDN may be involved, they have a contractual relationship
with the content provider, and therefore represent the cp's
interests. There is little functional difference between a box
operated by Akamai or Mirror Image and distributed caches or mirrors
operated by the content provider.
It may be more useful to think in terms of "who the service is
provisioned on behalf of" rather than "who deploys the box" -- a CDN
that uses the output of the CDI WG to peer into an ISP's network
still has a contract with the content provider, so that service is
provided on behalf of the content provider. This more generic view
will be more adaptable, IMHO.
In this view, a corporation falls in as an access provider rather
than an end user.
Services provisioned on behalf of the content provider in a CDN are
easy, because they have a contractual relationship. Assuming CDI
works out, it will provide the same for peered networks. In these
cases, a trust model is inherent in the business case.
There is an equivalent model between the client and their access
provider, through their terms of service/employment (as the case may
be). However, while CDNs are developing mechanisms to allow content
providers to control how their objects are served, developing such
controls for client->access providers is much more difficult.
This is not only because it requires integration into the client --
which alone makes it a difficult issue. It also requires insight into
the semantics of messages -- both requests and responses -- to make
reasonable decisions about how to act. Unfortunately, this knowledge
resides at the content provider. URIs are explicitly opaque; current
systems which derive object characteristics from them are extremely
limited, and arguably limit the functionality and extensibility of the Web.
Separately, it's very difficult to require an intermediary to
implement whatever trust model we come up with; the HTTP is already
built, and retrofitting an intermediary trust model onto a protocol
which had to have intermediaries retrofitted into it is problematic,
to say the least.
I'm still unsure about what the right thing to do here is. While it's
impossible to stop the deployment of 'processing intermediaries' by
access providers, I'm not thrilled about the encouragement of them.
Obviously, there are some legitimate functions that they provide -
such as caching (which might be thought of as a form of processing),
access control, etc. However, I get nervous when people start talking
about changing the messages themselves in-flight, without any
permission or control from either the end user or content provider.
I've seen some hand-waving about establishment of such mechanisms,
but not much more (pls correct me if I'm missing something).
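For what it's worth, HTTP/1.1 (RFC 2616) does already define one narrow hook
of this kind: the Cache-Control: no-transform directive, which lets an origin
server forbid intermediaries from modifying the message body. It's
all-or-nothing, which illustrates how coarse the existing controls are. A
minimal sketch of an intermediary honoring it (the directive parsing here is
simplified relative to the full Cache-Control grammar):

```python
# Sketch of an intermediary checking HTTP/1.1's no-transform
# directive (RFC 2616) before applying any in-flight modification.
# Parsing is deliberately simplified for illustration.

def may_transform(response_headers):
    """Return False if the origin server forbade transformation."""
    cc = response_headers.get("Cache-Control", "")
    directives = [d.strip().lower() for d in cc.split(",")]
    return "no-transform" not in directives
```

Anything richer than this binary opt-out -- e.g. per-service permission from
the content provider or end user -- is exactly the mechanism that, as noted
above, exists only as hand-waving so far.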
Ultimately, I'm concerned that the standardization of processing
intermediaries in the HTTP may restrict the usefulness of the Web,
rather than supplement it; application designers and users doing
things that the intermediary service designers didn't foresee will
have to deal with crossing an ill-defined boundary between the
"normal" web and the "adapted" web.
( If that doesn't get some discussion going, what will? ;)
P.S. - you may want to cross-post this to the CDI group to get
discussion of question #5.
Mark Nottingham, Research Scientist
Akamai Technologies (San Mateo, CA)