ietf-openproxy

RE: OPES protocol, pre-draft

2003-03-18 12:52:11

Martin, looking at your reaction and other people's, I agree
that we should keep the protocol flexible.

The main change that could give a serious advantage is a well-defined
implementation level with a fixed "copy always" policy. An OPES processor
based on a web cache is a very real possibility, and at handshake this
could be a simple notification.

I agree that from the protocol point of view the same effect can be achieved
within the generic framework - just always ask/notify about keeping the
copy. But the ability to implement a device that does not support that
flexibility could be very helpful, and it requires notification at
handshake: each side should know its partner's capabilities in advance. This
approach may also benefit L7-type devices - the callout server will know about
the OPES processor's limitations and adapt accordingly.
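
To make this concrete, here is a minimal sketch (Python, with entirely
hypothetical message and field names - the pre-draft defines none of
these) of a handshake where the copy policy is announced as a simple
notification rather than negotiated:

    # Hypothetical capability notification at handshake. All field
    # names are illustrative only, not taken from the pre-draft.
    def make_handshake(copy_policy):
        # "always"  -> cache-based processor with a fixed "copy always"
        #              policy; the [copied] flag is implicitly set on
        #              every message.
        # "dynamic" -> processor decides per message and signals its
        #              decision via the [copied] flag.
        assert copy_policy in ("always", "dynamic")
        return {"msg": "handshake",
                "capabilities": {"copy-policy": copy_policy}}

    # A cache-based OPES processor would simply notify:
    print(make_handshake("always"))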

Oskar

-----Original Message-----
From: owner-ietf-openproxy@mail.imc.org
[mailto:owner-ietf-openproxy@mail.imc.org] On Behalf Of Markus Hofmann
Sent: Tuesday, March 18, 2003 12:32 PM
To: ietf-openproxy@imc.org
Subject: Re: OPES protocol, pre-draft



Oskar,

> I think we may have different implementations in mind. I am
> looking at a web proxy server (surrogate, I just do not like
> this word) extended with OPES capabilities. For this model,
> storing all incoming data is not a problem - disks are
> large and cheap, and storing data is very natural behavior
> for a system built around a cache engine.

Even in this case you cannot assume you will always be able to buffer the
entire object - even the largest caches run out of disk space at some
point and need to do some sort of cache replacement to conserve disk
space. Even more, in the case of streaming caches you might end up doing
prefix caching, i.e. storing only the prefix of an object rather than
the entire object.

The point is that we must not require a specific implementation, but
that our protocol should support various forms of implementation. I
agree that buffering capacity is of less importance in the model
you've described above, but the protocol should also support other
forms, and as such I still see value in allowing the OPES processor to
not buffer the entire object (meaning there seems to be value in
having the [copied] flag).
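
As a rough illustration of what the per-message mechanism buys us -
again with made-up message framing, only the [copied] flag itself
comes from the pre-draft - the callout server can tailor its response
to whether the processor actually kept a copy:

    # The processor tags each application message with the [copied]
    # flag, set only when it actually retained the data.
    def make_data_message(payload, kept_copy):
        return {"msg": "app-data", "copied": kept_copy, "payload": payload}

    # The server decides what it must send back based on that flag.
    def callout_server_handle(msg):
        if msg["copied"]:
            # Processor kept the original: a short "use your copy"
            # response suffices when the data is unmodified.
            return {"msg": "use-copy"}
        # Processor did not buffer: the server must return every byte
        # the processor is expected to forward downstream.
        return {"msg": "replace", "payload": msg["payload"]}

    print(callout_server_handle(make_data_message(b"<html>...</html>", True)))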

> Correct me if I'm wrong, but it looks like you have
> in mind something like a layer 7 switch. Such a device may have
> better throughput but very limited storage capabilities.
> The main differentiator is the ability to keep data on disk. Hybrid
> devices are also possible, e.g. a solid-state cache. A more interesting
> hybrid is an L7-based OPES processor combined with a cache farm.

Yup, that's one possible implementation form our protocol should support.

> I suppose that the buffering policy will depend mostly on the device
> type. An OPES processor with a disk will store all intermediate data
> and use its caching capabilities to enhance overall performance.
> An L7-switch-based device will tend to be very conservative about
> storage use and may need to exploit protocol capabilities for
> copy control.

Yup, and that's why I believe the callout protocol should *not*
require the OPES processor to always store the entire object.

> To support all these needs we may do several things:
>
> 1. Dynamic (per-message) control, as in the current proposal.
> 2. A stateful protocol with the storage policy negotiated at handshake.
> 3. Different levels of protocol implementation, with device capabilities
>    announced (but not negotiated) at handshake.
>
> I think the protocol should support all three policies. This could
> significantly simplify the implementation of cache-based OPES processors.

I agree that the protocol should support different forms of
implementation, but I'm not yet convinced that policies need to be
negotiated "at handshake". With the current proposal, the OPES
processor can dynamically decide what to buffer and what not to,
indicating this to the callout server via the [copied] flag. This
seems pretty flexible. Is there a specific scenario that cannot be
solved with that approach?
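
For what it's worth, here is a sketch of how a callout server could
accommodate all three of the options you list - per-message control,
a negotiated policy, and an announced capability. The message shapes
are hypothetical; only the [copied] flag is from the pre-draft:

    # The server picks a storage strategy from the handshake, falling
    # back to per-message control. All field names are illustrative.
    def server_storage_strategy(handshake):
        caps = handshake.get("capabilities", {})
        if caps.get("copy-policy") == "always":
            # Option 3: capability announced, not negotiated. A
            # cache-based processor always keeps a copy, so the server
            # can skip per-message bookkeeping entirely.
            return "assume-copied"
        if "negotiated-policy" in caps:
            # Option 2: stateful protocol, policy fixed for the session.
            return caps["negotiated-policy"]
        # Option 1: no session-level policy; rely on the per-message
        # [copied] flag, as in the current proposal.
        return "per-message"

    print(server_storage_strategy({"capabilities": {"copy-policy": "always"}}))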

-Markus


