ietf-openproxy

RE : RE : RE : draft-ietf-opes-ocp-core

2003-08-28 05:09:42

Alex,
I think I wasn't clear in my previous mail. Please see my response below; I hope it
will better explain my need.

-----Original Message-----
From: Alex Rousskov [mailto:rousskov(_at_)measurement-factory(_dot_)com]
Sent: Tuesday, 26 August 2003 19:31
To: MITTIG Karel FTRD/DMI/CAE
Cc: ietf-openproxy(_at_)imc(_dot_)org
Subject: Re: RE : RE : draft-ietf-opes-ocp-core



On Tue, 26 Aug 2003, MITTIG Karel FTRD/DMI/CAE wrote:

Altering the cache control header will work, but it could lead to side
effects.

If you take the case where the service's clients are subnetworks or
enterprises with proxies, they won't be able to cache the response any
more. In the same way, a client application may respect a zero TTL too
closely and re-send requests very frequently. The result will then be an
increased load on your processor, and the benefits of caching unfiltered
content will be lost.

My interpretation of the above is "application cache controls may not be
able to express what a particular OPES system may want to express". For
example, HTTP does not have enough knobs to control caching at
intermediaries separately from caching by end clients, while your OPES
system wants to do just that. Is this interpretation correct?


Not exactly: my need is to be able to cache the result of the OCP treatment
applied, which should be independent of the application protocol and of the
application caching information. See the example below.


Can you provide a more specific example that illustrates the inability of,
say, HTTP Cache-Control modifications to achieve your goals?



Take an HTTP filtering service offered to 2 communities (for example, high
and low schools using a filtering service), each passing through the OCP
processor. The aim of the service is to filter Internet access, but with a
different level for each community. Internet content can then be divided
into 3 parts:
        - the content allowed or denied only for community 1 (say [E1])
        - the content allowed or denied only for community 2 (say [E2])
        - the content allowed or denied identically for communities 1 and 2 (say [I])
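This partition can be sketched as a policy comparison (a toy illustration;
the function and policy names are hypothetical, not anything defined by the
drafts): a URL is in [I] exactly when every community policy reaches the
same decision on it, which is what makes one modified response reusable.

```python
# Hypothetical sketch of the content partition described above.
# Each per-community policy maps a URL to an allow/deny decision;
# content on which all policies agree falls into [I], so a single
# filtered response fits every community.

def cacheable_by_processor(url, policies):
    """True if all community policies agree on this URL (the [I] part)."""
    decisions = {policy(url) for policy in policies}
    return len(decisions) == 1

# Example policies -- pure illustration, not real filtering rules.
policy1 = lambda url: "deny" if "games" in url else "allow"
policy2 = lambda url: "deny" if ("games" in url or "chat" in url) else "allow"

# "chat" pages are denied only for community 2, so they are in [E2]
# and must not be served from a shared processor cache.
```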

There are 2 ways to treat the problem:
        - The "simplest" one is to say that the services are different, so you
provide 2 OCP processors calling 2 services (or the same service with different
parameters). In this case, caching is not a problem, but this solution becomes
costly because you double the equipment.
        - The second one is to say that the service uses a policy to know which
treatment to apply depending on the client. In this case, there will be only
one processor and one service, which is far more interesting.

Normally, given that the processor can do HTTP caching, you will need to call
the service after the caching process (the "response post-cache" vectoring
point) to avoid the cache storing a modified version that should only be sent
to community 1 or 2. In this case you don't have to modify the cache control
of responses, so it works fine.

The problem with this solution is that you will query your service for each
incoming request (or rather, each outgoing response). If there are proxies in
one community, you will save the corresponding cached responses, but the gain
will be really hard to predict.

If you want to optimize, you can see that the [I] content could be cached by
the processor. Now, this part can represent a lot of queries, say x%, so
allowing the processor to cache the corresponding modified responses will
save x% of the load on your service.

But now, if you put your service before the caching process of your processor,
the processor won't be able to distinguish between [E1] and [E2], even if it is
able to store the 2 versions of the responses, because it doesn't (and doesn't
have to) know the service policy. So the service has to tell the processor not
to cache this part. One way to do this, as you suggested, is to modify the
response using protocol parameters to say it is not cacheable.
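As a rough sketch, the "modify the response" approach amounts to the service
stamping a no-store directive on the community-specific responses (the
Cache-Control and Expires header names are real HTTP/1.1; the helper itself
is hypothetical):

```python
# Hypothetical helper: the service marks community-specific ([E1]/[E2])
# responses as non-cacheable so the processor's cache will not store
# them.  HTTP/1.1 "Cache-Control: no-store" forbids storing the response
# anywhere downstream -- which is exactly the side effect described
# below: proxies inside the client communities lose the ability to
# cache, too.

def mark_uncacheable(headers):
    """Return a copy of the response headers with caching disabled."""
    headers = dict(headers)
    headers["Cache-Control"] = "no-store"
    headers.pop("Expires", None)  # drop any conflicting freshness info
    return headers

resp = {"Content-Type": "text/html",
        "Expires": "Thu, 28 Aug 2003 12:00:00 GMT"}
filtered = mark_uncacheable(resp)
```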

The drawback in this case is that it has an impact on client applications. If
one community uses proxies, they won't be able to cache the responses any more.
You will then increase the service load related to these responses to an
unknown level (depending on the original TTL and the treatment required).

So the only way I see to be sure to gain those x% (which for some services
could be around 80%) is to add a simple "is-cacheable" flag in OCP messages
(without needing extended controls like those application protocols provide).
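A toy sketch of what such a flag might look like on a message returned by a
callout service (the message layout and the "Is-Cacheable" field name here
are purely illustrative; draft-ietf-opes-ocp-core defines the real message
syntax, which does not currently include this flag):

```python
# Illustrative sketch only: a service->processor reply carrying a
# hypothetical "is-cacheable" hint, independent of any application
# protocol's own cache controls.

def build_response_message(payload, is_cacheable):
    """Serialize a toy callout reply with the hypothetical flag."""
    flag = "Is-Cacheable: %s" % ("true" if is_cacheable else "false")
    return "%s\r\n\r\n%s" % (flag, payload)

# [I] content: one filtered version fits every community, so the
# processor may cache it and skip the service on the next request.
msg = build_response_message("<html>filtered</html>", is_cacheable=True)
```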

Karel


But you're right, the problem is specific to some application protocols (and
also to some cases), so it can be addressed in application bindings.

Agreed. This becomes a to-do item for the HTTP OPES binding then.

Alex.



