I like your approach, it does make a lot of sense. However, it brings up
a point which I'd not seen before: every callout server must be able to
act as a client to other callout servers too. So an implementor cannot
just build the server part. What the current draft implies is that every
single vendor has to include a client in order to comply with the spec.
If a vendor does not, there is currently no alternative way to add more
services. The question is, how realistic is it to assume that every
vendor will implement the client piece? Wouldn't it be easier to require
that the intermediary supports pipelining requests, even if this
introduces more overhead?
I see the benefits of "chaining" callout servers and handling response
caching hop-by-hop; however, I doubt it's realistic to assume this will
work. Most likely a vendor will implement what is needed to run their
application and won't care about the ability to talk to other callout
servers (until, of course, there is enough demand from customers).

What do you think?
From: Mark Nottingham [mailto:mnot(_at_)mnot(_dot_)net]
Sent: Wednesday, 20. March 2002 18:55
To: Frank Berzau
Cc: 'OPES list'; 'Markus Hofmann'
Subject: Re: comments on draft-dracinschi-opes-callout-requirements-00
I would imagine that caching must be possible at each stage of the
processing chain, so that you can compose things like
svc1 -> cache -> svc2 -> cache
svc1-> svc2 -> cache
depending on the particular situation.
This makes the cache effectively a local service itself, rather than an
assumed part of the OPES intermediary. IMO that approach is more
flexible and more clearly defined. It would require that a caching
model be specified, but would not limit us to one.
The requirements below lead to the need to compose a cache key based
upon a fairly large variety of input. If the approach above is taken,
it would best be communicated to the cache as a parameter. This raises
the need for out-parameters from callouts. E.g.,

service chain: svc1 -> cache
in parameters: foo=boo cachekey=abc,def
out parameters: cachekey=abc,def

All that would be needed, then, would be a way to hook up the
out-parameters from svc1 to the in-parameters of svc2
(probably with some sort of variable, although an approach using URIs
and/or XML IDREFs might be possible).
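To make the wiring concrete, here is a minimal sketch of that idea: the cache is just another service stage in the chain, and each stage's out-parameters become the in-parameters of the next stage. All function and parameter names here are illustrative assumptions, not taken from the draft or any OPES protocol.

```python
# Sketch: a processing chain where out-parameters feed the next stage's
# in-parameters, and the cache is a local service keyed by "cachekey".
# Names (svc1, cache_stage, run_chain) are hypothetical.

def svc1(message, params):
    """A callout service that transforms the message and emits a
    cache key as an out-parameter."""
    transformed = message.upper()  # stand-in for real processing
    return transformed, {"cachekey": params.get("cachekey", "abc,def")}

def cache_stage(message, params, store):
    """The cache as a local service: look up / store the message
    under whatever key the previous stage handed over."""
    key = params["cachekey"]
    if key in store:
        return store[key], {}
    store[key] = message
    return message, {}

def run_chain(message, in_params, stages, store):
    """Run stages in order, merging each stage's out-parameters
    into the parameter set seen by later stages."""
    params = dict(in_params)
    for stage in stages:
        if stage is cache_stage:
            message, out = stage(message, params, store)
        else:
            message, out = stage(message, params)
        params.update(out)  # out-params become downstream in-params
    return message

store = {}
result = run_chain("hello", {"foo": "boo", "cachekey": "abc,def"},
                   [svc1, cache_stage], store)
```

A second pass through the chain with the same cache key would then be served from `store`, which is what makes compositions like svc1 -> cache -> svc2 -> cache expressible without baking the cache into the intermediary.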
On Wednesday, March 20, 2002, at 02:37 AM, Frank Berzau wrote:
Responses could be cacheable for just a subset of requests. An example
could be a URL filtering service that returns different responses for
specific groups of users. This becomes an important aspect in
section 3.2.6. If a callout server aggregates multiple services, it
must not only use the earliest expiration, it must also ensure the
most specific caching rule is applied. In some cases there may be
situations where such diverse cacheability rules cannot be
consolidated, e.g. Service 1 does language translation and it flags
all responses as cacheable per language (as found in the User-Agent
header). Service 2 does URL filtering and flags responses as cacheable
per LDAP user group. How should the callout server aggregate this?
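One conceivable aggregation rule for the scenario above is: take the earliest expiration across all services, and make the cached entry vary on the union of every dimension any service flagged. This is only a sketch of one possible answer to Frank's question, not something the draft specifies; the data structures are assumptions.

```python
# Sketch: aggregate cacheability metadata from several callout services.
# Each rule is (expiry_in_seconds, set_of_vary_dimensions); both the
# representation and the aggregation policy are hypothetical.

def aggregate_cacheability(rules):
    """Combine per-service rules: earliest expiry wins, and the entry
    must vary on the union of all flagged dimensions (i.e. the most
    specific combined rule)."""
    earliest = min(expiry for expiry, _ in rules)
    vary = set()
    for _, dims in rules:
        vary |= dims
    return earliest, vary

# Service 1 (translation): cacheable per language, say for 1 hour.
# Service 2 (URL filter): cacheable per LDAP group, say for 10 minutes.
expiry, vary = aggregate_cacheability([
    (3600, {"language"}),
    (600, {"ldap-group"}),
])
# expiry == 600, vary == {"language", "ldap-group"}
```

Note the cost this illustrates: varying on the union means one cache entry per language-and-group combination, which hints at why consolidating diverse cacheability rules may not always be practical.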
[mailto:owner-ietf-openproxy(_at_)mail(_dot_)imc(_dot_)org] On Behalf Of
Sent: Friday, 01. March 2002 21:16
To: Mark Nottingham
Cc: OPES list
Subject: Re: comments on draft-dracinschi-opes-callout-requirements-00
Mark Nottingham wrote:
Perhaps it would be helpful to clarify the interactions between a)
the encapsulated protocol b) the OPES service and c) the callout
protocol. Caching touches all of these in different ways, and that
might be causing the confusion here.
That's a good idea, because there are quite some interactions between
the various cacheability rules (e.g. the possibly modified response
from a callout server must NOT be cached longer than indicated in the
original response from the origin server, etc.). We struggled quite a
bit ourselves when discussing this, so we need some clear