Alex, thank you for your reply and the pointer. I agree with you that
the use case I am interested in is not explicitly documented in the OPES
material. This gives a misleading impression, since all the examples show
the data flows returning to the first-hop data dispatcher. Restricting
data flow in this way would be inefficient for high-capacity
applications and would tend to make the first hop a bottleneck.
To be a little more specific about my use case, and to make sure I
understand your response, let me add more detail. Fundamentally, I wish
to have an "open" IETF network environment. The second requirement is
speed of adaptation processing, so I am thinking that callout will be
faster than using a proxy (maybe this is an incorrect assumption). I may
be a little confused about the distinction between a classic proxy and a
classic callout in this situation (the word "proxy" isn't even mentioned
in the OPES architecture document as a consideration). Also, I have a
requirement to adapt a large volume of content for numerous devices.
I expect that the volume of data will require a number of adaptation
(callout) processors. I estimate ten of them, and therefore I would need
load balancing as part of the solution for directing data to the callout
servers (which do the adaptation). I am looking at Figure 3 in the OPES
architecture draft and considering a load-balanced collection of ten
callout servers. As I mentioned previously, I'd like to take a
parallel pipeline approach where the adapted data from each of the OPES
callout servers is simply forwarded on, directly to the data producer or
data consumer (whatever the case may be). In addition, each of the OPES
callout servers would then send billing and trace data back to the load
balancer (or another network location).
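To make the data path I have in mind concrete, here is a very rough
Python sketch (purely illustrative; this is not OCP or any real API, and
all names in it are invented):

    import itertools

    NUM_CALLOUT_SERVERS = 10

    def adapt(message):
        # Stand-in for the real adaptation service (e.g. reformatting
        # content for a particular class of device).
        return message.upper()

    def callout_server(server_id, message, consumer, report):
        adapted = adapt(message)
        consumer(adapted)                 # forward directly downstream
        report({"server": server_id,      # only small trace/billing
                "bytes_in": len(message), # records go back upstream
                "bytes_out": len(adapted)})

    def load_balancer(messages, consumer, report):
        # Round-robin dispatch across the pool of callout servers.
        servers = itertools.cycle(range(NUM_CALLOUT_SERVERS))
        for message in messages:
            callout_server(next(servers), message, consumer, report)

    records = []
    load_balancer(["hello", "world"],
                  consumer=lambda data: print("consumer got:", data),
                  report=records.append)
    print(records)  # accounting records collected by the balancer

The point of the sketch is simply that the adapted payload never passes
back through the balancer; only the small accounting records do.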
Is it realistic to expect that this could be accomplished within the
OPES framework? In fact, a more fundamental question is whether the OPES
framework is the best way to solve this problem while maintaining an
open system; I think that is what OPES is all about.
Thanks for your interest in this use case problem and the discussion.
Regards John
Alex Rousskov wrote:
John,
As far as I can see, your use case is 100% within the OPES
scope. However, you are probably thinking at the protocol level and,
hence, considering the OCP protocol and not just OPES in general. If that
is the case, please read on.
Your use case is essentially that of two application proxies using OCP
between each other: the data is pipelined via OCP from one application
hop to another, possibly with some other data returning to the first
hop. This use case has been discussed, but I do not think it has been
explicitly documented.
The OCP protocol can be used to do what you want. You will need to
document and negotiate an OCP Core profile that specifies the details,
including, as an option, sending trace data and billing information back
to the first proxy. While doing that, you will need to be careful not
to be confused by terminology that targets classic "callout" rather
than classic "proxying" use cases.
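Just to illustrate the kind of details such a profile would have to pin
down, here is a toy sketch in Python; every name in it is invented for
illustration and none of it is OCP syntax:

    # Hypothetical profile description -- invented names only.
    PROPOSED_PROFILE = {
        "profile-uri": "http://example.com/ocp/proxy-pipeline",
        "forward-adapted-data": "downstream",          # do not echo data back
        "return-to-originator": ["trace", "billing"],  # optional accounting
    }

    def negotiate(offer, uris_supported_by_callout_server):
        # Toy negotiation: the callout server accepts the offer only if
        # it recognizes the proposed profile URI.
        if offer["profile-uri"] in uris_supported_by_callout_server:
            return offer
        return None

    agreed = negotiate(PROPOSED_PROFILE,
                       {"http://example.com/ocp/proxy-pipeline"})
    print("agreed profile:", agreed)

The real profile would, of course, define such options in OCP terms; the
sketch only shows which decisions the profile has to capture.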
The OCP protocol allows you to "pipeline" application messages
very efficiently. It has a few features targeted at classic "callout"
use cases, such as the data preservation optimization. You will not
need or use those.
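Ignoring OCP wire syntax entirely, the basic pipelining idea is that
each piece of an application message can be adapted and forwarded as
soon as it arrives, rather than buffered until the whole message is
available. A minimal Python sketch of that behaviour (all names
invented):

    def incoming_chunks():
        # Stand-in for application data arriving piece by piece.
        yield from [b"first ", b"second ", b"third"]

    def adapt_chunk(chunk):
        # Placeholder per-chunk adaptation.
        return chunk.decode().upper().encode()

    def pipeline(chunks, send_downstream):
        # Forward each adapted chunk immediately instead of waiting for
        # the end of the message, keeping latency and buffering low.
        for chunk in chunks:
            send_downstream(adapt_chunk(chunk))

    pipeline(incoming_chunks(), lambda chunk: print("forwarded:", chunk))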
You can take OCP Core and document an application profile that
changes the OCP focus from callout to proxying. I cannot say whether OCP
is the best protocol for your needs, since I do not know the details of
your environment. It may be a good candidate. Another option would be
to use the native/original application protocol in one direction and
return tracing/billing data via some other means.
Please also see the following related message:
http://www.imc.org/ietf-openproxy/mail-archive/msg02830.html
It would be great if you could contribute your use case (and
your OCP profile, if any) to the OPES Framework. Please keep us posted on
your progress, and do not hesitate to discuss any related issues on
this list.
Thanks,
Alex.
On Wed, 7 Jan 2004, John G. Waclawsky wrote:
I have been looking over the OPES drafts and trying to understand
the efficiency of a distributed OPES framework with a large volume
of activity (and also trying to gauge how useful it would be for
time-dependent service execution activities). Consider an OPES
service where application data is transformed (or adapted) in some
way. My question is: if I send the data flow to a callout server to
perform some OPES service task (or to a string of callout servers),
does the data always have to return to the data dispatcher at the
OPES data processor that directed the flow to the callout server in
the first place? (An ancillary question: can just trace data and
billing information be sent back?) Basically, will an OPES
framework allow the use of a pipelining approach where, once the
service is complete at the callout server, the adapted application
data can be forwarded directly to the data producer or data consumer
(whatever the case may be)? Any information, advice or counsel
would be greatly appreciated. Thanks. Regards John