
RE: Comments on ocp-00

2003-04-03 10:44:05

Here is a short summary of the pending issues on this thread.

        a) An OPES processor may be able to pre-process application
          messages (e.g., extract payload). Callout servers
          may be able to handle various kinds of application
          data (e.g., complete HTTP messages versus MIME-encoded
          payload). Thus, somebody needs to tell the OPES processor
          and the callout server which "application message"
          definition they should both use during OCP
          communications. Should OCP support auto-negotiation or
          rely on out-of-band (e.g., manual) configuration?

        b) Do we really need a special "error" flag to say
          "really bad error; you should probably wipe out all
           related state"? Or can we assign the same side effect
           to some of the result codes?

        c) An OPES processor can be [a part of] an application-level
          proxy. Can an OPES processor be a transport-level gateway
          too? For example, can an OPES processor manipulate
          raw TCP packets and care about things like
          retransmissions and ACKs?

        d) If a fragment of an application message is lost,
           [how] should the OPES processor signal that to the callout
           server? A loss can happen when adapting lossy application
           protocols (e.g., RTP).

        e) Do we need to group processing of a single application
           message together using OCP transaction concept? Should
           OCP transaction mean something else? Do we need OCP
           transactions at all?

Detailed responses are inlined below.

On Thu, 3 Apr 2003, Reinaldo Penno wrote:

Humm... I'm really not sure about that. I guess the sysadmin can
learn it, but it would be much easier and more debug-friendly (if
something goes wrong) to have capability negotiations.

IMO, auto-negotiation would be "better" for the admin if and only if
there is a single match between OPES processor and callout server
capabilities. If there is no match, auto-negotiation will not help and
may hurt (the problem will be more likely to go unnoticed until
run-time). If there are several matches, auto-negotiation may hurt
because it will often select the wrong match.
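The three cases above (no match, exactly one match, several matches) can be sketched as a capability intersection. This is a hypothetical illustration only; `negotiate` and the capability names are invented, and OCP defines no such API:

```python
# Hypothetical sketch of why auto-negotiation only clearly wins when
# exactly one match exists (function and capability names are made up).
def negotiate(processor_caps, server_caps):
    """Return an agreed capability, or None if there is no overlap."""
    matches = sorted(processor_caps & server_caps)
    if not matches:
        return None  # no overlap: negotiation fails anyway, at run-time
    # With several overlapping capabilities the protocol must pick one,
    # and an automatic pick may not be the one the admin wanted.
    return matches[0]

print(negotiate({"http-message", "mime-payload"}, {"mime-payload"}))
# -> mime-payload   (a single match: unambiguous, but also trivial to
#    configure manually)
```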

Not to mention that as OPES services become "richer", a lot of new
capabilities will appear.

Yes, but you are also implying that a single service and a single OPES
processor will possess many capabilities and that there is always a
deterministic way to auto-negotiate the best match. I doubt that is
going to happen and I doubt it is the best design.

I think our disagreement is clear. I do not have a strong opinion here
(I just would prefer to keep protocol simple if negotiation benefits
are uncertain). Let's hear other opinions. If I am the only outlier,
we will support auto-negotiation.

This is my (lean) suggestion using your definitions:

"  An OCP transaction is a logical exchange of OCP messages.
   A OCP transaction starts with a xaction-start message, followed
   by zero or more data-xxx messages, zero or more appl-xxx and ends
   with a xaction-end message. "

Here I disagree. The whole purpose of having transactions is
to group together processing of a single application message
(which is a defined term). That grouping is not strictly
necessary but it brings structure and is actually required by
the OCP requirements draft (see the "3.3 Callout
Transactions" section).

I guess this is more of a performance/device intelligence
discussion. My first point is that having to send xaction-start/stop
for every single application message is quite an overhead. Actually
this might be the biggest overhead I've ever seen. If a HTTP message
fits into a TCP packet we will have to process 2 extra packets for
every single "real" (for the lack of a better word) packet.

Not at all. The <xaction-start/stop> overhead is probably just a few
bytes (it will be known once we decide on transport binding and
encoding). These "opening" and "closing" messages will usually fit
into the same OCP/TCP packet that carries application data (OCP's
data-have message).
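To make the "few bytes" claim concrete: since OCP's transport binding and message syntax are not yet decided, here is a purely hypothetical text encoding showing that xaction-start, data-have, and xaction-end can all travel in one TCP segment around a small HTTP message:

```python
# Hypothetical OCP text framing (NOT the real encoding -- none exists
# yet). The point: the opening/closing messages add only a few dozen
# bytes and share the TCP segment with the application data.
payload = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
segment = b"".join([
    b"xaction-start 1\n",                   # open transaction 1
    b"data-have 1 0 %d\n" % len(payload),   # xaction id, offset, length
    payload,
    b"xaction-end 1\n",                     # close transaction 1
])
overhead = len(segment) - len(payload)
print(overhead, "framing bytes around", len(payload), "payload bytes")
```

Even with this naive encoding the whole transaction fits comfortably in one packet; a binary encoding would shrink the framing further.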

The other point I would like to make is that having to determine the
start/end of application messages means that the OPES processor will
need to understand all these applications, which is not necessarily
desirable.

Not exactly. Recall that it is up to the OPES processor what to call
an application message! However, I strongly believe that in most cases
the OPES processor will know the application protocol just because such
knowledge is usually required to proxy an application protocol
correctly. Moreover, parsing an application message will often be
desirable because many services are likely to be application-agnostic
(e.g., the same virus filter service can be applied to HTTP and SMTP
payloads and, ideally, it should not care about HTTP and SMTP
specifics).

Let's suppose I have a callout server that deals with content
filtering. The OPES processor gets TCP packets with HTTP in it
(fragmented or not) and ships them to the Callout server, which will
reconstruct the HTTP message if needed and send it back to the OPES
processor. The OPES processor then decapsulates the IP/TCP/HTTP from
OCP and sends it on its way.

OCP allows an OPES processor to declare a TCP packet on an HTTP
connection an "application message". Such a low-level OPES processor
will not be able to handle high-level rules and will only work with
OCP services that can handle raw TCP packets as input, of course, but
it is possible!

IMO, we may want to assume that the OPES processor is an
application-level proxy, not a transport-level gateway, but others may
feel differently. Does the architecture draft answer that question? If
we are working on an application-level proxy, then there is no such
thing for us as a "TCP packet", I guess.

The same example would apply for RTP, where I just want to send
packets to the callout server for content adaptation. I would not like
to have to implement an RTP/MPEG-1 decoder in my OPES processor to
know where application messages start/end.

(a) you do not have to because you define what the application
    message is
(b) AFAIK, RTP message boundaries can be easily determined without
    any MPEG knowledge or decoding; just like HTTP message boundaries
    can be determined without understanding the content encoding
    of the payload.
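Point (b) is easy to verify: the RTP fixed header is 12 bytes (RFC 3550) and every field in it can be read without touching the codec payload. A minimal sketch (the sample packet bytes are made up for illustration):

```python
import struct

def parse_rtp_header(packet: bytes):
    """Parse the 12-byte fixed RTP header (RFC 3550).

    No MPEG/codec knowledge is needed -- message boundaries come from
    the transport datagram, and these fields from fixed bit positions.
    """
    if len(packet) < 12:
        raise ValueError("too short for an RTP fixed header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # should be 2
        "padding": bool(b0 & 0x20),
        "cc": b0 & 0x0F,               # CSRC count
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,     # e.g., 32 = MPV (MPEG video)
        "sequence": seq,               # detects loss/reordering
        "timestamp": ts,
        "ssrc": ssrc,
    }

# A made-up packet: version 2, marker set, payload type 32, seq 7.
pkt = struct.pack("!BBHII", 0x80, 0x80 | 32, 7, 1000, 0xDEADBEEF) + b"\x00" * 4
print(parse_rtp_header(pkt)["payload_type"])  # -> 32
```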

Finally, let's suppose the first packet of a video session was lost and
I can recognize that. Since there are no retransmissions, what should I
do if I can only send whole application messages to the Callout server?
I will never see that first packet again.

I do not think there is any requirement to send whole application
messages to the Callout server. Both the OCP draft and the OCP
requirements draft speak of application message fragments. There should
be nothing important in the current OCP draft that prevents an OPES
processor from forwarding partial messages to the callout server. We do
need to document that possibility, though (and agree on what "offset"
values should be in that case).
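One way the "offset" agreement could work (hypothetical; the draft has not fixed these semantics, and `find_gaps` is an invented helper): if each fragment carries its absolute offset into the application message, the callout server can detect exactly which bytes were lost:

```python
# Hypothetical sketch: data-have fragments carry absolute offsets into
# the application message, so gaps between fragments expose lost data.
def find_gaps(fragments):
    """fragments: list of (offset, data) pairs.

    Returns the missing (start, end) byte ranges, assuming offset 0 is
    the beginning of the application message.
    """
    gaps, expected = [], 0
    for offset, data in sorted(fragments):
        if offset > expected:
            gaps.append((expected, offset))   # bytes never delivered
        expected = max(expected, offset + len(data))
    return gaps

frags = [(0, b"12345"), (10, b"67890")]       # bytes 5..10 were lost
print(find_gaps(frags))  # -> [(5, 10)]
```

The receiver can then decide per-service whether a gap is fatal or tolerable (e.g., for lossy media).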

In the case of HTTP, I would need to wait for a retransmission before
I can send anything to the callout server. And who can guarantee
that I will ever receive everything?

Not sure I follow. In the case of HTTP over TCP, you would not even
know that a packet got lost unless you sit at the TCP level (and then
HTTP becomes irrelevant)!

I'm still working out how frequently you can have more than one
application message in one transaction.

Yes, we must reach an agreement on what an OCP transaction is and
whether we need it at all. I was simply working based on the OCP
requirements draft, but I see no problem revisiting the issue now, when
we can talk specifics.

Just say "400 Bad Request"...or "400 Malformed Message"

As Oskar argued, the semantics of "400 Malformed Message" may
be different from "400 Malformed Message, and the error on
this side was so bad that you probably want to delete all
state associated with this transaction even if that state
looks valid to you". We could have a special code for the
latter, but we would need it for every error response;  that
is why Oskar proposed special *-abort messages and I
suggested a special "error" flag.

I'm not going to argue too much about this. I will just say that
other text/binary based protocols have been working for a while
without the need for an error flag. This is an infinite cycle. Say,
what if it is a really nasty error, worse than "so bad"? Should we
have a flag for that?

And if it is bad (that is, worse than 400), but not as bad as "so
bad"? You see what I'm getting at?

Yes, I do. I agree that the current solution is not perfect, but
suggest waiting for more input from Oskar.


