ietf-openproxy

RE: transfer- and content-encoding

2003-10-14 03:44:33

Hi,


...

There are quite a few sane options available to us, depending on which
encodings have to be supported and on what to do with custom
encodings that an OCP agent does not support. Given your feedback and
the multitude of options we face, I propose the following:

      0) Do not negotiate Transfer-Encodings at all.

      1) An OCP agent sending data MUST remove all
         transfer encodings it supports. If any encodings remain, an
         OCP agent sending data MUST specify remaining encodings
         using the Transfer-Encoding parameter of a DUM
         OCP message.

      2) If an OCP agent receives a Transfer-Encoding parameter
         indicating an unsupported encoding, it MAY terminate
         the corresponding OCP transaction.
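
Rules 1) and 2) could be sketched roughly as follows. This is only an
illustration of the proposed behavior, not spec text: the names
SUPPORTED_CODINGS, decode, prepare_dum, and on_dum are my own
assumptions, and a real agent would of course stream rather than
operate on a complete body.

```python
import gzip

SUPPORTED_CODINGS = {"gzip"}  # codings this hypothetical agent can remove

def decode(body, coding):
    if coding == "gzip":
        return gzip.decompress(body)
    raise ValueError("unsupported coding: " + coding)

def prepare_dum(body, encodings):
    """Rule 1: strip every supported encoding (outermost coding is
    listed last, so we pop from the end), then announce whatever
    remains via the Transfer-Encoding parameter of the DUM message."""
    remaining = list(encodings)
    while remaining and remaining[-1] in SUPPORTED_CODINGS:
        body = decode(body, remaining.pop())
    params = {"Transfer-Encoding": remaining} if remaining else {}
    return body, params

def on_dum(params):
    """Rule 2: a receiving agent MAY terminate the transaction when
    an announced encoding is one it does not support."""
    remaining = params.get("Transfer-Encoding", [])
    if any(c not in SUPPORTED_CODINGS for c in remaining):
        return "terminate"
    return "continue"
```

So a gzip-encoded body would arrive decoded with no parameter at all,
while a custom coding would survive and trigger rule 2 on the far side.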

Do you think the above rules create any interoperability problems that
more complex rules can eliminate?

Can we think of a realistic-enough example where removing supported
encodings is bad for performance reasons? Note that an agent may be
_configured_ to leave certain encodings in place -- that qualifies as
lack of support for their removal. Perhaps the above "MUST remove" can
be rephrased to better reflect this caveat?

Do the above rules still allow a callout service to form a chunked
HTTP response in order to, for example, indicate service progress on a
persistent HTTP/1.1 connection? Or should we not expect any processor
to be able to handle chunked responses from a callout service? Should
we add the following rule?

      1a) OPES processors MUST support chunked transfer coding
          when handling data sent by an OCP server.
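
For reference, the chunked transfer coding that rule 1a) would require
processors to handle is not hard to parse. Here is a minimal decoder
sketch (the function name dechunk is mine; a real processor would parse
incrementally and honor trailers, which this sketch ignores):

```python
def dechunk(data: bytes) -> bytes:
    """Minimal decoder for HTTP/1.1 chunked transfer coding:
    hex chunk-size line (optionally with ";ext" extensions), chunk
    data, CRLF, repeated until a zero-size chunk terminates the body.
    Trailer headers after the last chunk are ignored."""
    out, pos = b"", 0
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol].split(b";")[0], 16)  # drop extensions
        if size == 0:
            return out
        start = eol + 2                  # skip the CRLF after the size
        out += data[start:start + size]
        pos = start + size + 2           # skip chunk data and its CRLF
```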


I think removal of transfer encodings by the OPES processor will work and
will probably simplify everything.
If we do this, then having the callout service add chunked coding seems
odd to me.
Forcing an OCP client to handle chunked transfer coding is maybe unfair;
if it is a simple HTTP/1.0 proxy that is so far unaware of chunked
transfer coding, why should it support it? It would need to remove the
coding again before sending on the HTTP path, and it is technically not
needed. In ICAP/1.0 we need chunked transfer coding to track the message
body length; in OCP we do not.
So there was a reason to force an HTTP/1.0 ICAP client to support chunked
coding on the ICAP path, but there is little reason to do this for OCP.

The motivation you give for doing this is very important, though.
Let us for now look at the typical case where the OCP client is an
HTTP/1.1 proxy, talks to an HTTP/1.1 client, and can therefore use
chunked transfer coding.
Some callout services modify the HTTP body and do not know at
header-processing time how long the content will be.
In terms of low latency and the ability to keep connections persistent,
it is best if the OPES processor then uses chunked transfer coding when
talking to the HTTP client.
I am concerned that many OCP client implementations will not implement
this. Callout services that do this kind of content modification are the
more reliable place to introduce that transfer coding, unless...

Unless we find another good way to highlight this aspect in the specs.
This reminds me that we had parts of this discussion in early June.
I remember that we agreed that the OPES processor is responsible for
correct headers and that we can use the sizep parameter to transmit a
known content length.
We ended up with two options, as I remember:

    i) The OPES processor MUST ignore the Content-Length header
       and, depending on the existence of the sizep parameter,
          - adjust the Content-Length header
          - introduce chunked transfer coding
          - collect all data before sending on
          - close the connection at the end
       We can now add: it MUST also ignore the Transfer-Encoding header,
       because OCP messages do not have transfer encodings.
       If we document this aspect and make ignoring the Content-Length
       header a MUST, OPES processors are forced to deal with this problem
       and are likely to find a good or acceptable implementation for it.

   ii) Add a data-you-check message which would then mean that the headers
       sent with a data-use-mine message are correct, including
       Content-Length and Transfer-Encoding.
       I do not like this so much, because it puts OPES processors back in
       the dilemma of trusting DUM messages while still being responsible
       for correct headers. And it only half-solves the Transfer-Encoding
       problem (callout services may still introduce one).

So, here is my proposal for today (sorry for changing my mind quickly ;-).
It tries to combine your proposal with option i) above.

        0) Do not negotiate Transfer-Encodings at all.
 
        1) An OCP agent sending data MUST remove all
           transfer encodings it supports. If any encodings remain, an
           OCP agent sending data MUST specify remaining encodings
           using the Transfer-Encoding parameter of a DUM
           OCP message.
           A new transfer encoding MUST NOT be applied to a message.
 
        2) If an OCP agent receives a Transfer-Encoding parameter
           indicating an unsupported encoding, it MAY terminate
           the corresponding OCP transaction.

        3) If the Content-Length is known by the callout service, a sizep
           parameter MUST be added to all DUM messages.
           An OPES processor MUST ignore the Content-Length and
           Transfer-Encoding HTTP headers.
           XXX: Document the options it has:
            - adjust the Content-Length header
            - introduce chunked transfer coding
            - collect all data before sending on
            - close the connection at the end
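
The choice among the four options under rule 3) could be sketched as a
simple decision function. This is purely illustrative: the function name
delivery_strategy and the inputs http_version and keep_alive are my own
assumptions about what a processor would consult, not spec-defined names.

```python
def delivery_strategy(sizep, http_version, keep_alive):
    """Pick how the OPES processor delimits the body it forwards,
    having ignored the original Content-Length/Transfer-Encoding."""
    if sizep is not None:
        return "set-content-length"   # adjust the Content-Length header
    if http_version >= (1, 1):
        return "chunked"              # introduce chunked transfer coding
    if not keep_alive:
        return "close-at-end"         # close the connection at the end
    return "buffer-all"               # collect all data before sending on
```

So a processor talking HTTP/1.1 downstream would typically chunk when
sizep is absent, while an HTTP/1.0 one must buffer or give up persistence.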


Regards
Martin