ietf-openproxy

RE: transfer- and content-encoding

2003-10-13 21:59:25

On Mon, 13 Oct 2003, Robert Collins wrote:

Well the HTTP errata removed the 'identity' Transfer coding. I don't
see any reason to reinstate it.

The reason for documenting "identity" encoding tag is to express the
lack of support for identity encoding. For example, a particular
service may want to receive all data using some custom transfer
encoding. I doubt such design is worth supporting though. That is,
this reason is probably not good enough. A service that requires
custom encoding can probably use content-encoding instead or rely on
manual configuration rather than run-time negotiation.

Secondly, the prohibition against double encoding doesn't seem
required to me: Chunked has explicit rules (*) - only one
application, and always the last one. Other transfer codings may
have their own rule. As a counter example, one may wish to use a
transfer coding that provides delta-information, and use this twice
(say, working off different base objects).

This is debatable. I would say that two delta encodings with different
base objects are two different encodings. On the other hand, there is
no pressing need to prohibit encoding repetitions where they are
allowed by HTTP.

As TE is hop-by-hop in HTTP, we need to ensure that any OPES
processor's TE field passed to upstreams, and the TE field passed in
responses to clients (for them to decide on upload
Transfer-Encodings), reflect the processor's capabilities, not those
of the actual client / origin server, respectively.

The proposed TE headers are exchanged among OPES agents only and are
not passed to HTTP agents. They are OCP headers, not HTTP headers. We
cannot assume that OPES agents will control HTTP headers when adapting
HTTP payloads. This caveat is at the core of our problems here.
Consider a virus-scanning service -- it should not affect TE headers
on the "outside" wire, but it should be able to handle any common
content that the corresponding HTTP intermediary proxies. That is, OCP
agents should be able to handle a variety of common transfer encodings
without being able to affect "outside" encoding negotiations.

Once we do that, we know that an OPES processor will only receive
codings it can handle, so we can say MUST reject with a 5xx error
(sorry, too lazy to dig up the best match) on an unhandlable
transfer-coding.

HTTP proxy capabilities may be different from an attached OPES
processor capabilities (which, in turn, may be different from an
attached callout service capabilities). This is true for
Transfer-Codings and for some other features. We cannot simply assume
that an OPES processor is an HTTP proxy, even if it adapts HTTP messages.

With that in place, the interaction from processor to processor can
'trivially' follow the HTTP Transfer-coding negotiation rules. That
is, from a protocol viewpoint, all transfer-codings must be removed
and applied anew across hops. By definition - implementations can
shortcut this when a compatible transfer-coding sequence exists
across the relevant hop.

I agree with the above. However, we still need to provide a
negotiation mechanism for OPES agents to agree on the actual transfer
encoding to be used. We cannot rely exclusively on HTTP specs because
our agents, especially callout services, may not be HTTP agents. For
example, many callout services will work with message payload and
disregard any HTTP headers; those services will be very sensitive to
encoding issues; they may not, for example, support chunked encoding.
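To illustrate why such services are sensitive: even a callout service
that ignores HTTP headers must undo the chunked framing before it can
see the actual payload. A minimal sketch in Python (illustrative only;
the function name is mine and trailers are ignored):

```python
def decode_chunked(raw: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked-encoded body (trailers ignored).

    A payload-only callout service that skips HTTP headers would
    still have to perform this framing step to reach the payload.
    """
    body = b""
    pos = 0
    while True:
        # Each chunk starts with "<hex-size>[;extensions]\r\n".
        eol = raw.index(b"\r\n", pos)
        size = int(raw[pos:eol].split(b";")[0], 16)
        pos = eol + 2
        if size == 0:          # last-chunk: zero size ends the body
            break
        body += raw[pos:pos + size]
        pos += size + 2        # skip chunk data and its trailing CRLF
    return body
```

A service without even this much framing logic would see chunk sizes
and CRLFs interleaved with the data it is supposed to adapt.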

XXX: we need to document HTTP Adaptation scope. Does the draft apply
only to agents that are HTTP compliant? Or to any adaptation of
HTTP messages, including those where the service is not HTTP-aware? Or
something else?

There are quite a few sane options available to us, depending on what
encodings have to be supported and on what to do with custom
encodings that an OCP agent does not support. Given your feedback and
the multitude of options we face, I would propose the following:

        0) Do not negotiate Transfer-Encodings at all.

        1) An OCP agent sending data MUST remove all
           transfer encodings it supports. If any encodings remain, an
           OCP agent sending data MUST specify remaining encodings
           using the Transfer-Encoding parameter of a DUM
           OCP message.

        2) If an OCP agent receives Transfer-Encoding parameter
           indicating unsupported encoding, it MAY terminate
           the corresponding OCP transaction.
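As a sketch of how rules 1 and 2 might look in an implementation
(Python; all names and the supported-coding set are illustrative
assumptions, not draft text):

```python
import gzip

class TransactionTerminated(Exception):
    """Raised when an OCP transaction is aborted (rule 2)."""

SUPPORTED = {"gzip"}  # codings this agent can remove; illustrative

def decode(data: bytes, coding: str) -> bytes:
    # Only gzip is shown; a real agent would support more codings.
    if coding == "gzip":
        return gzip.decompress(data)
    raise ValueError(coding)

def prepare_dum(data: bytes, codings: list[str]):
    """Rule 1: remove every supported transfer coding before sending.

    Codings are listed in application order, so the last-applied one
    is removed first; we must stop at the first unsupported coding,
    because anything beneath it is unreachable. Returns the data and
    the codings left for the DUM Transfer-Encoding parameter.
    """
    remaining = list(codings)
    while remaining and remaining[-1] in SUPPORTED:
        data = decode(data, remaining.pop())
    return data, remaining

def check_dum(remaining: list[str]) -> None:
    """Rule 2: the receiver MAY terminate on an unsupported coding."""
    if any(c not in SUPPORTED for c in remaining):
        raise TransactionTerminated("unsupported transfer coding")
```

A sender holding gzip-encoded data would strip the coding and announce
an empty list; a receiver seeing a leftover custom coding may abort.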

Do you think the above rules create any interoperability problems that
more complex rules can eliminate?

Can we think of a realistic-enough example where removing supported
encodings is bad for performance reasons? Note that an agent may be
_configured_ to leave certain encodings -- that qualifies as lack of
support for their removal. Perhaps the above "MUST remove" can be
rephrased to better reflect this caveat?

Do the above rules still allow a callout service to form a chunked
HTTP response in order to, for example, indicate service progress on a
persistent HTTP/1.1 connection? Or should we not expect any processor
to be able to handle chunked responses from a callout service? Should
we add the following rule?

        1a) OPES processors MUST support chunked transfer coding
            when handling data sent by an OCP server.
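For concreteness, the chunked framing a processor would have to accept
under rule 1a is simple to produce; a callout service could stream
progress like this (illustrative Python, names are mine):

```python
def chunk(piece: bytes) -> bytes:
    """Frame one piece of data as a single HTTP/1.1 chunk:
    hex size, CRLF, data, CRLF."""
    return b"%x\r\n%s\r\n" % (len(piece), piece)

LAST_CHUNK = b"0\r\n\r\n"  # zero-size chunk terminates the body

def stream_progress(pieces):
    """Yield each partial result as soon as it is ready, then the
    terminator, keeping a persistent HTTP/1.1 connection usable."""
    for piece in pieces:
        yield chunk(piece)
    yield LAST_CHUNK
```

Each chunk can be flushed to the processor immediately, which is what
makes chunked coding attractive for progress indication.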

Thanks,

Alex.