wow...I'm not sure we fundamentally disagree about the outcome,
but this seems to contain several different kinds of confusion.
1. "intermediary": Lately it has become fashionable to use this term
as a catch-all to describe any network element between the endpoints,
acting at any layer of the protocol stack. Such usage tends to
gloss over several important issues, such as whether the intermediary
introduces layering violations (and the problems that come with doing so),
or whether the intermediary is acting with explicit consent of at least
one of the endpoints (and the associated issues with control of content).
So we shouldn't treat all intermediaries as if they are equally useful.
2. The fact that there are emerging standards (by the IETF, W3C, or anyone else)
emphatically does not demonstrate requirements. Old issues of trade
magazines are full of references to standardization efforts that went
nowhere - often they were trying to solve the wrong problem, because
their work was obsolete before they started, or simply because they
didn't understand the problem they were trying to solve.
3. Similarly, the fact that the "market" has chosen a particular path does
not justify that choice as good engineering, and the IETF does not have
a responsibility to endorse bad engineering. Deployment in some form
is a necessary condition for an engineering solution to be successful, but
"market acceptance" is not a sufficient condition for engineering success.
The market is quite often wrong.
I don't doubt that it would be useful to have a standard mechanism for
modifying content (even in fairly arbitrary ways) before it is transmitted
to an audience, or to modify it on receipt by a member of that audience.
And you cite a number of valuable services that could take advantage of
such a mechanism. But the charter currently under discussion is a lot
more open-ended than that, and that's part of why it's controversial.
This debate over OPES appears to have a blend of technology religion,
business interest, and even some hand-waving or other failures to
communicate at its core. I, too, can quibble with the proposed charter, but
there is a need for a standard mechanism for calling services that operate
on HTTP (and possibly RTP) messages at an [application-level] intermediary.
In general, the technical and industry requirements can be demonstrated by
the fact that there are emerging W3C standards and implementations for
constructing distributed applications by calling web services and,
specifically with respect to edge services, by the fact that there are
edge-of-network implementations that provide various transform services,
e.g., virus scan, language translation, and, yes, content adaptation based
on device and network capabilities. These aren't layer violations; some of
these applications are, however, aware of the protocol layers, much the way a
management application can be layer-aware. These intermediaries exist and
they will continue to do so. The only question is whether there will be standards.
The requirement for doing this at an application intermediary in a standard
way is the usual requirement for standards: the industry serves its
customers best when products interoperate. Standards for selecting the
services to be called (rules) and how to call them (service bindings and
protocols) will provide interoperability in constructing those distributed
applications. One of the areas for investigation in the working group
(assuming it gets chartered) is whether we need something unique to edge
services (a la ICAP) or whether something more general is appropriate (a la
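To make the "rules" part of this concrete, here is a minimal sketch of rule-based service selection at an application-level intermediary. The Rule fields, service URIs, and the owner/authorization scheme are all hypothetical illustrations, not taken from ICAP or any OPES draft; the point is only that interoperability needs a standard way to express "which services fire for which messages, and on whose authority":

```python
# Hypothetical sketch: selecting which remote services an intermediary
# invokes for a given HTTP message. All names here are illustrative,
# not from any standard or draft.

from dataclasses import dataclass

@dataclass
class Rule:
    owner: str          # which endpoint authorized this rule (consent)
    content_type: str   # match against the message's Content-Type
    service: str        # service binding to invoke when the rule matches

RULES = [
    Rule(owner="origin-server", content_type="text/html",
         service="icap://services.example.net/virus-scan"),
    Rule(owner="end-user", content_type="text/html",
         service="icap://services.example.net/translate"),
]

def select_services(content_type: str, authorized_owners: set) -> list:
    """Return services whose rules match this message AND whose rule
    owners have explicitly authorized processing (one of the consent
    issues raised about intermediaries above)."""
    return [r.service for r in RULES
            if r.content_type == content_type and r.owner in authorized_owners]
```

For example, `select_services("text/html", {"origin-server"})` would invoke only the virus-scan service, because the translation rule's owner has not consented; without a standard for expressing these rules and bindings, each vendor's intermediary answers that question differently.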
I, personally, don't think the applications need to justify themselves
against a model; rather, models may provide insights in the presence of
reality. The problem is fundamentally one of scaling. How do we distribute
the work load, provide an improved user experience and do so with an
authenticated and authorized set of rules? If OPES can help answer those
questions, I'm not sure what the problem is. There may well be "better"
ways of building these distributed applications, but the market has chosen
these application intermediaries as an evolutionary step. It would be a
mistake for the IETF to abdicate its responsibility here.