Are "intermediaries" intended to include interception proxies?
I was under the impression that they were specifically out of scope, and
that the reason for that choice would also exclude other (lower layer)
intermediaries from scope. (This is related to a point you make later
regarding awareness vs. consent.)
It would be helpful if that were clarified in the charter.
It's one thing if we're just talking about explicitly configured
proxies (at the client or server end); quite another if we're talking
about proxies that intervene without a user's consent.
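The distinction can be made concrete. A minimal sketch of the explicitly
configured case using Python's standard library (the proxy address
`proxy.example:3128` is a hypothetical placeholder): here the client opts in
to the intermediary, in contrast to an interception proxy the client never
asked for and may not even know about.

```python
import urllib.request

# The client explicitly opts in to an intermediary by configuring it;
# contrast with an interception proxy inserted into the path without consent.
proxy_handler = urllib.request.ProxyHandler(
    {"http": "http://proxy.example:3128"}  # hypothetical proxy address
)
opener = urllib.request.build_opener(proxy_handler)

# Requests made through `opener` are knowingly routed via the proxy, e.g.:
# opener.open("http://example.com/")  # (not executed here)
```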
Re: "application data transported by HTTP" -
Does this mean that this group's purview is limited to things that use
the HTTP protocol?
Are we now trying to encourage people to layer applications over HTTP,
in spite of the concerns outlined in draft-moore-using-http-01.txt?
I'm not sure I understand what you're asking, Keith. Is your point that
limiting OPES to HTTP only will encourage developers to layer
things on top of HTTP?
There are two separate questions.
First, I cannot tell whether "transported by HTTP" is intended as a constraint
on the group's activities.
Second, it appears that this group's charter presumes that layering
arbitrary applications on top of HTTP is a Good Thing, and I would take
strong issue with that presumption. But if the group is really only
trying to add some flexibility to HTTP's proxy and cache mechanisms,
for traditional uses of HTTP, that seems somewhat less dangerous.
Why is it (apparently) sufficient that one of the end points "be aware"
that the data path is not transparent, rather than "consent"?
Is this group really going to legitimize arbitrary corruption of the
data stream by intermediaries which the endpoints do not control,
just so long as one of the end points is "aware" of the corruption?
My understanding was that either party would have to consent. I think this
one is just down to choice of words.
It's a very important distinction. IMHO, awareness is not enough;
explicit consent is essential.
It also seems like there should be strict constraints on when this technology
should be used. If we do this work, will we be saying that it's legitimate
for ISPs to rewrite HTTP requests or responses for better traffic
utilization?
How about to insert or delete advertisements, or to hide content that the ISP
doesn't want the user to transmit or receive?
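For concreteness, the kind of in-flight rewriting at issue can be sketched as
a toy transformation (the function `rewrite_response` and the ad markup are
hypothetical, for illustration only; a real intermediary would operate on the
byte stream between the endpoints):

```python
# Hypothetical illustration of the concern: an intermediary that neither
# endpoint controls splices an advertisement into an HTML response body.

def rewrite_response(html: str, ad_markup: str) -> str:
    """Insert ad markup just after <body>; the origin server and the
    user agent have no say in (and may be unaware of) the change."""
    marker = "<body>"
    idx = html.find(marker)
    if idx == -1:
        return html  # not HTML we recognize; pass through unchanged
    insert_at = idx + len(marker)
    return html[:insert_at] + ad_markup + html[insert_at:]

original = "<html><body><p>User content</p></body></html>"
modified = rewrite_response(original, "<div>AD</div>")
print(modified)
# → <html><body><div>AD</div><p>User content</p></body></html>
```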
I'm not sure that publishing an RFC of any kind can dictate actual
constraints on when the technology should be used, but I understand your
concern.
We have no means of enforcing such constraints, but we do sometimes impose them.
It is useful to communicate expectations about how a technology is to be used.
We are very fortunate that most ISPs have not so far seen fit to take arbitrary
liberties with network traffic, so we can still deploy some new applications.
As I said, we can't stop them. But the last thing we need to do is to encourage
ISPs to interfere with network transparency.
As for the latter point, the group is, I believe, suggesting that the case
of an ISP modifying requests/responses for those reasons is covered under
the consent of the end user, since the user has a contract with the ISP and
such practice must be documented in that contract.
So? This self-appointed "group" doesn't even represent the interests of the
diversity of protocol implementors within the IETF, much less the interests
of the Internet user.
We're definitely on dangerous ground regarding "layer > 7" stuff here.
It's far better to discuss such "stuff" explicitly than to pretend that it's
not important, or that it will take care of itself.
Even on a purely technical level, the utility of the Internet is constantly
being eroded by folks who think that it's okay to perform arbitrary operations
on third-party traffic. Rather than having a network with predictable
characteristics on which a wide variety of higher-level services can be
layered, we are getting a network which supports only a few services, and
those in an unpredictable and arbitrary manner.
Keith