It does make sense to have a generic "protocol for encapsulating
another protocol". At work, SOAP was suggested. A few of
us have discussed defining a more minimalist approach.
ICAP comes from a different design philosophy, one that says
that some protocols are best encapsulated within themselves.
HTTP with header extensions is almost enough. ICAP 0.9
could be implemented within an HTTP server; ICAP 1.0 steps
a hairsbreadth over that line and requires minor server
changes (newer servers may have enough power to accommodate
things like ICAP).
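To make the "almost HTTP" point concrete, here's a small sketch of my own (not from the spec text quoted in this thread; the host and URI are made up) showing how an ICAP REQMOD message is just an HTTP-style header block whose body carries another HTTP message, located by byte offsets in the Encapsulated header:

```python
# Illustrative sketch: build an ICAP REQMOD request that encapsulates
# an HTTP request. The Encapsulated header gives byte offsets of the
# inner message's parts within the ICAP body.

def build_reqmod(icap_uri, http_request):
    # req-hdr starts at offset 0; null-body marks the end offset,
    # since this example has no HTTP request body.
    encapsulated = "req-hdr=0, null-body=%d" % len(http_request)
    icap_headers = (
        "REQMOD %s ICAP/1.0\r\n"
        "Host: icap.example.net\r\n"
        "Encapsulated: %s\r\n"
        "\r\n" % (icap_uri, encapsulated)
    )
    return icap_headers + http_request

http_req = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "\r\n"
)
msg = build_reqmod("icap://icap.example.net/reqmod", http_req)
print(msg)
```

Everything outside the Encapsulated header and the REQMOD method is plain HTTP/1.x framing, which is why an HTTP server needs only minor changes to speak it.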
Should ICAP be extended to handle more protocols and become
the "protocol for encapsulating protocols"? I don't think so, because
it has been nicely tuned to HTTP.
If a protocol has enough capabilities for self-reference, you can
build a "proxy plane" version of it out of existing machinery. That's
usually good, especially if you don't have to marshal the data
twice. On the other hand, from the proxy's point of view, it's
better to have one set of encapsulation routines and use those
for all protocols.
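The "one set of encapsulation routines" alternative can be sketched as a protocol-neutral envelope; this is a hypothetical illustration of mine (not any protocol proposed in this thread) that labels and length-prefixes an inner message so the proxy marshals every protocol the same way:

```python
# Hypothetical generic envelope: 1-byte name length, protocol name,
# 4-byte big-endian payload length, then the opaque inner message.
import struct

def wrap(protocol_name, payload):
    name = protocol_name.encode("ascii")
    return (struct.pack("!B", len(name)) + name
            + struct.pack("!I", len(payload)) + payload)

def unwrap(blob):
    name_len = blob[0]
    name = blob[1:1 + name_len].decode("ascii")
    (body_len,) = struct.unpack("!I", blob[1 + name_len:5 + name_len])
    body = blob[5 + name_len:5 + name_len + body_len]
    return name, body

env = wrap("smtp", b"MAIL FROM:<a@example.org>\r\n")
name, body = unwrap(env)
print(name, len(body))
```

Note that the envelope itself carries nothing protocol-specific beyond a name, which is exactly the tension the next paragraph raises: real deployments usually need more than an opaque payload.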
Another design consideration is how much information the
encapsulation needs to carry about the inner material. If the
naming scheme and other protocol dependent parameters
must be part of the encapsulation scheme, then the machinery
for handling the encapsulation may have to recapitulate much
of the original protocol anyway.
For something as simple as SMTP, perhaps either approach
is viable. For RTSP and its associated RTP streams, it's
much less clear that a generic encapsulation could avoid
recapitulating most of the protocol.
>>> "Wilbert de Graaf" <wilbertdg@hetnet.nl> 06/08/01 07:59AM >>>
Going through the ICAP specification, I wondered why ICAP is limited to
HTTP. I understand its focus is on HTTP, but I think it's useful for other
protocols as well, for instance NNTP and SMTP.
Would it make sense to incorporate these other protocols as well?
Btw, I do know that SMTP is hardly found 'on the edge', but ICAP could
definitely be useful there (e.g. sieve, virus scanning, …).