Defining it this way should guarantee a very strict and clear protocol
message syntax that avoids complicated phrases like "after sending this,
the client has to wait until blah blah", and this should then have a
positive effect on protocol implementations, preventing messages on an
open persistent connection from running out of sync.
"out-of-sync"? I'm not sure what the problem is. Could you elaborate,
because it's important that we all understand what the requirements mean.
By "out-of-sync" I mean that either no more requests can be sent on an open
connection because client and server wait endlessly for an event that the
other peer will not trigger (again), or the connection is shut down because
one peer detected a syntax error by peeking into the wrong part of the data.
I know from ICAP that these things could happen. Two examples:
Due to the preview feature and the 204 response, an ICAP server may want to
stop a message transaction early: it sends the 204 response and is prepared
for the next request on this connection. Unfortunately it was too lazy to
patiently wait for and acknowledge the rest of the data from the first
message, which it no longer cares about but which the ICAP client is still
forwarding. The ICAP server then tries to interpret that later part of the
message data as the next ICAP request header and fails: the server is no
longer in sync.
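The fix for this first failure mode can be sketched as follows. This is a
minimal illustration, not ICAP implementation code: `drain_remaining_chunks`
is a hypothetical helper that simply consumes the rest of the chunked body
after the early 204, so the server's next read lands exactly on the next
request line.

```python
import io

def drain_remaining_chunks(conn_file):
    """After sending an early 204, consume the rest of the chunked body
    that the client is still forwarding, so that the next read really
    starts at the next request line."""
    while True:
        size_line = conn_file.readline()            # e.g. b"5\r\n"
        chunk_size = int(size_line.split(b";")[0], 16)
        if chunk_size == 0:
            conn_file.readline()                    # CRLF terminating the body
            return
        conn_file.read(chunk_size)                  # discard chunk data
        conn_file.readline()                        # CRLF after chunk data

# A lazy server skips the drain and misreads b"hello" as a request header;
# a patient one ends up exactly at the next request:
wire = io.BytesIO(b"5\r\nhello\r\n0\r\n\r\nREQMOD icap://example.com/ ICAP/1.0\r\n")
drain_remaining_chunks(wire)
next_request = wire.readline()
```

The point is only that "stop caring about a message" and "stop reading a
message" are different things; the connection stays in sync only if the
server keeps reading to the body's end.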
Or another example from ICAP:
Due to the slightly complicated "ieof extension", which indicates that a
message is not larger than the preview that was sent, an ICAP server could be
buggy in a way that it waits for more data that is not coming. ICAP client
and server then wait endlessly for each other. I saw an implementation where
this happened only in the case where the HTTP body size was exactly the size
of the preview.
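The second failure mode comes down to ignoring the end-of-message marker on
the preview's zero chunk. Here is a sketch, assuming the chunk-extension
syntax of RFC 3507 (where the extension is spelled `ieof`); `read_preview`
is a hypothetical helper, not part of any real ICAP library.

```python
import io

def read_preview(conn_file):
    """Read the preview part of a chunked message body.
    Returns (data, complete): complete is True when the zero chunk
    carries the ieof extension, i.e. the whole body fit into the
    preview and no further body data will arrive."""
    data = b""
    while True:
        parts = conn_file.readline().split(b";")
        chunk_size = int(parts[0], 16)
        if chunk_size == 0:
            complete = any(p.strip().lower().startswith(b"ieof")
                           for p in parts[1:])
            conn_file.readline()                    # final CRLF
            return data, complete
        data += conn_file.read(chunk_size)
        conn_file.readline()                        # CRLF after chunk data

# Body size equals preview size: the zero chunk reads "0; ieof", so the
# server must not wait for a continuation of the body.
preview = io.BytesIO(b"5\r\nhello\r\n0; ieof\r\n\r\n")
data, complete = read_preview(preview)
```

A server that drops the `complete` flag here behaves correctly for every
body except the one that is exactly preview-sized, which matches the bug I
saw in the wild.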
2.
The same question from Markus points me to another feature that got on my
wish list last year: although I know that we want to concentrate on HTTP,
section 3.11 encourages us to define a protocol that could be used for any
application-layer protocol. And unlike most application protocols, which
have one source and one destination, SMTP has one sender but many
recipients. An OPES protocol that is prepared to encapsulate SMTP messages
should be aware that a callout service may have different results for
different recipients.
Therefore I would welcome it if the OPES protocol were able (or could easily
be extended) to send multiple replies to one request.
Hmmmm - I'm not sure that there's a clear need for multiple replies,
interesting as it is. Is this example for OPES located between the
sender's mail agent and the SMTP server? Or for OPES located between
an SMTP server and the users, working on a message sent to a local
alias that is a list? Somehow the callout server (or the OPES box?)
knows the expansion of the alias?
This example is for an OPES box that works as an intermediate SMTP gateway.
It could be the sender's SMTP server itself or a later gateway in the
forwarding chain.
If this box receives a message for 20 users of a single domain, it is a bad
idea to first create 20 copies of the message and send
one-sender-one-recipient pairs to the callout service, when perhaps 18
copies are filtered with the same settings but 2 copies have a different
filter result.
I would like a protocol that allows the box to forward the SMTP message as
is, and the callout server to reply: "Here is a modified message for 18
recipients; a second message copy for the other two recipients follows
right after."
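The callout side of this can be sketched in a few lines. Everything here is
hypothetical (the `group_replies` name, the `verdict` callback, the verdict
labels); it only illustrates why grouping recipients by filter result beats
making 20 single-recipient copies up front.

```python
def group_replies(recipients, verdict):
    """Group recipients by their per-recipient filter verdict, so the
    callout server can send one reply per group instead of one reply
    per recipient."""
    groups = {}
    for rcpt in recipients:
        groups.setdefault(verdict(rcpt), []).append(rcpt)
    return groups

# 20 recipients at one domain, 2 of which get a different filter result:
rcpts = ["user%02d@example.com" % i for i in range(20)]
flagged = {"user03@example.com", "user07@example.com"}
groups = group_replies(rcpts,
                       lambda r: "modified" if r in flagged else "unchanged")
```

With this shape, the callout server sends two replies on one request: one
message copy covering the 18 "unchanged" recipients and one covering the 2
"modified" ones.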
[...]
I am thinking of a very lightweight but still powerful and flexible
protocol, building on components of HTTP/HTTPS.
Knowing only a little about SOAP and BEEP, I think that they are too complex
to serve as a direct base for the OPES protocol, although there seem to be
very interesting concepts that could be integrated.
SOAP might be considered complicated, but BEEP seems to be an entirely
different and simpler kind of thing. Possibly simpler than HTTP.
I would like an even simpler basic message structure. Please see my
argument in my response to Robert.
Martin