ietf-smtp

Re: Requesting comments on draft-cheney-safe-02.txt

2009-08-02 12:44:48

Peter,

The problem only exists in the realm of HTTP due to where code
actually executes for interaction.  HTTP is certainly not part of the
problem, but it also cannot be a part of the solution due to the
simplicity of its design.

What aspect of SMTP makes it superior to HTTP in this regard?

On the HTTP side interactive code MUST execute at the user-agent in
order to be interactive.  This is so because in HTTP resources are sent
to a user-agent for execution.  In SMTP the server is not a terminal
point in a transmission.  In SMTP, processing of a communication can
occur before that communication reaches its intended destination, and
that processing may include a response.  This feature is not possible in
HTTP since the server is always the destination.
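
As a rough sketch of that distinction, the following Python example
(assuming the third-party aiosmtpd package; the handler logic is purely
illustrative and onward relay is omitted) shows an SMTP hop acting on a
message while it is still in transit, something an HTTP origin server
has no opportunity to do because the request terminates there.

from aiosmtpd.controller import Controller

class InterceptingHandler:
    # Runs on an intermediate SMTP system, not at the user-agent and not
    # at the final destination.
    async def handle_DATA(self, server, session, envelope):
        body = envelope.content.decode("utf8", errors="replace")
        if "order" in body.lower():
            # The hop can process the communication and produce a
            # response before the message reaches its destination.
            print(f"Acting on mail from {envelope.mail_from} in transit")
        return "250 Message accepted for delivery"

if __name__ == "__main__":
    controller = Controller(InterceptingHandler(),
                            hostname="127.0.0.1", port=8025)
    controller.start()
    input("Listening on 127.0.0.1:8025; press Enter to stop\n")
    controller.stop()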


The user-agent cannot even control whether the code executes automatically
upon rendering of the page except to completely disable execution of
all client-side code.

The last sentence is wrong (some browsers do implement ways to
restrict client-side code by different criteria), and in any case
that doesn't seem to have anything to do with the protocol used to
fetch the (executable) contents: neither HTTP nor SMTP cares about the
contents, they just transport a header and a blob of data.

The only standard for limiting client-side execution is the prevention
of execution or data access against a domain different from that of the
resource which requested it.  Such a limitation does not prevent
client-side code from executing, but merely limits the result of that
execution where the limitation applies.  With that in mind, my statement
is still technically accurate.  All client-side code will execute upon
load of a page or in accordance with an event unless execution of all
client-side code is disabled in the user-agent.
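
To make the scope of that limitation concrete, here is a rough Python
sketch of a same-origin style check (the function and URLs are
hypothetical, and real browsers apply more detailed rules): it gates
cross-origin access to results, not execution itself.

from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    # Scheme, host, and port must all match for access to be allowed.
    a, b = urlsplit(url_a), urlsplit(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

# A script loaded from url_a still executes; the check only decides
# whether its results may touch data belonging to url_b.
# same_origin("https://a.example/app.js", "https://b.example/api") -> False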


What is gained by adding intermediate systems?
Intermediate systems offer a point of execution for code

You are contradicting yourself:

2) The server that owns the code is the exact point of execution for
that code.

So the code is not executed on one of the intermediate systems but on
the server which hosts the code. The intermediate systems just pass
through messages.

If the server that hosts the code is an intermediate system then I have
not contradicted myself.  That is the suggestion made in the
Internet-Draft.  SMTP servers are intermediate systems between the
originating user-agent and the destination address.


Technically speaking HTTP is also unidirectional.  HTTP typically
operates with a GET request

The POST request is also well-known and frequently used.

Yes, but that is not the point.  Even with a GET request, data can be
supplied back to the server by appending it to the query string of the
URI.  In that case, and with POST, instructions are sent to the server
in the expectation that a resource is designated so that the server may
respond.  The protocol is behaving in a unidirectional manner even if
additional software on the server processes this extra data to add a
perception of bidirectionality.  The protocol itself does not care what
data is supplied through such a glorified GET or POST.  It only cares
about what resource to send in response to a request.  The help supplied
to the protocol by server-side software certainly adds the perception of
bidirectional data processing, but that is not a feature of the protocol
itself.
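
A minimal sketch of that behaviour, using only the Python standard
library (the parameter handling is invented for illustration): the
server-side code can read whatever was appended to the query string,
but HTTP itself only maps the request onto a resource to send back.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Data "supplied back" to the server through the URI.
        params = parse_qs(urlparse(self.path).query)
        body = f"Resource served; server-side code saw {params}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        # The protocol's only concern: return a resource for the request.
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), EchoHandler).serve_forever()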

As a result HTTP is definitely unidirectional and SMTP is bidirectional.
SMTP has an expectation of bidirectional data transfer in that error
messages generated by the server may be sent to the originating
address and destination address simultaneously.  SMTP can be
multidirectional provided this same situation occurs where multiple
domains are immediately involved, such as a server sending the error
message to the originating address, the destination address, and a
maintenance address on a different domain.  Such multidirectional
transmission is not inherent to the proper function of the protocol, and
the SAFE model only requires bidirectional transmission.
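
A rough sketch of the bidirectional case described above, using
Python's standard smtplib (the host and addresses are hypothetical): a
relay generating an error notice and addressing it to the originating
and destination addresses in a single transaction.

import smtplib
from email.message import EmailMessage

def notify_both(originator: str, destination: str, reason: str) -> None:
    msg = EmailMessage()
    msg["From"] = "mailer-daemon@relay.example"   # assumed relay identity
    msg["To"] = f"{originator}, {destination}"
    msg["Subject"] = "Delivery problem"
    msg.set_content(f"The relay could not complete delivery: {reason}")
    with smtplib.SMTP("relay.example") as smtp:   # hypothetical relay host
        smtp.send_message(msg)                    # one message, two recipients

# notify_both("sender@a.example", "rcpt@b.example", "mailbox unavailable")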


If you don't allow user-triggered events to be processed, what is the
advantage compared to traditional server-side scripting?

On the web the server is the destination of the transmission.  Once data
is sent to the server, its use is terminated unless it is resent to the
client.  In the SAFE model data can be interpreted prior to reaching the
destination, much like client-side form validation using the onsubmit
event.
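
As a rough sketch of that idea (the field names are hypothetical), the
same sort of check an onsubmit handler performs in the browser could
run at an intermediate hop before the message reaches its destination:

def validate_in_transit(fields: dict[str, str]) -> list[str]:
    # Returns a list of problems; a non-empty list would trigger a
    # response before the message is relayed onward.
    errors = []
    if fields.get("email", "").count("@") != 1:
        errors.append("email address is malformed")
    if not fields.get("quantity", "").isdigit():
        errors.append("quantity must be a whole number")
    return errors

# validate_in_transit({"email": "user@example.com", "quantity": "3"}) -> []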


As somebody else already asked: What would be a typical use case for
your protocol?

The biggest difference between email and the web is that email is
inherently private and the web is inherently public.  The laws in the
United States reflect that observation.

The first example that comes to mind is ecommerce with a shopping cart,
form validation, suggested upselling, and personalized content and
preferences.  Since email is inherently private, the user has a higher
expectation of confidentiality and the vendor has greater freedom to
use data supplied by that user so long as the data is not transferred to
a third party without the express consent of both the user and the
vendor.  Shopping is not a public venture, and so it has no reason to
exist on the web except that conventions do not yet exist for allowing
shopping on a private medium, such as email.

A second example is advertising as a business on the WWW.
Web sites that are entirely dependent upon traffic in order to generate
advertising revenue are operating under a failing business model.
Advertising costs have been steadily increasing while revenue returned
from those ads has been decreasing almost proportionally.  Web sites
that offer a targeted product or that feature a targeted audience are
not experiencing these declines.  For several years it has been believed
the only solution to this decline is targeted advertising.  Directly
targeting users' interests and surfing habits in an attempt to supply
relevant advertising has been ruled a privacy violation on the web in US
federal court.  In email, however, such targeting is perfectly legal so
long as the data collected is not transferred to a third party.  That is
an important consideration for adaptive business models even before
consideration for advertising.

If business solutions can be leveraged to use privately supplied data to
increase revenue, such solutions would displace reliance upon advertising
as a primary consideration for revenue generation.


Thanks,
Austin