ietf-smtp

Re: Keywords for "SMTP Service Extension for Content Negotiation"

2002-07-13 12:32:13

I think this is a fine piece of theoretical analysis which unfortunately has no
applicability to the real world. In the real world, directory/database systems
that let sites provide Internet-wide access to information about their users do
not get deployed.

Widespread deployment of LDAP as a public directory service was hampered
by several factors. Despite the designers' intention that it be used
for publicly accessible information, it was widely hyped as a generic
database access protocol, so an organization's LDAP servers were
typically filled with information that the organization wouldn't dream of
making publicly available.  Also, there wasn't an obvious global namespace
or infrastructure that was easy for LDAP users to tie into, nor any
widespread understanding of why it would be a good idea to do so.

LDAP is but one example, and in some ways not a good one. There are plenty of
others.

None of this experience seems particularly applicable to the question at hand,
where we are comparing two different mechanisms for exporting only a small
amount of information - SMTP, or some other protocol (of which LDAP is only a
token example).  If an enterprise is reluctant to export "information about
their users" in general, then they will be about as reluctant to do so via
SMTP as via some other protocol.

I'm afraid this is another set of theoretical considerations that flies in the
face of actual experience. The obvious counterexample is HTTP.

I do think it's reasonable to assume that any other protocol server is likely
to be viewed as a separate maintenance burden that needs to be justified,
whereas a wart on the side of SMTP will probably be viewed as just some
incremental pain that must be endured.  But if large sites end up needing a
separate database server anyway to support CONNEG because the MX servers
don't have direct knowledge of recipient capabilities, then that separate
maintenance burden is there anyway, at least for those sites.

While I obviously cannot speak for all large sites, I can tell you that
database usage is sufficiently pervasive inside the sites I'm familiar with
that this is basically a nonissue.

I also believe that trying to use LDAP for this purpose would be a marketing
disaster. There is already such a mindset about what LDAP is used for, and
what kind of data goes into it, that any proposal to expose this specific
piece of information using the LDAP protocol will immediately be attacked
by people who are concerned that other information often stored in LDAP will
be inappropriately leaked.

Nowhere did I mention using LDAP to expose this information, so this appears
to me to be a complete straw man argument.

The IETF has tried to tackle this problem several times, and indeed continues
to do so now in the RESCAP WG. But I don't think anyone seriously believes
that even if RESCAP produces some specifications it is going to succeed at
its intended scale. Frankly, the silence is pretty deafening over there.

I suspect the lack of significant interest in rescap is because it lacks a
killer app: there are many uses for rescap, but no single application sees a
big enough win to drive the people who work on that application to contribute
to rescap.  That, and each of those uses involves some deployment pain; the
first one involves significant pain, so nobody wants to be first.  Another way
of saying this is that (at least in the apps area) we tend to divide our work
around product lines - we tend to work on email, fax, voice mail, web
replication, instant messaging, and so on - and rescap cuts across all of
these.

(On the other hand, any technology that is initially driven by a narrow set
of interests tends to get warped to serve that set of interests at the
expense of others, even if the technology is generally useful.)

I'm not sure whether this is good or bad, nor whether it is inherent or an
artifact of our organizational structure.   Mostly I view this as a given.
There are exceptions, but they're rare, at least in the apps area.

But there's nothing stopping fax people, or email people, or voice mail
people from considering a standard protocol for the lookup of recipient
capability information and (separately) from considering whether that
protocol should be accessed directly or through SMTP.  And while I suspect,
intuitively, that accessing that information via SMTP is actually a
better way to go overall, I'd like to see some more analysis on it rather
than a quick dismissal based on experience with LDAP.

First, this is not a quick dismissal; it is a conclusion based on decades of
experience, experience in which LDAP plays only a small and relatively recent
part.

Second, I believe the notion of using a directory for this sort of thing has
been discussed in the FAX WG on several occasions. And while it is axiomatic
that nothing precluded the WG from choosing to use a directory, nothing
required that it pursue one either. I take the rather stunning lack of
interest in this overall approach as yet another sign that it is seen as
nonviable.

for instance:

One attractive thing about using SMTP is that SMTP servers for small
sites can implement CONNEG without a separate server, so the cost is
kept low for small sites.  Another one is that if there is a separate
server then it's probably easier for the mail system administrator to
configure his/her MTAs to talk to that server (and to fix bugs in
that configuration) than for the mail system administrator to configure
DNS so that external queries go to that server.

If queries are made via SMTP then the interface to the database is not
standardized, so MTA vendors end up supporting multiple interfaces.
Some might see this as an advantage, since it allows use of a variety
of existing back-ends.  Actually, enough MTAs support mail forwarding via
LDAP (or another database) that getting recipient capabilities from the same
source is probably a straightforward extension.

There's no "probably" about it, although it is also true that more generic
callouts aren't exactly rocket science to implement.
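The kind of callout being discussed can be sketched abstractly. The interface,
names, and media-feature string below are all hypothetical illustrations, not
any MTA's real API; the point is only that an MTA which already resolves
forwarding addresses through a pluggable back-end can fetch recipient
capabilities through the same interface:

```python
# Hypothetical sketch: one lookup interface serving both routing and
# capability data. In a real deployment the back-end would be LDAP or
# another database; an in-memory dict stands in for it here.

class RecipientBackend:
    """Single lookup source for forwarding and capability records."""

    def __init__(self):
        self._records = {}

    def add(self, address, forward_to=None, capabilities=()):
        self._records[address] = {
            "forward_to": forward_to,
            "capabilities": tuple(capabilities),
        }

    def forwarding_address(self, address):
        rec = self._records.get(address)
        return rec["forward_to"] if rec else None

    def capabilities(self, address):
        rec = self._records.get(address)
        return rec["capabilities"] if rec else ()


backend = RecipientBackend()
backend.add("fax-user@example.com",
            forward_to="gateway@example.com",
            capabilities=("image/tiff; profile=F",))

# The same lookup source answers both the routing question and the
# capability question:
print(backend.forwarding_address("fax-user@example.com"))  # gateway@example.com
print(backend.capabilities("fax-user@example.com"))
```

An MTA that already does its forwarding lookups this way only needs one more
query method, which is why the extension is straightforward.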

And assuming that the SMTP client-to-server connection has a significantly
longer delay path than the SMTP server-to-database connection (which seems
reasonable), it could take less time overall to deliver the message via SMTP
alone than via SMTP plus a separate database query, provided the queries can
be made in the same SMTP session in which the message is delivered.
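The timing argument can be made concrete with a toy round-trip model. All of
the numbers below (RTTs, the count of SMTP round trips) are invented for
illustration, not measurements:

```python
# Toy model of the delay argument: negotiating inside the existing SMTP
# session costs the server one short local database round trip, while a
# separate sender-side capability lookup costs a full extra round trip
# on the long WAN path before delivery even starts.

WAN_RTT_MS = 100   # SMTP client <-> SMTP server (assumed long path)
LAN_RTT_MS = 1     # SMTP server <-> its local database (assumed short)

SMTP_ROUND_TRIPS = 5   # greeting, EHLO, MAIL, RCPT, DATA (simplified)

# Option 1: capabilities handled in the same SMTP session; the server
# consults its local database while answering.
in_session = SMTP_ROUND_TRIPS * WAN_RTT_MS + LAN_RTT_MS

# Option 2: the sender first queries a separate capability server across
# the WAN, then delivers over SMTP as usual.
separate_lookup = WAN_RTT_MS + SMTP_ROUND_TRIPS * WAN_RTT_MS

print(in_session, separate_lookup)   # 501 600
```

Under these (assumed) delay ratios the in-session approach wins, and the gap
only grows as the WAN/LAN delay ratio increases.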

I think there are benefits that favor a rescap-like approach as well.

One is that it allows the conversion to be done by the sender's
MUA - or at any rate as early as possible - whereas with SMTP
this is not always possible because a number of sites either
block access to port 25 on nonlocal servers, or impose interception
proxies that pretend to be the remote server even if it's really a
relay or firewall.  I'm very much in favor of having negotiation
and transformation work predictably and with accountability to the
sender rather than having some random MTA in the chain decide that
a conversion is needed.

I agree that predictability is nice, but that's not something any approach can
ever guarantee. Indeed, I can argue that a directory approach will lead to less
predictability, not more, if for no other reason than directory access will
tend to work locally but not remotely. SMTP access with appropriate caching, on
the other hand, has the potential to work both locally and globally.
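The caching point can be illustrated with a minimal sketch. Everything here
(the class, the TTL, the stand-in lookup function) is hypothetical and not
drawn from any real MTA; it just shows how remembering an earlier capability
answer lets subsequent messages skip the remote round trip:

```python
import time

# Minimal TTL cache for capability answers: a sketch of how an SMTP
# client could remember what a remote server advertised for a recipient.

class CapabilityCache:
    def __init__(self, ttl_seconds=3600):
        self._ttl = ttl_seconds
        self._entries = {}          # key -> (expiry_time, value)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() > expiry:
            del self._entries[key]  # stale: force a fresh query
            return None
        return value

    def put(self, key, value):
        self._entries[key] = (time.monotonic() + self._ttl, value)


remote_lookups = 0

def query_remote(address):
    """Stand-in for an expensive remote SMTP capability query."""
    global remote_lookups
    remote_lookups += 1
    return ("text/plain", "image/tiff")

cache = CapabilityCache(ttl_seconds=3600)

def capabilities_for(address):
    caps = cache.get(address)
    if caps is None:
        caps = query_remote(address)
        cache.put(address, caps)
    return caps

capabilities_for("user@example.net")
capabilities_for("user@example.net")   # second call served from cache
print(remote_lookups)                  # 1
```

With a cache like this, the remote query cost is paid once per recipient per
TTL rather than once per message, which is what makes SMTP-based access
workable both locally and globally.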

                                Ned
