I think this is a fine piece of theoretical analysis which unfortunately has
no applicability to the real world. In the real world, directory/database
systems that let sites provide Internet-wide access to information about
their users do not get deployed.
Widespread deployment of LDAP as a public directory service was hampered
by several factors. Despite the designers' intention that it be used
for publicly accessible information, it was widely hyped as a generic
database access protocol, so an organization's LDAP servers were
typically filled with information that it wouldn't dream of making publicly
available. Also, there wasn't an obvious global namespace or infrastructure
that was easy for LDAP users to tie into, nor any widespread understanding of
why it would be a good idea to do so.
None of this experience seems particularly applicable to the question at hand,
where we are comparing two different mechanisms for exporting only a small
amount of information - SMTP, or some other protocol (of which LDAP is only a
token example). If an enterprise is reluctant to export "information about
their users" in general, then they will be about as reluctant to do so via SMTP
as via some other protocol.
I do think it's reasonable to assume that any other protocol server is likely
to be viewed as a separate maintenance burden that needs to be justified,
whereas a wart on the side of SMTP will probably be viewed as just some
incremental pain that must be endured. But if large sites end up needing a
separate database server anyway to support CONNEG, because the MX servers
don't have direct knowledge of recipient capabilities, then that separate
maintenance burden is there anyway, at least for those sites.
I also believe that trying to use LDAP for this purpose would be a marketing
disaster. There is already such a mindset about what LDAP is used for, and
what kind of data goes into it, that any proposal to expose this specific
piece of information using the LDAP protocol will immediately get attacked
by people who are concerned that other information often stored in LDAP will
be inappropriately leaked.
The IETF has tried to tackle this problem several times, and indeed continues
to do so now in the RESCAP WG. But I don't think anyone seriously believes
that even if RESCAP produces some specifications it is going to succeed at
its intended scale. Frankly, the silence is pretty deafening over there.
I suspect the lack of significant interest in rescap is because it lacks a
killer app: there are many uses for rescap, but no one application sees a
big enough win to drive the people who work on that application to
contribute to rescap. On top of that, each of those uses involves some
deployment pain - and the first one involves significant pain, so nobody
wants to be first. Another way of saying this is that (at least in the apps
area) we tend to divide our work around product lines - we work on email,
fax, voice mail, web replication, instant messaging, and so on - and
rescap cuts across all of these.
(on the other hand, any technology that is initially driven by a narrow set
of interests tends to get warped to serve that set of interests at the
expense of others, even if the technology was generally useful.)
I'm not sure whether this is good or bad, nor whether it is inherent or an
artifact of our organizational structure. Mostly I view this as a given.
There are exceptions, but they're rare, at least in the apps area.
But there's nothing stopping fax people, or email people, or voice mail
people from considering a standard protocol for the lookup of recipient
capability information and (separately) from considering whether that
protocol should be accessed directly or through SMTP. And while I suspect,
intuitively, that accessing that information via SMTP is actually a
better way to go overall, I'd like to see some more analysis on it rather
than a quick dismissal based on experience with LDAP.
For instance:
One attractive thing about using SMTP is that SMTP servers for small
sites can implement CONNEG without a separate server, so the cost is
kept low for small sites. Another one is that if there is a separate
server then it's probably easier for the mail system administrator to
configure his/her MTAs to talk to that server (and to fix bugs in
that configuration) than for the mail system administrator to configure
DNS so that external queries go to that server.
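To make the "no separate server" point concrete, here is a minimal sketch of a client checking whether a server's EHLO reply advertises an in-band capability-query extension. The keyword "CONNEG" and the reply format below are illustrative assumptions - no such SMTP extension syntax is specified here:

```python
# Hypothetical sketch: detect a (made-up) CONNEG keyword in an EHLO reply,
# so capability queries could ride the same connection used for delivery.

def parse_ehlo(lines):
    """Return the set of extension keywords from EHLO reply lines."""
    exts = set()
    for line in lines[1:]:          # first reply line is the server greeting
        if line.startswith("250"):  # "250-KEYWORD params" or "250 KEYWORD"
            body = line[4:].strip()
            if body:
                exts.add(body.split()[0].upper())
    return exts

reply = [
    "250-mx.example.org greets client",
    "250-SIZE 10485760",
    "250-CONNEG",          # hypothetical capability-query extension
    "250 8BITMIME",
]
exts = parse_ehlo(reply)
print("CONNEG" in exts)    # the client may now query capabilities in-session
```

If the keyword is present, even a small site's single SMTP server can answer capability queries itself, with no separate directory service to run.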
If queries are made via SMTP then the interface to the database is not
standardized, so MTA vendors end up supporting multiple interfaces.
Some might see this as an advantage, since it allows use of a variety
of existing back ends. In fact, enough MTAs already support mail forwarding
via LDAP (or some other database) that getting recipient capabilities from
the same source is probably a straightforward extension.
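The "straightforward extension" amounts to asking for one more attribute in the lookup the MTA already does. A sketch, where the attribute names ("mailRoutingAddress", "mailCapabilities") are illustrative assumptions rather than any standard schema:

```python
# Sketch: an MTA that resolves recipients via an LDAP search can request
# recipient capabilities as an extra attribute in the same query.

def routing_query(recipient, want_capabilities=False):
    """Build an LDAP search filter string and attribute list for a recipient."""
    filt = "(mail=%s)" % recipient
    attrs = ["mailRoutingAddress"]           # what the MTA fetches today
    if want_capabilities:
        attrs.append("mailCapabilities")     # hypothetical extra attribute
    return filt, attrs

print(routing_query("user@example.org", want_capabilities=True))
# -> ('(mail=user@example.org)', ['mailRoutingAddress', 'mailCapabilities'])
```

The point is that the extra data rides on a query, and a back-end interface, that the MTA performs anyway.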
And assuming that the SMTP client to SMTP server connection has a
significantly longer delay path than the SMTP server to database connection
(which seems reasonable), it could take less time overall to deliver
the message via SMTP alone than via SMTP plus a separate database query,
provided the capability queries can be made in the same SMTP session in
which the message is delivered.
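Some back-of-envelope arithmetic for that claim, under assumed round-trip times (a long sender-to-MX path and a short MX-to-database hop; all numbers are illustrative):

```python
# Assumed round-trip times; both are illustrative, not measured.
SMTP_RTT = 0.200   # sender <-> MX, in seconds
DB_RTT = 0.002     # MX <-> capability database, in seconds

# Separate lookup first, then delivery: one extra long round trip
# (assume delivery itself costs roughly 4 round trips: EHLO, MAIL,
# RCPT, DATA).
lookup_then_deliver = SMTP_RTT + 4 * SMTP_RTT

# In-session query: the MX consults its database during the SMTP
# exchange, so the only added cost is the short MX-to-database hop.
in_session = 4 * SMTP_RTT + DB_RTT

print(lookup_then_deliver, in_session)   # the in-session approach is faster
```

With these assumptions the in-session approach saves nearly a full long round trip; the exact numbers don't matter, only that the long path dominates.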
I think there are benefits that favor a rescap-like approach as well.
One is that it allows the conversion to be done by the sender's
MUA - or at any rate as early as possible - whereas with SMTP
this is not always possible because a number of sites either
block access to port 25 on nonlocal servers, or impose interception
proxies that pretend to be the remote server even if it's really a
relay or firewall. I'm very much in favor of having negotiation
and transformation work predictably and with accountability to the
sender rather than having some random MTA in the chain decide that
a conversion is needed.
Another is that the mechanism could be used for other things besides
just recipient capabilities - say to distribute a recipient's
public key, to opt out of spam, etc. These could also be provided
via SMTP but some of them absolutely require access by the sender's
UA to be useful.
Keith