ietf

Re: Last Call: SMTP Service Extension for Content Negotiation to Proposed Standard

2002-07-06 00:41:57
--On Wednesday, 03 July, 2002 22:59 -0700 
ned(_dot_)freed(_at_)mrochek(_dot_)com
wrote:

I, at least, would find such comments helpful.  But I'd also
like to hear comments from at least one person responsible
for a general-purpose MTA (as distinct from a fax-over-email
engine) who expects to implement this feature and how they
propose to do so and how this feature fits in.

There are two cases here: Client and server implementations.

I definitely plan to implement the server side of this
proposal. From that perspective all of the proposed variants
are easy to implement as long as you have a source of conneg
information to tap. I plan to use an LDAP attribute for this
purpose.
...

Interesting.  Instead of "ask the receiving client", you would
rely on a database that purports to know the client's behavior
and capabilities.  That certainly makes implementation easier
and might also deal with the relay situation by permitting the
first (and subsequent?) relays to ask the database rather than
replying "no clue about the client, I don't deliver to it" (the
latter is one of the cases I've been concerned about).  However:

(i) It would seem to me to be reasonable to write this option
down (as an option) rather than leaving implementers guessing.

Writing it down would be fine with me, although to be honest this struck me as
the obvious way to do it. OTOH, there certainly have been cases in the past
when I refrained from documenting what I thought was obvious in one of my own
drafts, only to find it was nothing of the sort.

(ii) Given the notorious "wks" experience, with trying to put
target host capabilities into an ancillary database, it seems to
me that this would further justify experimental status so that
we can get some sense as to whether the database would be kept
adequately up-to-date.  Certainly our experience should cause us
to be skeptical about that.

I think the differences between this and wks far outweigh the similarities.
Specifically, the major problem with wks records was that the authority
over/responsibility for wks records and authority over/responsibility for the
aspects of systems wks describes tended to be disjoint. Back in the days when I
maintained a reasonably hefty DNS zone I never had enough time to keep wks
records up to date with all the changes the various sysadmins kept making, and
after a while I stopped trying. And of course the sysadmins didn't have the
means to maintain this information in the DNS themselves.

The situation with an LDAP directory used for messaging is quite different.
Such directories often already maintain a bunch of user-settable information
(e.g., filtering rules, personal address book) and typically provide the means
(e.g., a web interface) for users to maintain and modify that information
themselves.

Of course having such a capability doesn't mean users will actually take the
time to use it. I suspect that's going to be very dependent on both need and
the quality of the user interface; presenting raw media feature sets definitely
isn't the way to go ;-)
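To make the directory idea concrete, here is a minimal sketch of what a
per-user conneg lookup might look like. The attribute name
"mailConnegFeatures" is invented for illustration, the feature set syntax
just follows the general RFC 2533 style, and a plain dict stands in for
the LDAP directory; none of this comes from the draft itself.

```python
# Hypothetical sketch: conneg data stored as a user-settable directory
# attribute, alongside other per-user data like filtering rules. The
# attribute name and feature set contents are invented examples.

DIRECTORY = {  # stands in for an LDAP directory keyed by mail address
    "user@example.com": {
        "mailConnegFeatures": "(& (image-file-structure=TIFF-minimal)"
                              " (color=Binary) )",
        "mailFilterRules": "...",  # other user-maintained information
    },
}

def conneg_for(recipient):
    """Return the stored media feature set for a recipient, or None."""
    entry = DIRECTORY.get(recipient)
    return entry.get("mailConnegFeatures") if entry else None
```

The point is simply that the server side reduces to an attribute fetch
once such a source of conneg information exists.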

The client side for a general-purpose MTA is much harder, but
not because of any aspect of the protocol. The issue I have
there is how to match up an arbitrary set of site-provided
conversion operations with the current group of recipient
feature sets. I don't see a good solution to this other than
making the whole thing something the site specifies, and if I
do that I see little chance of the capability actually being
used by enough sites to justify the cost of implementation.
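The matching problem described above can be sketched roughly as follows.
Each site-provided converter would have to advertise the features of its
output, and the MTA would have to find one whose output satisfies the
recipient's feature set. Flat key=value pairs stand in for full RFC 2533
expressions here, and the converter names are invented; a real
implementation would need a proper feature set matcher.

```python
# Invented example converters, each advertising its output's features.
CONVERTERS = {
    "ps-to-tiff": {"image-coding": "MH", "paper-size": "A4"},
    "ps-to-pdf":  {"file-format": "PDF"},
}

def pick_converter(recipient_features):
    """Return the first converter whose output meets every constraint."""
    for name, output in CONVERTERS.items():
        if all(output.get(k) == v for k, v in recipient_features.items()):
            return name
    return None  # no site-provided conversion fits this recipient
```

Even this toy version shows where the pain is: the converter table and
its feature descriptions all have to come from the site.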

One final comment. A general-purpose MTA using arbitrary
site-provided conversion routines doesn't have the luxury of
being able to bound the performance or other characteristics
of those conversion routines. This makes the entire
convert-on-the-fly scenario problematic in ways that don't
arise for a device built around a specific, limited and well
understood set of on-the-fly converters, regardless of whether
that device is a fax machine or something else entirely.
Indeed, if I were to implement based on past bad experiences
with various general-purpose conversion routines I'd be likely
to break the problem down into three steps: (1) Collection of
conneg information online, (2) Message splitting and
conversion offline, (3) Sending the resulting messages to
their destination. I doubt I'd support using REQUIRED in (1).
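The three-step breakdown might be sketched like this; the function bodies
are invented placeholders, with feature sets represented as opaque strings
so that recipients sharing a feature set can be grouped into one
conversion job.

```python
def collect(recipients, lookup):
    # Step 1, done online during the SMTP transaction: gather each
    # recipient's conneg information from whatever source is available.
    return {r: lookup(r) for r in recipients}

def split_and_convert(message, conneg):
    # Step 2, done offline: one conversion job per distinct feature set,
    # so slow or misbehaving converters never stall the transaction.
    variants = {}
    for rcpt, features in conneg.items():
        variants.setdefault(features, []).append(rcpt)
    return [(message, feats, rcpts) for feats, rcpts in variants.items()]

def deliver(jobs, send):
    # Step 3: send each converted variant to its recipients.
    for message, features, rcpts in jobs:
        send(message, rcpts)
```

The offline middle step is what sidesteps the unbounded-performance
problem with arbitrary site-provided converters.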

I can only agree with both of these comments and compliment you
on the clarity of your explanation of them.

Thanks!

I had a couple of other thoughts about this over the past couple of days:

(0) There are/will be additional ways to obtain conneg information: In
    DSNs or MDNs, through LDAP queries, via RESCAP, or through direct
    database queries within a single mail system. The database/directory
    mechanism can be used to advantage in all of these cases.

(1) Caching of conneg information is certainly possible and may even
    be desirable. Indeed, conneg information returned in DSNs or MDNs will
    have to be cached in order to be useful. This document doesn't discuss
    caching, and while caching isn't required in the ESMTP context some
    discussion of it nevertheless makes sense.
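    A sketch of the kind of cache (1) alludes to: conneg information learned
    from a DSN or MDN is only useful if it is remembered for later messages,
    and stale entries have to expire eventually. The TTL value below is an
    invented placeholder; nothing in the draft specifies one.

```python
import time

class ConnegCache:
    """Remember per-recipient conneg data learned from DSNs/MDNs."""

    def __init__(self, ttl=7 * 24 * 3600, clock=time.time):
        # ttl is an arbitrary illustrative default (one week).
        self.ttl, self.clock, self._data = ttl, clock, {}

    def store(self, recipient, features):
        self._data[recipient] = (features, self.clock())

    def lookup(self, recipient):
        hit = self._data.get(recipient)
        if hit is None:
            return None
        features, when = hit
        if self.clock() - when > self.ttl:  # stale entry: drop it
            del self._data[recipient]
            return None
        return features
```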

(2) The use of caches really opens the door to conneg in the context
    of relays in ways that nothing else does. For example, imagine a relay
    that doesn't do any content transformation of its own but does do forward
    conneg requests to populate a cache. It then offers the information it
    has in that cache when it acts as a server. This may sound kind of cool
    at first glance, but as the cache comes and goes it would tend to
    violate the least astonishment principle in fairly major ways. 
    Issues also arise if multiple relays are used, each with a separate
    cache. Care is needed here to preserve the ability to "rack and stack"
    mail relays without undue consequences.

    This isn't a typical cache since the goal isn't to minimize the number
    of forward queries that are done. The intent here is to provide the
    information from those queries in a completely different context.

    The variability problems here can be solved by going to either
    extreme: Ban caches of this sort entirely or insist that components
    that actually perform content transformations cache conneg information
    for a fairly long period. (It should be obvious that having intermediates
    cache information is risky and doesn't solve the problem.) At first I
    thought the right answer was to ban this stuff, but on reflection I think
    the use of DSNs and MDNs to return conneg information argues for a
    mandatory cache in the transformation agent.

                                Ned


