ietf-smtp

Re: rfc2821bis-03 Issue 35: remove source routes from example D.3

2007-05-01 14:28:58
On 2007-05-01 14:03:58 -0400, John C Klensin wrote:
--On Tuesday, May 01, 2007 6:52 PM +0200 Frank Ellermann 
<nobody(_at_)xyzzy(_dot_)claranet(_dot_)de> wrote:
Actually it should verify jones(_at_)xyz at the time when it decides
to accept the mail, but at this time xyz is down, and so that's
not possible.

That verification can be done even if xyz is down, if foo has the
necessary information.

The verification business after it has accepted
the mail is pointless; it's too late, and foo.com can only pray
that xyz.com eats its spam.

It seems to me that this is the key problem.   Even if we 
(temporarily) ignore the constraints on what can go into 
2821bis, the "backup MX" function -- a server that is not 
expected to receive mail for a particular domain unless the 
servers that normally support that domain are down or otherwise 
offline from the public network -- is an important, even if (one 
hopes) rarely-used, function.  Unlike some other uses of MX 
records, that situation poses both a technical and an 
administrative/economic problem.  The first is essentially a 
race condition: no matter what measures are taken, if the relay 
(standby MX server) cannot establish connectivity to the server 
for the target domain in real-time, it has no possible way to 
know what addresses are valid there.  The very best it can do is 
to guess on the basis of knowledge obtained while there was 
connectivity.

True, but the set of valid addresses probably doesn't change much on a
server without connectivity.

So if the backup MX syncs frequently with the primary MX, it should have
a good idea which addresses are valid and which aren't. Frequently the
list of valid addresses isn't maintained on the primary MX anyway but on
a third system (customer database, central LDAP server, whatever), so
you can use the same mechanism to get updates to both MXs.

Some scenarios make that more difficult (e.g. some keyed-address
schemes), but these are probably exactly the scenarios where you really
want the secondary MX to be in sync with the primary.
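To make the idea concrete, here is a minimal sketch (my own, not from any real MTA) of a backup MX consulting a locally synced address list at RCPT time; the file format, function names, and reply texts are illustrative assumptions:

```python
# Hypothetical sketch: a backup MX checks a locally synced set of
# valid addresses at RCPT time, so it can reject unknown users even
# while the primary is unreachable.

def parse_address_list(lines):
    """Build the valid-address set from the synced file's lines (the
    file itself would be refreshed from the primary's source of
    truth: LDAP, customer database, whatever)."""
    return {line.strip().lower() for line in lines if line.strip()}

def rcpt_reply(recipient, valid_addresses):
    """Return an SMTP reply line for a RCPT TO command."""
    if recipient.lower() in valid_addresses:
        return "250 OK"
    # Rejecting here avoids accepting mail that would bounce later.
    return "550 5.1.1 User unknown"
```

The point of the sketch is only that the lookup needs no connectivity to the primary at SMTP time; keeping the list fresh is the separate sync problem discussed above.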


The administrative/economic one is that there are disincentives 
for the administrators of the relay machine and those of the 
target machine to try to keep up-to-date user lists on the relay 
machine.  It requires more coordination and more resources than 
simply providing a backup MX function, and requires them on a 
regular and continuing basis, even when the target machine is 
almost always available.

Also true, but a backup MX which has a different configuration than the
primary is worse than none in many (most?) situations. The days where
"accept everything for domain X and relay it to a lower numbered MX" was
a useful backup MX configuration are unfortunately over.

It also raises some privacy issues. 
In addition, we don't have any established / standardized way to 
transfer and store the needed information (one could imagine a 
DNS-like "poll for updates" protocol, but, at least to my 
knowledge, we don't have one of those).

Replicated LDAP servers are probably as close to "standardized" as you're
going to get. That stuff is quite implementation- and site-specific (for
example, we store the config in a subversion repository and both MXs do
a "svn update" every few minutes).
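As a rough illustration of that pull-based approach (paths and the single-pass wrapper are my assumptions, not our actual setup), each MX could run something like this from cron:

```python
# Hypothetical sketch of the periodic config pull: each MX refreshes
# its working copy of the shared configuration from the repository,
# so both primary and backup see the same address data.
import subprocess

def sync_config(workdir=".", cmd=("svn", "update")):
    """Run one sync pass over the shared config checkout; meant to
    be invoked from cron every few minutes on each MX.  Returns
    True if the sync command succeeded."""
    result = subprocess.run(list(cmd), cwd=workdir,
                            capture_output=True, text=True)
    return result.returncode == 0
```

Any pull mechanism with the same shape (rsync, LDAP replication, a database dump) would do; the design point is that both MXs pull from one source of truth rather than trying to keep each other in sync.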


While there is less of it today than there was a decade ago, we 
also have a situation, especially with some rural areas and 
developing countries, where an MX record points to an 
always-online server that acts as a temporary repository or 
gateway to the distribution server for a network that is only 
intermittently connected to the rest of the Internet (i.e., the 
link between the accessible server and the delivery server is 
only occasionally available and may not be accessible to the 
public Internet at all).

Yup. That is a case which should not be ignored.

For example, to say that a relay should make reasonable best efforts
(however those might or might not be defined) to verify the address
before accepting the mail but, if it cannot, may go ahead and try to
deliver it, dealing with the NDN case if necessary, would seem much
more plausible to me than trying to ban NDNs ... a ban that, if taken
seriously, would take a large number of areas out of even email
connectivity with the Internet.

I fully agree with this. I think that the RFC should encourage
implementors to check as much as possible at the MX and try to reject
rather than bounce, but acknowledge that this isn't always practical.
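That "verify if you can, otherwise accept" policy can be sketched as follows (the verifier interface and reply codes are my illustrative assumptions):

```python
# Hypothetical sketch: reject unknown recipients while real-time
# verification is possible, but fall back to accepting (and risking
# a later NDN) when the authoritative server is unreachable.

def rcpt_decision(recipient, verifier):
    """verifier(recipient) returns True/False, or raises
    ConnectionError when the authoritative server cannot be
    reached in real time."""
    try:
        known = verifier(recipient)
    except ConnectionError:
        # Cannot verify right now: accept rather than refuse all
        # mail for the domain; an NDN may be needed later.
        return "250 OK"
    if known:
        return "250 OK"
    return "550 5.1.1 User unknown"
```

The asymmetry is deliberate: a wrong rejection loses mail permanently, while a wrong acceptance only costs a bounce, so uncertainty resolves toward accepting.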

Section 6.2 already suggests that rejects are preferable to bounces ("If
they cannot be delivered, and cannot be rejected by the SMTP server
during the SMTP transaction, they should be 'bounced'"), but that is a bit
weak. So maybe an extra sentence or two here would help. I just can't
find a good place. Maybe just a paragraph at the end:

    Many of the problems described above can be avoided by not accepting
    a mail in the first place. An SMTP server (especially one acting as
    an MX) SHOULD make every reasonable effort to determine whether the
    message will be delivered to the recipient's mailbox and reject the
    message during the SMTP transaction if that is not the case.

        hp


-- 
   _  | Peter J. Holzer    | I know I'd be respectful of a pirate 
|_|_) | Sysadmin WSR       | with an emu on his shoulder.
| |   | hjp(_at_)hjp(_dot_)at         |
__/   | http://www.hjp.at/ |    -- Sam in "Freefall"
