KLENSIN@infoods.mit.edu (John C Klensin) writes:
[ In a nutshell, that I'm being naive. In a nutshell, I don't think so. ]
> a-priori way to tell if the remote is supposed to handle "new protocol"
But there really isn't other than trying.
Trying it does not need to involve setting up and tearing down a TCP
connection, as well as an SMTP handshake or two. It could be as cheap
as (equivalent to) asking a remote nameserver a question. For example,
a UDP ping on a well-known port. This could be done at routing time.
Once the transition is deemed sufficiently complete, the test would be
removed from the code and SMTP would die off.
I don't think such a scheme would run into the problems traditionally
brought up as counterarguments.
> [the foibles of dictatorship]
The situation nowadays is rather different from what it was at the time
of the great DNS migration. I certainly haven't forgotten the pains I
went through in those days, but at the same time I believe the context
has changed, as follows:
The mid-to-late 1980s were the tail end of the heterogeneous and do-it-yourself
period of system support. We're now, as far as I can tell, in a situation
where a few vendors directly supply the vast majority of the software being
used on the Internet, and therefore indirectly control what everyone else
must (at least) catch up to. I agree that the IAB decreeing it is not going
to be enough, but if in addition someone has a heart-to-heart chat with
well-placed people at about 4 or 5 vendors, you'll see a lot of changes in
a big hurry. One must phrase things in a way that'll make them see a benefit
(or avoidance of loss) to making the change. If the changes are then handed
to them on a silver platter, that leaves the usual QC to overcome (but then,
few vendors seem to do much of that in advance of shipping the product).
> We have valid, conforming, SMTP servers out there that
> get into deep trouble when someone hits them with eighth-bit-on
> characters. If they have already gone belly-up, they are not in much of
> a position to "translate".
I'm not sympathetic to non-robust code as an argument for maintaining the
status quo. Since the transport is 8-bit, servers that cannot survive an
eighth-bit-on character are by definition non-robust.
I've been running 8-bit-transparent SMTP code since the mid-80s. I've
only ever seen one problem, in a Mac server that happened to use byte 255
(0xFF) as an EOF indicator; that was a bug in its TCP socket library... Perhaps
the magnitude of the problem is somewhat overstated (despite the 7-bit-ness
of most mail)?
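The Mac bug above reduces to a toy example: a receive loop that treats byte 255 as an in-band end-of-data marker truncates any 8-bit message containing that byte, while an 8-bit-clean loop passes all 256 byte values through untouched. The function names are mine, not from any real mail implementation.

```python
# Contrast a non-robust read loop (in-band 0xFF sentinel) with an
# 8-bit-clean one (only a zero-length read means end of data).
import io

def buggy_read(stream):
    """Non-robust: stops at the first 0xFF byte, losing the rest."""
    out = bytearray()
    while True:
        b = stream.read(1)
        if not b or b == b"\xff":   # 0xFF wrongly taken as EOF
            return bytes(out)
        out += b

def clean_read(stream):
    """8-bit clean: only a zero-length read signals end of data."""
    out = bytearray()
    while True:
        chunk = stream.read(512)
        if not chunk:               # true EOF: no sentinel byte needed
            return bytes(out)
        out += chunk

# An 8-bit message that happens to contain a 0xFF byte.
msg = "Grüße".encode("latin-1") + b"\xff trailing text"
print(buggy_read(io.BytesIO(msg)))          # truncated at the 0xFF
print(clean_read(io.BytesIO(msg)) == msg)   # True: nothing lost
```

The robust version survives every byte value because end-of-data is signaled out of band (a zero-length read), which is exactly what an 8-bit transport requires.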
Of course a "new protocol", with a-priori knowledge of who runs it, wouldn't
have this problem.
> The idea is a non-starter, really.
Only because most people are convinced it is.
ps: notice the purposeful pause in the discussion. Since it is very doubtful
anyone will change their mind, perhaps we could move further retorts to