John Klensin writes:
Let me take off from part of Ned's argument and suggest that we may be
in much better shape than it might appear by doing a little failure
analysis.
It seems to me that the concept that, if you can't present something
(video on teletypes, etc), well, you can't present it, is key to lots of
this and something that we have been periodically losing sight of. It
implies, for example, that, if I send a sound-and-light show off to a
general purpose mailing list, lots of the messages are not going to get
properly delivered and presented, and lots of the recipients are going
to get irritated (with me, or with the list, or with someone else).
Well, I'd certainly get irritated if the messages were not properly delivered.
RFC-XXXX is supposed to guarantee me that, at least.
Presentation, on the other hand, is my problem; if my UA cannot cope, I may be
able to find some equipment that can somewhere else.
that is independent of either transport issues or message format--
nothing we can do short of buying *everyone* a multimedia terminal is
going to solve it.
If the material is important enough for me to display and I have no other
alternative, this may well be the only option. But there's a spectrum of
possible responses:
(1) Send the poster a suggestion to use a content-type better suited to what I
    think the readership of the list can handle (or post the suggestion to the
    list, if I happen to think that peer pressure is the right approach). On the
other hand, if the mailing list is discussing NeXt audio, I would think
that posting audio is a very reasonable thing to do and I would expect
the readers and posters to defend the right to post audio very vigorously.
(2) Decide that the message warrants borrowing equipment to read it.
(3) Decide that the message warrants buying equipment to read it.
(4) Decide that the message is junk -- just delete it.
(5) Decide that my equipment is crap and I should renew my proposals to
upgrade it ;-)
Any or all of these may or may not be appropriate. But compare all this to
the similar scenario of someone posting in a foreign language few of the
readers know. Is there really that much difference?
Now, what happens when that message --again ignoring both format and
transport issues for the moment-- arrives at the UA or whatever tries to
manage presentation to the user?
I happen to apply many of the concerns you have expressed about MTAs to UAs. I
don't want a UA second-guessing what equipment I may or may not have available
(or that I've bothered to tell it about). At the very least I want the damn
bits there so I can do something with them, with or without the UA's
cooperation. I don't get upset at limited-function UAs, I get upset at
limited-function UAs that push their view of the world off onto me and don't
let me get around it.
I use a VAXstation. It does not support audio. But there's a NeXt system about
five feet from my VAXstation. Routing mail (even using the NeXt's squirrelly
multipart format) to it so I can display the audio is totally do-able for me.
But I have no idea how I'd tell my UA that this is possible (assuming I have
one that is extended in this fashion). I'm not sure that there is a language
general enough to describe all possible scenarios (if there's one that would
write the grant proposal requesting new equipment for me automatically, I'd
like to have a copy!).
It cannot be delivered in the fashion we
have normally thought of as "delivery".
I separate delivery and presentation totally. Maybe it is because I'm used to a
UA that makes this distinction (mine says "I cannot deal with this stuff, but I
can put it in a file for you if you like"), but I'm not too sure about that --
I think any knowledgeable user will want this option. And it will not hurt the
novice, whereas bouncing the message because of inadequate reception facilities
may well be very damaging. For example, some formats, like DEC's CDA, have a
"worst case" option that allows just the text part of a document to display on
_any_ hardware in a semi-reasonable manner... dropping a CDA bodypart because
I don't have a bitmapped display is antisocial. And this is not a fanciful
thing to consider, since ODA<-->CDA converters exist, and we are definitely
going to see ODA bodyparts in messages in the near future.
Maybe we bounce it at the UA
I'll bounce it manually, or program my delivery agent to bounce it for me,
thanks. I don't want the system doing this. Keep your hands off my mail!
Maybe we display all that nice transport-encoded gobbledygook on
the teletype and pretend that we have delivered it.
An unextended UA will do this. Of course I expect to see simple external
applications to unbundle multipart messages; anyone can avail themselves of
such technology if they choose. (But nested encodings will seriously challenge
the ability to deliver this technology.)
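Such an external unbundler really can be simple. A sketch of the core of one --
the boundary string and message below are invented examples, and a real tool
would also have to honor part headers and content-transfer-encodings, which is
exactly where nesting makes things hard:

```python
# Minimal multipart "unbundling": split a body on its boundary marker.
# In the multipart convention, "--boundary" starts each part and
# "--boundary--" closes the whole thing.

def split_multipart(body: str, boundary: str) -> list[str]:
    delimiter = "--" + boundary
    parts, current = [], None
    for line in body.splitlines():
        if line == delimiter:            # start of a new part
            if current is not None:
                parts.append("\n".join(current))
            current = []
        elif line == delimiter + "--":   # closing delimiter: flush and stop
            if current is not None:
                parts.append("\n".join(current))
            current = None
        elif current is not None:        # preamble/epilogue fall outside parts
            current.append(line)
    return parts

body = ("preamble\n--unique-boundary\npart one\n"
        "--unique-boundary\npart two\n--unique-boundary--\nepilogue")
print(split_multipart(body, "unique-boundary"))  # ['part one', 'part two']
```

Anyone can write this much; it is the nested-encoding case that turns a
one-screen filter into a real project.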
Maybe we encourage
the user to write it to a file in the hope that, someday, the tooth
fairy will donate a multimedia terminal and it will be possible to
display the thing.
Other scenarios are possible. See above.
Those who believe in warning messages might send one
out in these latter cases that would contain a translation of "delivered
to user, but not presented; might never be".
I want control over this. Frankly, I don't necessarily want people to know
what sort of facilities I use (OK, I admit it, I'm typing this at home on a
VT100 while watching Die Hard II for the Nth time on HBO; so shoot me).
Maybe we do need some norms for what to do in these cases, although,
traditionally, the Internet--designed for a kinder and gentler world in
which email was used to carry only those things that we could presume
everyone could read--has avoided discussing them. Let's consider that
part of a separate discussion, at least temporarily.
I agree that there is a need. A user's UA bill-of-rights would be nice. But I'd
start it the same way the Hippocratic oath starts -- first, do no harm.
For example, if I were writing specs for an RFC-XXXX-capable UA, I'd
want to include a verb that would permit a *user* to force bouncing of a
particular message, with text that said, e.g., "color video cannot be
interpreted at this site".
I would not consider such an option mandatory. I'd consider doing something
like this for a user without his/her consent to be antisocial.
And I'd like to see that go back with all of
the "error message/bounce" apparatus, including null envelope return
Yup. It needs standardization, like all error messages in e-mail do. This is,
of course, a separate issue.
[As an aside, for multipart messages, this implies a sort of line-item
veto, and one of the holes I see in RFC-XXXX is that the issues
associated with "I can/will accept part of that message, but not all of
it" have not been worked out at all.
The line-item veto analogy is an excellent one. I might want to veto on
the grounds that a message contains a single part that's offensive to me,
or I might want to veto only if the message contains no material that's
useful, or ...
This at least is not hard to describe. A language to describe it is not hard
to come up with.
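To make this concrete, here is a small sketch of the sort of line-item veto
language I have in mind. Everything here -- the part-type names, the policy
sets -- is invented for illustration; it is not a proposal for actual syntax:

```python
# A user-supplied veto predicate over the types of parts in a message.
# The "offensive" and "useful" policy sets are hypothetical examples.

def veto(part_types, offensive, useful):
    """Return True if the whole message should be vetoed."""
    # Veto if any single part is on the offensive list...
    if any(t in offensive for t in part_types):
        return True
    # ...or if no part at all is on the useful list.
    if not any(t in useful for t in part_types):
        return True
    return False

# A text+audio message survives if text is useful and nothing is offensive:
print(veto(["text", "audio"], offensive={"video"}, useful={"text"}))  # False
# A pure-audio message with no useful material gets vetoed:
print(veto(["audio"], offensive=set(), useful={"text"}))              # True
```

Either of the two veto styles described above falls out of which set the
user chooses to populate.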
The more we want MTAs to
"understand" RFC-XXXX and be able to fuss with it in transit, the more
important it is to start specifying the correct practices. ]
Any MTA that drops a message because of some preconceived notion of what I
want or do not want is _broken_. An MTA can bounce a message if there's a
transport-related problem with sending it on (like, for instance, no 8-bit path
available). But I don't like and strongly object to the X.400 model in this
situation. I dislike all the conversion, conversion-with-loss, and
conversion-prohibited stuff, but what I REALLY object to is the fact that this
is all at the option of the SENDER, not the receiver. Once someone sends me a
message, it is mine, not theirs. Message receivers have damn-little control
over what they can do about mail. I don't want to restrict it further.
The point is that there are some things that are not going to work
(interoperate) and, again, no amount of diddling with protocols or RFCs
is going to change that.
OK, let's expand that reasoning a little bit. Picking up from Mel's
argument, if I try to open an 8bit connection to something that can't
accept 8bit transport, it is going to reject the thing. Period. No way
to change that as long as we insist on verb-negotiation (and the
alternative is much worse). Again, no amount of protocol-writing will
fix that unless we can mandate an instant complete conversion, of which
there is no chance. As an extension of this, if we try to open an 8bit
connection to an intermediate MTA such as a mail exchanger, no rule that
we can write and hope to "enforce" can prevent it from saying "I'm
ultimately acting as the agent of the 7-bit-only mail server whose name
is in the RCPT TO address; if you tried to open an 8bit connection to
it, it would reject; so I am going to do so on its behalf". We can try
(possibly without success) to force that behavior on the intermediate,
but trying to prohibit it is hopeless.
I don't want to prohibit it. I explicitly want to allow it.
Interestingly enough, in the direct virtual connection between
originator and recipient MTA case, it doesn't make any difference
whether the recipient machine understands RFC-XXXX (or any other
extended message format): if it can't accept 8bit connections and hasn't
implemented the SMTP extensions, it is going to generate a fatal error
code in SMTP negotiations and that is the end of the story. Finis.
A problem the originator--typically, in the real world, the originating
*user*, not some agent--is going to need to deal with.
That is all reality. It is not a very nice reality, but it is,
again, independent of any diddling we do with the protocols. And I
suggest that, if something cannot be delivered and presented, no matter
what, it doesn't make a lot of difference where in the system it gets
stopped. I am not trying to build a case or model for system X knowing
the capabilities available to user A on system Y. I am only suggesting
that, if it does know, no amount of RFC-writing is going to prevent it
from taking advantage of that information. As someone (maybe, you,
Mark) pointed out some days back, sending (or forwarding) mail that you
know is going to be undeliverable is pointless, if not plain dumb.
OK, so what is all the fuss about? It is about one major case,
perhaps a second.
(1c) The intermediary decides to accept the message, but to convert it
into a network-acceptable 7bit form to pass it on. The more complex we
make the rules about the conversion it is expected to make, the more
likely it is to just reject instead. The more complex we make the rules
about what the final delivery system (I make no MTA-UA distinction here)
is expected to decode in this circumstance, the more likely *it* is to
just reject instead. This has the great potential for being, not a
zero-sum game between MTA and UA responsibilities, but an "everyone
loses and the message ends up back in the hands of the user who was
naive enough to send it" story. Probably there is a way out, but I
haven't seen it yet.
I wish it was this simple. The problem is that the choices we make here
have profound impact elsewhere. What's the use of adopting a minimal
solution here when the result is that you force the zero-sum game onto the
UAs of the world?
That's why I favor making the encoding the MTA must use slightly more
complex (and I have argued elsewhere that the increase in complexity is
not as large as you think it is) so we can save immense trouble elsewhere.
I agree with all your goals of rigidly defining this whole process, but I
think something of a compromise here is essential.
[Aside 3: as Bob Smart has regularly pointed out, destination sites
can mostly avoid case 1c from arising by an intelligent choice of MX
hosts and preferences. It might only arise if a user explicitly
(source, percent, bang, or otherwise) routes something to a target host
via a pathological path or intermediary. Such users deserve whatever
they get. ]
I agree with Bob on this. Of course, I have to consider that I only see the
pathologies -- from my point of view, making sure that they can operate
properly is well worth my time.
Now, if Mark, for example, decides to not implement nested decodings
(we can't make him, and probably don't want to try) then this mail is
going to bounce. If the intermediary doesn't think the deep format
understanding, rather than whole-message encapsulation, is worth the
trouble, and we require the deep format understanding, then *it* may
reject the message.
I repeat that the difference here is not as large as you may think it is.
So, regardless of what we do, if there is going to be 8bit transport
around, some messages are going to be rejected and/or bounce sometimes.
And, because they could always find themselves directly connected to an
"old" or "7bit only" host, every originating 8bit SMTP must be able to
deal with rejections of 8bit transport. I wonder how much energy cases
2b and maybe 1c are worth? Are people willing to trade them for this
whole enterprise, if it comes to that?
This is the argument that led to the expansion of goals at the Atlanta
meeting. I don't have an opinion on this since I'll support the stuff
either way, because I need RFC-XXXX and I need it to work properly for me.
Ned asks two questions (or I'm inferring two from his remarks) which
I'd like to try to address, since I am feeling slightly less disgusted
(1) Are all of the problems in the SMTP extensions?
Well, first of all, the problems are in the 8->7 conversion issues.
There may be some additional problems in "binary" handling. They are not
in the 8bit transport of text, per se, at all.
Lots of the problems are in the conversion issues, but not all.
RFC-XXXX raises, and has not yet addressed, issues all by itself having
to do with, e.g., un-presentability of messages and message parts and
what should be done about it.
RFC-XXXX does not address this because it is not its place to address it.
First, I think there's much practical experience needed before we can
figure out in full what is reasonable. There are precedents for this, of
course. X.400 allows return of message contents, or non-return of contents,
and chose to let implementations decide if this is a feature to implement
or not (I won't bore everyone with this discussion; it is not relevant here).
The point is that it is possible, and even desirable, to design and
implement some things without working out all the end-to-end ramifications.
If there are ramifications other than at the source and destination, we need to
deal with them now. But we're far too experience-poor to start designing the
details of the UA of the future right now. And we'll never get the experience
we need to do such a design unless we reach closure here on these issues.
If we take the attitude that network
responsibility ends when the bits arrive at the UA, then we immediately
escalate the need for delivery-to-presentation-agent acknowledgement,
since we can no longer make the claim that the network is sufficiently
reliable that silence (no rejection) is an adequate indication of
delivery to a place from which the user can read it.
But all of these areas are outside of our present scope, and need additional
study and work. The pleasing thing about RFC-XXXX is that it opens up
all these concerns -- I view this as a challenge, not as a liability.
It also contains forms and definitions that are just fine as a model
for how to extend things beyond text, but are really very experimental.
If I were the standardizing authority around here, I'd argue that
RFC-XXXX has three parts:
I agree with this analysis of what's there. The point, however, is that we need
to at least define the placeholders for future work now, so that intelligent
decisions can be made about where things may need expansion in the future. And,
of course, we need enough stuff in the standard to generate some non-trivial
operational systems (an Internet requirement that I happen to like).
Now, if we did things that way, we could design the UAs and the 8->7
mechanisms with a clear idea of what we were handling as well as some
insight into what might be coming. The people who care most about 8bit
transport and conversion could implement and experiment with converters
for the text and multipart-text cases. By the time high-resolution
moving pictures with sound emerged from "experimental", we might have
enough of an idea about how widely 8bit transport was going to be
available to write either "you must transport this over 8bit" or "you
must transport this only over 7bit" rules for those forms, which would
simplify things considerably.
This is exactly the point. And I think this is exactly what RFC-XXXX does.
Are you seriously going to claim that the fact that you can specify extra
headers in messages (right now at the outermost level, later in inner levels)
is a "scoping gap you can drive a truck through"? Sure, you can specify
I don't know what Mark was trying to claim, but I would suggest that
the soft underbelly of RFC-XXXX is 822 itself, which is just too
permissive about how extensions can go in. You say that "Even X.400
has a mechanism for specification of optional stuff...", which is
exactly correct. But, while X.400 "has a mechanism", 822 has proven, in
practice, to have a declaration of "open season".
I think, in the final analysis, that something that has only limited
extensibility is the same as being a little pregnant. Something either is
extensible or it is not.
Look at SMTP. You're going to close the hole that prevented extensions in SMTP
now -- you're going to declare implementations that close the connection on an
unimplemented command broken. This is fine, but it now means that SMTP is just
as extensible as RFC822! I can implement a new command, NED, which, if
accepted, means I'm going to send X.400 1984 P2 bodyparts directly across
the SMTP channel. But there's some other Ned somewhere else that implements his
own NED command, and it transmits X.400 1988 P2 bodyparts directly. Whoops --
big trouble -- the formats are similar enough that peculiar things can now
happen (actually, this is not possible, at least not as I've stated it, but I
think the point is made).
X.400 has similar extensibility, and all the problems that go along with it.
The big mistake RFC822 made, in my opinion, is not in the RFC at all, it is the
fact that extensions were ignored and not documented until now. This is what
has led to the present state of affairs, not the fact that it was extensible.
Let's not make the same mistake with RFC821, OK? Provide a mechanism for
registering extensions that anyone can use, and declare that the ONLY wrong
thing you can do is use an extension without registering it. (The TICK and
VERB commands used by BITNET BSMTP better be the next thing registered, right
after the SMTP extensions are done.)
You can't possibly know that a message format is RFC-XXXX unless you
change the envelope and negotiate that. Otherwise, it is just
heuristics, statements about likelihood, and, ultimately, a leap of
faith. In retrospect, 822 should have required registration of every
field name that didn't start with "X-". It didn't, and we have no way
to guess what is out there.
It depends on what you mean by registration. If you mean anyone can take out a
claim to a header without having to fight through this process, then fine. But
a cure that meant fighting through this process for each header would have been
much, much worse than the present state of affairs.
DEC is the only organization I know of that got this right. System logical
names on VMS are a system-wide resource that must be managed. More to the
point, the possibility of conflicts must be eliminated. DEC set up a
registry -- you just call up and say what you're registering and why, and
you've got it. Nobody else can use your unique prefix; it is yours. The
Internet needs only the addition of an (optional) description of what the
field, or command, or whatever, should be used for.
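The mechanics of such a registry are trivial, which is rather the point. A toy
sketch of the first-come-first-served scheme just described (the names, owners,
and descriptions here are all invented):

```python
# A first-come-first-served prefix registry: you claim a unique name,
# optionally say what it is for, and the registry's only job is to
# refuse duplicates. No approval process, no fight.

class PrefixRegistry:
    def __init__(self):
        self._claims = {}  # prefix -> (owner, description)

    def register(self, prefix: str, owner: str, description: str = "") -> bool:
        key = prefix.upper()
        if key in self._claims:
            return False               # already taken: conflict avoided
        self._claims[key] = (owner, description)
        return True

    def lookup(self, prefix: str):
        return self._claims.get(prefix.upper())

registry = PrefixRegistry()
# The first Ned claims the NED command...
print(registry.register("NED", "Ned", "X.400 1984 P2 bodyparts"))   # True
# ...and the other Ned's conflicting claim is caught up front.
print(registry.register("NED", "other-Ned", "X.400 1988 P2 bodyparts"))  # False
```

This is exactly the conflict the two hypothetical NED commands above would
otherwise have produced on the wire.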
Without them you end up in the mess SMTP is facing now. Be
grateful for the things that are good about RFC822 here!
Interesting observation. I find 821, both in theory and in practice,
much more tightly defined than 822.
Heh. I disagree totally, but it is not totally RFC821's fault. RFC822 describes
a static entity -- a message. It has lots of open-ended stuff that has been
used (and abused). But RFC822 is dealing with a fundamentally simple thing --
it describes a class of objects that just sit there and implicitly gives you
rules for knowing whether a given instance of an object is in the class or not.
This is pretty easy stuff, when all is said and done. The hard part is making
the object format palatable yet extensible, and I happen to think RFC822 is a
major win in this regard. (And X.400 is such a major lose that words fail me.)
But RFC821 has a more difficult problem. It describes a dynamic situation: two
hosts interoperating and tracking state. And I claim that it does a really
lousy job of fulfilling what is needed in this area. The states of the agents
are not defined, for one thing, and, more importantly, the state transitions
that occur as actions and responses happen are totally undefined.
Example: One of the SMTP agents that gateways to microcomputer e-mail has an
interesting problem. When a message goes MAIL FROM, RCPT TO, RCPT TO, etc.,
only the LAST RCPT TO matters as far as being able to send the message goes. If
the first address is good, second address bad, the DATA command will fail,
saying no valid recipients were given. Now, this gateway is busted, in my
opinion. Try backing it up with RFC821, however. You simply cannot do it -- the
behavior is LEGAL, as far as I can tell. The HR (Host Requirements) did not
fix this one either -- it is beyond a precise HR specification to fix. The
states need to be
documented and the transitions between states need to be specified.
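To spell out the gateway bug: the difference is a one-line matter of what state
you keep between RCPT TO and DATA. A sketch, with invented addresses, of the
broken behavior next to the correct one:

```python
# Contrast of recipient-state tracking across a MAIL FROM / RCPT TO /
# RCPT TO / DATA exchange. Addresses and accept-list are made up.

GOOD = {"a@example.com"}  # addresses this hypothetical server accepts

def broken_data_ok(rcpts):
    """The busted gateway: remembers only the result of the LAST RCPT TO."""
    last_ok = False
    for addr in rcpts:
        last_ok = addr in GOOD     # each result overwrites the previous one
    return last_ok

def correct_data_ok(rcpts):
    """Correct behavior: DATA succeeds if ANY recipient was accepted."""
    accepted = [a for a in rcpts if a in GOOD]
    return len(accepted) > 0

rcpts = ["a@example.com", "b@invalid.example"]  # good first, then bad
print(broken_data_ok(rcpts))   # False: rejects DATA despite a valid recipient
print(correct_data_ok(rcpts))  # True
```

RFC821 never says which of these two state machines you are required to
implement, which is precisely the complaint.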
This is why I think RFC822 is a superior document. Not because it is so much
better, but because the job it undertakes is so much easier. Actually, the
RFCs are about the same in quality, in my opinion. It is just that RFC821
needs to be MUCH higher quality than RFC822 overall, and it isn't.
When I go around the network
bouncing EMAL verbs off servers they all reject it, neatly and
consistently. No exceptions so far.
Tsk tsk John. This is not an effective way of convincing me, or anyone else,
that RFC821 is a good specification. Far from it -- the fact that you're doing
this convinces me that it is _not_ an adequate specification of the protocol.
Let me tell you why I think you're finding what you're finding, from the
perspective of someone who has actually written an SMTP server from scratch.
The reason my implementation does not die on an unrecognized command is that
there's an example in the specification of what to do with an unrecognized
command. Section 4.2.1, specifically, mentions the error code:
500 Syntax error, command unrecognized
This implies that an error should be issued and the session should continue.
Mind you, there's no text or specification to back this up, so an
implementation can be compliant and not do what we now consider to be the right
thing. But when you write a server, you come to the place at the bottom of the
command match loop, switch statement, or whatever, and you have to do
something. RFC821 does not say what, but you see that error, and that's what
you do.
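That bottom-of-the-loop decision looks roughly like this in any
implementation. A sketch only -- the command set is abbreviated and the real
handling is elided:

```python
# The command-dispatch decision every SMTP server writer faces: an
# unrecognized verb gets a 500 reply and the session continues, rather
# than the connection being dropped.

KNOWN_COMMANDS = {"HELO", "MAIL", "RCPT", "DATA", "RSET", "NOOP", "QUIT"}

def dispatch(line: str) -> str:
    words = line.split()
    verb = words[0].upper() if words else ""
    if verb not in KNOWN_COMMANDS:
        # The reply code RFC821 section 4.2.1 lists; continuing the
        # session afterwards is the behavior implementors converged on.
        return "500 Syntax error, command unrecognized"
    return "250 OK"  # real per-command handling elided

print(dispatch("EMAL someone"))  # 500 Syntax error, command unrecognized
print(dispatch("NOOP"))          # 250 OK
```

Nothing in the specification text forces the "return 500 and keep going"
branch; the example reply code is what makes everyone land there.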
With 822, we have fields floating
around, none of them defined in standard or standards-track RFCs for
such things as ICBM, Phone, Organization (is that a business, or the
internal structure of the data?), Content-type, Errors-to, Warnings-to,
Fax (the number of your machine, or the format it uses?), Character-set,
Code,... All valid extensions as 822 is written; a few of them threats
to either RFC-XXXX or general email interoperability.
Sure, all this stuff is floating around. Is it a threat to interoperability?
Not really. Most of it is stuff that is basically regarded as glorified
comments -- what's on the Organization: line simply does not matter -- it is
purely informational.
Some headers are more "active", if you can call it that. Content-Type, for
example. But there's an RFC for this one; RFC-XXXX supersedes it in a fashion
that is designed to not cause conflicts. Errors-To and Warnings-To are used in
an active manner as well; so is Return-Receipt-To. But do they cause problems?
I doubt it, because the headers that are used actively tend to be used
consistently. They're protected by the same thing that has protected the
notion that an illegal SMTP command will generate an error and not shut the
connection
down -- implementors are, for the most part, reasonable, and want things to
interoperate in a reasonable way.
If your faith in this whole evolutionary process allows you to cheer the
consistency of SMTP implementations for "doing the right thing" even in the
absence of clear standards for something that *really* should have been
specified, yet allows you to conclude that the message format side of things is
likely to be totally inconsistent when faced with a similar challenge, well, I
think you're being very unfair. In fact, you're being worse than unfair, you're
being inconsistent -- given your lack of trust of MTAs, I don't see how you
can have such faith in the SMTP side of things.
The only "mess" SMTP is facing now is an attempt to make an
extension/change and to make it both forward- (no problem) and backward-
compatible (old systems have to see what they expect, even if newer ones
send out the new stuff). We've got backward interoperability without
problems: old clients can send to new servers, new clients can send to
old servers, new-type messages are going to bounce neatly and without
causing network or transport problems. But backward compatibility, in
which new clients can send new stuff to old servers and have the latter
receive it as old stuff, is a *very* tough criterion, one which we
rarely expect of anything.
Well, you can expect RFC822 message formats to work this way already ;-)
Seriously, you are really being very inconsistent here. You criticize RFC822
for being too extensible, too flexible, yet you bemoan the major problem SMTP
is now having with being extended.
There is a MAJOR difference between specification of some nonsensical header
that will be ignored and the introduction of headers that change the way
messages are broken down and parsed. RFC-XXXX is closed in regard to the
latter; it is open in regard to the former. The latter presents serious
gaps; the former is a harmless detail.
But 822 is not closed to any of this, and the very 822 properties
that permit RFC-XXXX to come along and say "if you are going to read
this particular message, then you need to understand these header
fields in this way" permits another RFC, or a random implementor, to do the
same.
It all hinges on how you define "read". By my definition of "read", I can deal
with anything RFC822 can give me, and as long as implementors (including us
RFC-XXXX folks) stick to the ground rules, I can "read" anything produced in
the future with old equipment. This is a really beautiful feature. Now, this is
because I view reading as simply being able to get all those nice bits without
having someone muck them up (or, more accurately, not muck with them except in
very controlled ways, some of which have crept in and now must be dealt with).
Now, if you mean "can I grok it" when you say "read", the answer is no, there
is no closure. There never will be. There used to be a group of students around
here who used a synthetic language (called, I believe, Kul). Now, as it
happens, a couple of them never seemed to be able to address mail properly, and
as a result of being postmaster I used to get a fair amount of their mail. I
usually read the subject and the first couple of lines of such messages to see
if I can figure out who it is for. I must confess that it all looked like
jabber to me; I read French well, Italian haltingly, and have dictionaries for
a dozen or so other languages right at hand, but this stuff was really
peculiar. In some cases I had no idea who it was for, and I _never_ deliver
mail to people unless I'm sure it is theirs (the reasons are obvious, I hope).
I indulge in this little parable simply to point out that full closure is a
practical impossibility. We can assign a Language: Kul header to this stuff,
but that won't translate it for me! A user can encode a bunch of stuff using
the encoder of their choice and ram it into the text part of a message now;
this is not going to change. Given this ENORMOUS hole in "reading" messages, I
see any attempt to structure things so that I have at least a tiny chance of
being able to decode it as an improvement.
Yes, this is mostly theoretical rather than real, or at least
I hope it is, but I think it is quite dangerous to make robustness or
"safe extensions" claims for RFC-XXXX without doing something about the
If I make the claim, you can always refute it with evidence of substantial
interoperability problems. But I think this all really boils down to a question
of what you mean by operability, or readability, or whatever. By my loose
definition I don't think there's anything in RFC-XXXX that is dangerous. I can
decode the base64 encodings by hand if necessary! By your tighter definition I
think the cat got out long ago -- something to do with the Tower of Babel, I
believe.
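And decoding base64 by hand really is mechanical, because the alphabet and the
arithmetic are fully specified. A sketch of the core step, shown here without
the padding bookkeeping a complete decoder needs:

```python
# Hand-decoding base64: map each character to its 6-bit value, then
# regroup the bit string into 8-bit bytes. Illustration only; the
# stdlib base64 module is what you would actually use.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def decode_base64(data: str) -> bytes:
    data = data.rstrip("=")  # trailing '=' is padding, carries no bits
    bits = "".join(format(ALPHABET.index(c), "06b") for c in data)
    # every full group of 8 bits is one output byte; leftover bits
    # (fewer than 8) are padding and are discarded
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

print(decode_base64("aGVsbG8="))  # b'hello'
```

Tedious with pencil and paper, certainly, but entirely doable -- which is the
point about the "loose" definition of readability.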
I will also point out (before Mark does) that nested encodings are a SERIOUS
threat to interoperability. If you want to eliminate the biggest hole in
RFC-XXXX that *will* hamper interoperability, you need to plug the nested
encoding hole. And once again this brings us face to face with my difficulty
with your position -- in order to simplify the MTAs you distrust, you're
willing to admit a capability that may well ruin things elsewhere. This is a
"baby with the bathwater" position from my point of view.