I don't quite understand why the X- headers or other similar X-
stuff must be successful.
Think for a moment about what you're saying. You're saying that it
doesn't matter if a particular feature of a specification has caused
interoperability problems in other contexts and is known to have
caused severe interoperability problems in the present context.
Misuse of any protocol feature can cause interoperability problems.
That is at least as true for X- fields (which IMHO should never have
been put in shipping products except perhaps as write-only fields) as
for non X- fields (which should never be in shipping products without
being well defined and publicly vetted).
The burden for those who want X- fields to be treated just as any other
field is to convincingly argue that there would be greater overall
interoperability by doing that. Making it easier for people to follow
the rules doesn't help if those rules don't encourage interoperability.
I'm sorry, but the entire point of this exercise is to produce
interoperable standards. The question was whether or not X- fields
have succeeded in not causing interoperability issues in other
contexts. And the answer is that in the cases where they have seen
active use they have NOT succeeded in this regard.
A better question is whether treating X- fields differently (in that
they cannot be registered) has improved the overall level of
interoperability over what it would have been had they not been treated
differently. I don't think we can answer this with a great deal of
confidence, since we don't have a control. At one time it was widely
believed that providing a special, reserved space of protocol fields
would discourage pollution of the normal space. Personally I'm glad
that we encouraged people to use X- because I think that if those fields
didn't use X- there would be a lot more bogus and poorly-defined
fields out there and it would be more difficult to distinguish those
fields from fields that should be implemented than it is now. Which is
not to say that there aren't bogus non-X- fields out there, just that
X- helps identify _some_ of the bogus fields.
Most of the X- fields that have been deployed shouldn't have been
deployed in shipping product with or without the X-. Furthermore,
there has never been a prohibition on defining new fields without X-.
So I have a hard time understanding how writing new RFCs that change the
rules for X- helps interoperability, when the real issue is the failure
to define the fields in a way that allows them to interoperate. More
generally, when the problem is due to the failure to read existing RFCs,
I fail to see how writing new RFCs is going to help.
(Granted, it's easier to get people to follow rules that say "anything
goes", because they don't have to know the rules to follow them. But
while "anything goes" might result in more widespread compliance, it
doesn't necessarily result in greater interoperability.)
Aren't these simply meant for 'private', experimental
and test use? If I want to test out a new idea, if I want _my_
mailing system A and _my_ mailing system B and _my friends_ mailing
system C to do something special, if I want to flag certain messages
for special handling in _my_ environment, then I'll use X- headers.
All this may be highly successful for me, but not for the world at large.
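The kind of private, environment-local flagging described above can be sketched with Python's standard email library. The field name here is hypothetical and meaningful only inside one's own systems, which is exactly the point:

```python
# Sketch: flagging a message for special handling within a private
# environment using a custom X- header. The field name is made up;
# nothing outside this environment is expected to understand it.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "a@example.org"
msg["To"] = "b@example.org"
msg["Subject"] = "test"
# Hypothetical private field for "my" mailing systems A, B, and C:
msg["X-My-Special-Handling"] = "route-via-system-b"
msg.set_content("hello")

print(msg["X-My-Special-Handling"])  # -> route-via-system-b
```

Systems A, B, and C can agree on this field among themselves; any other system will (correctly) ignore it.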
The problem is that often as not such experiments do not stay
confined, and instead deploy more widely.
But at least they're labelled as experimental (or, more precisely,
labelled as "user-defined" fields that cannot be registered).
Even though I believe X- fields should not be used in shipping products,
labelling an unvetted field with X- seems to be at least slightly more
responsible than not labelling it in that way.
If some header-based feature or interoperation should work on a
large scale, not just in the environment I control, then I will need
to define and describe a non-X- header for it.
Which doesn't happen in practice.
And it still won't happen if the treatment of X- fields is changed.
The X- prefix only gives me
assurance that my experimental or internal stuff is not clashing
with any currently defined, widely used header, now or in the
future. The X- header will not even guarantee that it will work
through externally controlled gateways, even though most of the time
it will.
No such guarantee is present in practice, since any number of
X- fields are now in wide use.
It is certainly true that some X- fields are in wide use. Or more
generally, some widely deployed implementations of mail tools are paying
attention to unregistered fields, some of which begin with X-.
Anybody who uses X- headers or tags and expects large scale
interoperation is simply naive. I can easily make a rule that in my
huge private garden cars are to drive backwards. If I want them to
do the same on public roads, well, good luck.
On the contrary, the person who is being naive here is you.
No, he's being realistic, and more folks need to recognize this.
More specifically, anyone who expects fields that aren't well defined
and publicly vetted to interoperate on a wide scale is being naive.
It might happen by accident, but not in general.
Use of X- fields is just a special case of the above.
The problem is expecting people to play by the rules when the rules
require them to behave in seriously inconvenient ways. We have ample
evidence this is not how things play out. The trick is to write rules
that people will actually follow.
Indeed. It seems that there's no getting around these two things:
1. You don't tend to get widespread interoperability without careful
design that takes into account a wide range of input (because conditions
vary widely and no single person or group appreciates the full breadth
of those conditions)
2. Getting that wide range of input (typically by public review and
vetting) is "seriously inconvenient" - at least, in comparison to
how much effort it takes to type
printf ("field-name: ...\r\n");
Frankly I don't think we're going to improve interoperability without
somehow encouraging more people to do things that are inherently
"seriously inconvenient". Maybe we can make them more convenient, but
it will never be anywhere nearly as easy as typing printf.
I like the comparison between X- and 192.168.0.x. I can create a
huge internal network with 192.168.0.x IP addresses, I can even
tunnel the network over the public internet. This can all work
perfectly well. But I can not expect it to work outside of my
private network, neither is it accessible from the internet as is,
nor is it guaranteed that it can work with another private network
using the same address range.
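The 192.168.0.x carve-out can be illustrated with Python's stdlib ipaddress module, which knows the RFC 1918 private ranges, much as the X- prefix carves out a "user-defined" name space:

```python
# Sketch: 192.168.0.x addresses are reserved as private (RFC 1918).
# They work inside a private network but are not expected to work
# on the public internet.
import ipaddress

addr = ipaddress.ip_address("192.168.0.7")
print(addr.is_private)  # True: fine for internal use
print(addr.is_global)   # False: not publicly routable
```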
First, it's not a valid comparison, since such addresses are unroutable
and hence unusable on a wide scale.
A better analogy would be to the new kinds of addressing being proposed
for IPv6, which _are_ globally unique (at least with high probability),
but _aren't_ publicly routable (since they aren't aggregatable),
and _can_ still be distinguished from publicly routable addresses.
(I call them GUPIs - globally-unique provider-independent addresses,
but this name hasn't caught on.) One of the arguments that have been
made in favor of deprecating site-locals in favor of GUPIs is that,
while both kinds of addresses would leak, the GUPIs would at least be
recognizable as "addresses that don't belong here" (say in routing
advertisements) and thus you can reliably distinguish them from
legitimate traffic and (at least for registered GUPI prefixes)
you might also be able to tell where the traffic is coming from.
But this is where the analogy falls apart: in the case of non-local
GUPIs we might want to filter them, while in the case of unknown
extension headers we might want to implement the feature rather than
filter it.
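The GUPI-like space discussed above roughly resembles what was later standardized as IPv6 unique local addresses (fc00::/7, RFC 4193): globally unique with high probability, yet recognizable as not publicly routable. A sketch using Python's ipaddress module, with a made-up ULA:

```python
# Sketch: a unique local IPv6 address (RFC 4193). If one of these leaks
# into a public routing advertisement, it is recognizable as an address
# that "doesn't belong here".
import ipaddress

ula = ipaddress.ip_address("fd12:3456:789a::1")  # hypothetical ULA
print(ula.is_private)  # True: distinguishable from public addresses
print(ula.is_global)   # False: not publicly routable
```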
I really think that the X- debate is a red herring. The reason we
seem to always be at an impasse on the X- debate is that we're
dancing around the real problem, which really has not much at all
to do with X-. The problem, I suspect, is something like this:
In the marketplace, vendors distinguish between each other on features,
and some features require protocol extensions. Standardizing those
protocol extensions before deployment defeats the vendor's purpose for
deploying those extensions, which was to gain some (however limited in
time) edge over a competitor. So it's not going to happen very often.
But when such features are deployed they may not interoperate well with
other vendors' products because of inadequate design and/or lack of a
well-written published specification, and they may actually degrade
interoperability of the email service in general.
So the question for those who want to promote interoperability is:
How do we encourage experimentation with, and deployment of, new
features in email without degrading interoperability?