--On Wednesday, 01 August, 2007 09:03 -0700 "David W. Hankins"
wrote:

>> and have both of those working seamlessly no later than Sunday
>> afternoon of the meeting.
>> If we can't do that, we should be very seriously reviewing our
>> protocols and specifications: that sort of thing shouldn't be,
>> in any sense, an experiment at this stage.
> Wow! Is that an "IETF First" for anyone else?
> Ever since my first IETF, I have been well aware that many folks
> held to the unfortunate fallacy that, "because we do X at IETF
> meetings and it works all right, it is therefore sufficient for
> the rest of the Internet." No matter how much people point
> out the error in this thinking, it is perpetuated...as
> recently as the IETF 69 tech plenary, where we were told that
> firewalls were becoming obsolete, evidenced by their lack of
> use at IETF meetings.
> There's only one word for it: Astounding.
> I have never, until now, heard the contrary fallacy attempted.
> That is, "because we did X at an IETF meeting and it did not
> work all right, it is therefore insufficient for the Internet."
> That's a new one on me.
I think you misunderstand my comment, or at least its intent.
When we hold a meeting, we make choices --explicitly or
implicitly-- about the protocols we are going to support on the
network for the meeting. We can reasonably decide that we do
(or do not) want to support IPv6, FTP, or the FooBar protocol.
Reasonable people may disagree with those choices.
However, whatever the choices are, if we cannot have a network
that runs seamlessly at an IETF meeting with the choices we
make, then there is a problem. That problem may be about
operations, choices of hardware or software, the protocols
involved or how they are specified, or something else. Whatever
the source of the problem, I believe we owe it to ourselves --
and to the community we are trying to convince to use our
standards-- to get it diagnosed, fixed, and reported on in
sufficient detail that neither we, nor anyone else, need to make
the mistakes again.
If even a small part of the problem turns out to be that we have
developed a protocol that is either sufficiently fragile, or
sufficiently complex and option-laden, or specified in such a way
that good-faith vendor implementations fail to interoperate with
very high reliability, then we have a problem that I believe we
are obligated to address. If we have somehow ended up with
protocols that will not run, in combination, in a satisfactory
way over a less-than-perfect network, then we ought to revisit
those protocols to see if they can be made more robust.
We claim to develop protocols for the public Internet, with all
of the strange and sometimes-locally-unpredictable behavior the
public Internet can exhibit. We often observe that it is much
easier to develop a protocol (or a product) for a
perfectly-predictable LAN environment in which packets never get
lost and always arrive in a timely fashion, in order and without
unexpected fragmentation; on which hostile attacks never occur;
and so on. It has traditionally been one of our norms that, if
protocols come along that work acceptably only on such LANs, we
either don't touch them in the IETF or carefully and publicly
document their limitations.
To me, this is what having a standards body that focuses on
interoperability and things that actually work in the real, and
nasty, world is all about. There are certainly bodies,
companies, and consortia out there that develop and aggressively
market protocols and products while using competing ones
internally because they know, at some level, that the products
or protocols they are pushing just don't work well enough. We
have historically not done that, and I hope we don't ever go
down that path.
This is also just another version of the "eat our own dogfood"
story: if we don't find the dogfood palatable --whether because
of its basic specification or its formulation or packaging in
practice-- then we need to do something about it.
> Clever, but wrong: networks much larger than 1,200 laptops use
> DHCPv4 on a daily basis all over the Internet without similar
> problems.
I know that. I've also got some hypotheses as to why we have
problems and they don't, but my hypotheses aren't backed by
solid data and analysis and hence aren't worth much. So do you
have an explanation for the repeated IETF problems? And, if
not, are you willing to join me in suggesting it is about time
the IETF gets to the bottom of these problems, gets the finger
pointed in an appropriate direction, and gets the problem or
problems fixed?
Or do you still think we disagree or that my comments are
astounding?
Ietf mailing list