On Fri Mar 24 17:47:04 2006, Keith Moore wrote:
I think that Dave's message reflects a common frustration in IETF
that we talk a lot about particular problems and never seem to do
anything about them. When people express that frustration, they
often seem to think that the solution to this frustration is to do
something rather than just talk about it. In other words, they prefer
experimentation to analysis. I share the frustration, but have some
doubts about the solution.
What you're saying, I've also heard people complain about in reverse.
In other words, there are working groups where a substantial number
of people involved in the discussion are not only not going to be
implementing the proposals, but don't actually do any kind of
implementation within the "sphere" - we're talking about people
discussing the precise semantics of some HTTP extension who aren't
involved in doing any webserver related programming, or some people
discussing an email issue who limit their interaction with email to
having an email address.
I don't have a problem with that. IMHO we tend to design
with too little regard for the needs of end users, and we need more
input from knowledgeable users, rather than less.
Or, if you prefer, people are talking and not doing the "running
code" bit.
It may be that we place too much emphasis on running code in IETF today.
In ARPAnet days, when the user community was small and homogeneous but
platforms were diverse (different word/character sizes, different
character sets, different limitations of operating systems and
networking hardware), and goals for protocols were modest, merely being
able to implement a protocol across different platforms was one of the
biggest barriers to adoption. In that environment, being able to
demonstrate running code on multiple platforms was nearly sufficient to
demonstrate the viability of a protocol. Besides, since the net was
small, it wasn't terribly hard to make changes should they be found to
be necessary.
These days running code serves as proof-of-concept and also as a way to
validate the specification. It doesn't say anything about the quality
of the design - not efficiency, nor usability, nor scalability, nor
security, etc.
What really bothers me is the apparent popularity of a mindset, in a
group of people that claims to be doing engineering, that we
should just try something without really thinking about it, and
without a good way to evaluate the experiment objectively.
Now, wait - I agree up to a point.
Yes, we need to carefully analyze what we're doing, because
experimentation won't easily show whether a proposed solution will
actually scale to the level we need, be secure enough, and be
flexible enough to cope with future demands that we've not thought
of. This much is, hopefully, not up for debate.
But there's a really simple experiment that's easy to do, and results
in a useful, concrete result. The hypothesis to test is "does it
actually work", the experiment is "suck it and see", and the result
is, one hopes, "yeah, I did this", with an optional "but this bit was
tricky" that we can feed back into the design process.
Unless that experiment is done, we aren't engineers, we're
philosophers.
I agree that those kinds of experiments can be quite valuable, though
I'm having a hard time remembering when such an experiment was
indicated in an IETF WG that I've been involved in.
I have seen several kinds of experiments of the form "let's see what
happens if we do this nonstandard thing with SMTP - will existing
servers handle it?" and I've generally regarded those experiments as
invalid because they tend to lack any analysis of the sample space or
any attempt to get a representative sample. They can prove that
something doesn't work, but rarely can they demonstrate that something
does work reliably in the wild. (OTOH if you know reliably that there
are only a few implementations of a protocol, such experiments might be
more valuable.)
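Keith's objection is about sampling, not about the mechanics of the probe itself: checking what one server advertises is trivial, but it says nothing about the population of servers in the wild. As an illustrative sketch (the reply string and server name below are hypothetical, and a real probe would open a connection with `smtplib` rather than parse a canned string), the per-server check might look like:

```python
def advertises_extension(ehlo_reply: str, keyword: str) -> bool:
    """Report whether a multi-line EHLO reply (RFC 5321 format)
    advertises the given SMTP extension keyword."""
    for line in ehlo_reply.splitlines():
        # Continuation lines look like "250-KEYWORD args";
        # the final line looks like "250 KEYWORD args".
        if len(line) > 4 and line[:3] == "250" and line[3] in "- ":
            if line[4:].split()[0].upper() == keyword.upper():
                return True
    return False

# A reply from one hypothetical server:
reply = ("250-mail.example.org\n"
         "250-PIPELINING\n"
         "250-8BITMIME\n"
         "250 SIZE 10485760")
print(advertises_extension(reply, "8BITMIME"))  # True for this server
print(advertises_extension(reply, "CHUNKING"))  # False here - but that
                                                # proves nothing about
                                                # other servers
```

The code is the easy part; the hard part, per the point above, is choosing a representative sample of servers so the aggregate result means something.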
The fundamental assumption of engineering is that you can make
better (more effective, reliable, and cost-effective) solutions to
problems if you (a) first understand what problem you are trying
to solve, and (b) analyze your proposed solutions (and choose
and/or refine them based on analysis) before building them.
We're lucky, because we work in computers, so we can actually make a
distinction between "building" and "deploying". Exchanging the word
"building" in this portion of your message for "deploying" makes me
happier with what it says. Change "analyze" for "building", and I'm
in agreement.
I should have said "analyze...before deploying". I also believe in
building prototypes and reference implementations, but that's not a
substitute for analysis.
Keith
_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf