Greg Skinner wrote:
. . .
> I don't feel comfortable with the notion that the work of a WG should be
> judged according to adoption of its protocols, particularly in terms of
> traffic generated. All protocols are not equal; some have limited
> utility by design, as they serve a limited community.
Hmmm, sounds a tad like justifying failure to me.
I wasn't trying to imply that a protocol that consumed *more* bandwidth
was better than another intended to solve the same problem that consumed
*less* bandwidth. I was trying to suggest that if two groups were trying
to solve, say, the Music Service problem, and the IETF team has
developed a protocol called "ElDapo", and another group has developed a
protocol they call "the protocol formerly known as 500" (pronounced
<make funny sign in the air with your hand>), then the fact that ElDapo
handled a larger percentage of the queries being made for Britney Spears
songs than <make funny sign in the air with your hand> would suggest
that ElDapo is the more successful protocol. We don't want ElDapo to
swamp our network, but we want the IETF-developed protocol to be the one
solving the problem, no? If others are providing the solutions then it
is perhaps not presumptuous to conclude that the IETF is not being as
effective in its stated mission as the other team, no?
> ... I also don't think
> that time spent in pre-use is necessarily overhead; this diminishes the
> value of producing clear documentation. If it takes longer than some
> might desire for RFCs to be published, but the overall clarity of the RFCs
> is improved (regardless of their utility), I think that's time well spent.
Yes, but I presume that the metric to measure success is not whether we,
as engineers, all give the documentation a 6.0 for both presentation and
technical merit. Effective communication is perhaps a *necessary*
condition to make deployment possible but it is not a *sufficient*
condition to judge success of the exercise. Getting bits over a wire is
the problem. My metric would be to judge the percentage of those bits
that travel using IETF-defined protocols. Everything else seems to be
context, not core, as the business analysts like to call it...
So, if people agree that traffic measurements have value as a metric,
then presumably the first derivative of traffic volume over time is also
a reasonable indicator of the future takeup rate for new protocols.
Measuring some of the other things being discussed here (RFC counts,
engineer-hours spent in meetings, messages to a mailing list, count of
pastries consumed) would all seem to me to be measuring overhead
activities, not core to the organization. This is not a bad thing to
understand, but would not seem to be the most important metric for our purposes.
> But past performance is not always an indicator of future performance.
> Take the first couple of years of HTTP traffic, for example.
Errr, okay. As I recall, HTTP exploded out of the gate and doubled
repeatedly until it was noticed as a major consumer of bandwidth. It
took a while for that doubling to get to Darwinianly significant byte
counts, but as I recall it was regarded as a successful protocol almost
from day one. Maybe your point is just that it took a couple of years
before it swamped the competition and reigned supreme in its class?
Even if it takes a number of years for a piece of work to find its
niche, I don't see that this invalidates my point, which is that if
you're going to measure something about the IETF, you should focus on
protocol takeup compared to other people's protocols, since that's
presumably what we want to see happen once the work is done. That's what
I meant about the first derivative - measuring the rate of change of
traffic for a protocol is useful in the early days, measuring the
percentage of traffic for a protocol compared to similar protocols is
useful in the steady state for mature protocols.
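The two metrics above can be sketched numerically. A minimal illustration (all numbers are hypothetical, and "TPFKA500" is just my shorthand for the hand-sign protocol):

```python
# Hypothetical monthly traffic samples (GB) for a young protocol.
monthly_traffic = [10, 22, 45, 95, 180, 390]

# Early days: approximate the first derivative as the month-over-month
# growth factor -- is takeup accelerating?
growth_rate = [b / a for a, b in zip(monthly_traffic, monthly_traffic[1:])]

# Steady state: percentage of traffic versus competing protocols
# solving the same problem.
traffic_by_protocol = {"ElDapo": 390, "TPFKA500": 60, "other": 50}
total = sum(traffic_by_protocol.values())
share = {proto: vol / total for proto, vol in traffic_by_protocol.items()}

print(round(growth_rate[-1], 2))   # latest growth factor: 2.17
print(round(share["ElDapo"], 2))   # traffic share: 0.78
```

The point of the split is that a raw share number is meaningless for a protocol six months out of the gate, while a growth factor is noise once the market has settled.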
I recall having conversations like this with the OSI guys working on
X.500 in the very early '90s. They kept telling us stuff like "we're not
ready for users yet", and "we need to finish the engineering before we
can expect significant deployment" and so on. They had what seemed like
huge lists of documents that they were taking through their working
groups, iterating and reiterating through meeting after meeting.
Meanwhile, the world just passed them by. So excuse me if I get a little
cynical in my old age when people want to downplay the significance of real-world deployment.
I understand the value of stopping bad ideas. I understand that
sometimes *nothing* actually is better than *something*, but frankly to
me the bottom line measurement of success has to be "is it being used to
solve a problem?" If not, why bother?
. . .
>> As a simple test, if we were to find that the percentage of traffic on
>> the net using IETF developed/endorsed protocols turns out to be falling,
>> it would imply that the organization's influence is waning, which would
>> be something we might want to investigate.
> This need not necessarily be considered a failure of the IETF. It might
> be an indication of the maturity of the IETF, in that other standards
> bodies/companies/users can use IETF protocols/services/BCPs as a foundation
> for whatever it is they're trying to do.
Your mileage may vary, etc., but if people are taking the IETF work and
not growing it in the IETF, I personally conclude that the IETF is
failing to provide a suitable home for new ideas. It's supposed to be
*the* place where open standards protocols are developed in a
vendor-neutral, intellectually honest forum. If people find they can't
get their work done here, and elect to do the work elsewhere, sounds
like failure of *something* to me. Of course, it *does* solve the
overcrowding problem, so if you want to measure success by the ability
to get a cookie in the corridor, this would be a good thing... ;-)
Peter Deutsch peterd(_at_)gydig(_dot_)com
"This, my friend, is a pint."
"It comes in pints?!? I'm getting one!!"
- Lord of the Rings