On Thu, Mar 28, 2002 at 03:31:05PM -0500, John Stracke wrote:
> > John Stracke wrote:
> > > And the authors do caution that their numbers are blind to the quality
> > > of the RFCs. Their point, though, is that looking at the easy metrics
> > > is better than not measuring anything at all;
> > Wrong information is worse than no information. If the results don't
> > mean anything,
> They don't mean *much*, but I wouldn't say they mean *nothing*.
> > why measure?
> As a research effort. The current draft admits that the results are not
> directly useful. But we'll never get techniques that do give useful
> results unless somebody starts trying.
An interesting problem with measuring is that you tend to get what you
measure for. If the purpose of the IETF is to push out large quantities of
RFCs, then this is of course a great metric. If the purpose of the IETF is
to push out a small number of high-quality drafts, this is not the metric
to use.
I am reminded that early in my career I was in a company that was driven by
the KLOC metric. They had determined that the product would have 150-ish
KLOC in it, and so had every programmer report the number of KLOC they had
contributed that week.

One week I was looking through the code I had inherited and realized that I
had two copies of a set of utilities that did the same thing. I spent a day
or two removing one set and porting that half of the code to use the other
set of utilities (basically, I had inherited two developers' code). Well, my
KLOC for the week was somewhere in the -10 range, and it was a month before
I started going positive again. My reviews sucked, but it was the right
thing to do.
Be careful what you measure, because that is the behaviour you will get.
Bill