We have a way to count DISCUSS positions, but we do not have a way to
figure out what percentage of them are perceived as "late surprises"
by the community. So, while we are taking action in an attempt to
make things better, we do not have a way to measure our success or
failure beyond community perception. Suggestions on making this more
objective and less subjective are greatly appreciated.
Russ
I agree that it might be hard to measure the effect of particular actions
(e.g. changes in procedures). However, it is *not* hard to measure overall
trends in performance, and to break those trends down by area and by type
of document.

My understanding is that the Simcoe dataset continues beyond 2001, and that
with relatively modest effort a detailed analysis could be produced
quantifying how the IETF has performed. That would give us a window into
what the actual results have been.
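
For what it's worth, a first pass at that kind of breakdown could be quite
simple. The sketch below (Python/pandas) is purely illustrative, not the
actual Simcoe analysis: it assumes a hypothetical CSV export of the dataset,
and the column names (year, area, doc_type, days_in_iesg_review) and file
name are made up for the example.

    import pandas as pd

    def summarize_trends(csv_path):
        # Median days in IESG review per year, broken down by area
        # and document type. All column names here are assumptions
        # about a hypothetical export, not the dataset's real schema.
        df = pd.read_csv(csv_path)
        return (df.groupby(["year", "area", "doc_type"])["days_in_iesg_review"]
                  .median()
                  .unstack("year"))  # one column per year, so trends read left to right

    print(summarize_trends("simcoe_export.csv"))  # hypothetical file name

Obviously the real field names and the right metric (time to publication,
number of IESG round trips, or something else) would come from the dataset
itself; the point is only that the tabulation is a modest effort.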