hi Peter, all,
On Oct 17, 2013, at 9:10 PM, Peter Saint-Andre <stpeter@stpeter.im> wrote:
> I agree that the job needs to change.
> It might be helpful to talk about what could change, such as:
> 1. Less/no time on document reviews.
There are two broad purposes of review:
(1) quality, both technical and editorial: does the document describe a real
working thing in an interoperably implementable way -- without necessarily
considering the wider context in which it must function.
(2) consistency: does the document fit with the "Internet architecture" -- how
does it interact with other documents in the same working group, in the same
area; how does it interact with other ongoing work in other areas; how does it
interact with past work across the IETF; and most importantly, how does it
interact with what's actually deployed in each of the relevant contexts.
We already have directorates which -- with a fair amount of variability in
scope from area to area -- handle first-pass reviews of most of this. The roles
of the directorates could be significantly expanded; however, one important
aspect of this problem is that quality review is much more scalable and
parallelizes much better than consistency review. All a reviewer needs in
order to evaluate the quality of a document is some facility with the language
(and jargon) in which it is written and a relatively good understanding of the
problem it's trying to solve.
Consistency review is harder in that one must additionally have fairly deep and
current knowledge of essentially everything going on everywhere. Cross-area
_quality_ review can reduce this workload somewhat, at the cost of requiring
each area directorate to have qualified reviewers who are conversant in at
least one or two other areas. So consistency reviews will necessarily be rarer
and more
earlier quality review -- a consistency review should only be checking
consistency, not chasing down ambiguities or errors wholly within the document
itself.
Academia uses grad students for reviews: one, they're cheap; two, it's also a
way for the grad students to learn. The hope (and it is a hope) is that for any
given paper there is a diversity of expertise in the multiple reviews such that
a review written by someone hopelessly unqualified doesn't result in a bad
decision. We don't want to rely on hope for document quality, but we can learn
something from the diversity side of this arrangement: if we have more, earlier
reviews, focused on quality aspects more than consistency aspects, then the
AD's review of the document can (1) draw from the outputs of more diverse
earlier reviews and (2) benefit from a document of higher quality. A side
benefit is that reviewers who do cross-area reviews get more visibility into
the workings of other areas.
It seems that tool support could be useful here -- a document could be
associated in the datatracker with a set of reviewers (appointed by the ADs,
directorates, or WG chairs, or self-selected volunteers), and those reviews
tracked alongside the document throughout its lifecycle, together with
profiles of the reviewers (including other documents they've reviewed and
authored) from which their areas of expertise can be deduced. Doing this would
let us track which areas have already reviewed a given document; reviewers who
have looked at only certain sections in detail could say so, giving both an
area-coverage and a text-coverage view of where more work is needed.
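To make that a bit more concrete, here is a rough sketch of the kind of record
such tooling might keep. This is purely illustrative Python; the class and
field names are mine, not anything in the actual datatracker schema:

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class ReviewerProfile:
        name: str
        # Documents this person has reviewed or authored; areas of expertise
        # can be deduced from these rather than self-asserted.
        reviewed: list[str] = field(default_factory=list)
        authored: list[str] = field(default_factory=list)

    @dataclass
    class Review:
        reviewer: ReviewerProfile
        area: str                    # e.g. "sec", "tsv"
        sections: list[str] | None   # None = the whole document was read
        ratified: bool = False       # set once the review is accepted

    @dataclass
    class DocumentReviews:
        draft: str
        reviews: list[Review] = field(default_factory=list)

        def area_coverage(self) -> set[str]:
            # Areas that have already produced an accepted review.
            return {r.area for r in self.reviews if r.ratified}

        def text_coverage(self) -> set[str] | None:
            # Sections covered in detail so far; None means at least one
            # accepted review covered the whole document.
            covered: set[str] = set()
            for r in self.reviews:
                if not r.ratified:
                    continue
                if r.sections is None:
                    return None
                covered.update(r.sections)
            return covered

The specific fields don't matter; the point is that both coverage views fall
out of information the reviews already carry, and that deciding when a review
counts as accepted is exactly the shepherd's job described next.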
To the extent that these reviews require human coordination and oversight
(i.e., someone to ratify in the datatracker that a review is itself of
sufficient quality), I'd see this as primarily the job of the document
shepherd.
This also needs more reviewers, but we've got a couple of thousand volunteers
to draw on. It asks for a bit of a culture shift -- being slightly more formal
about how reviews are done -- but I think it would go a long way toward making
the AD position a viable choice for more people.
Cheers,
Brian