
Re: Purpose of IESG Review

2013-04-12 07:23:17

On 4/12/2013 12:13 AM, Brian E Carpenter wrote:
> Seeing randomly selected drafts as a Gen-ART reviewer, I can
> say that serious defects quite often survive WG review and
> sometimes survive IETF Last Call review, so the final review
> by the IESG does serve a purpose.


Brian,

Of course it "serves" a purpose. The lack of perfection in the specifications that reach the IESG has always been the justification for the current review model.

But what is tiring about this line of justification is its continuing failure to balance the analysis by looking at the costs and problems that come with it. Does it provide regular and sufficient benefit to justify its considerable costs?

First, it is a phenomenally expensive step, with a very damaging organizational cost: the time and effort burden put on ADs makes the pool of available ADs tiny, and it dramatically reduces the range of companies that can afford to donate senior (strategic) staff to the role.

Along with this is the general issue that triggered Joe's query: the tendency of ADs to inject spontaneous, late-stage personal engineering preferences, which might have been reasonable contributions to the original working group discussion but realistically have no place in a final review. It's not that the preferences are necessarily inappropriate; it's that they typically are not essential to the success of the spec.

But in terms of the specific logic you've used, the presumption behind the "it catches errors sometimes" language is that the resulting final output is somehow perfect. Of course, that's a silly assumption. But as soon as one acknowledges this, we are left with a model that must class this as merely one more review in an imperfect sequence, with the likelihood that every new review will find more problems. Or we are left with the view that pragmatics dictate accepting imperfections, because each step of the quality assurance model -- of which this is a part -- really does need a strict cost/benefit analysis. That analysis needs to be done in terms of trade-offs, rather than as an isolated exercise that counts the detection of an occasional problem as somehow an inherent and controlling good.

An essential component of such a trade-off-based model is recognizing that the ultimate quality assurance step always has been, and remains, the market. No amount of IETF q/a guarantees success. We have lots of failures and we won't ever get to zero.

If the IESG review step really is essential, then we should be able to show a consistent pattern of its finding /essential/ errors that would have caused failure in the field.

But if we can find that -- which I believe we cannot -- we have deeper problems, of course, because that sort of error needs to be found much, much sooner.

At base, the IESG review seeks to compensate for seriously inadequate quality control during the development process...

d/
--
 Dave Crocker
 Brandenburg InternetWorking
 bbiw.net
