
Re: Why the IESG needs to review everything...

2011-07-28 20:53:36

On 7/28/2011 7:22 PM, Brian E Carpenter wrote:
> Dave, we are shouting past each other so I will not repeat myself
> on all points. However,

Brian,

I did not ask you to repeat anything -- and don't want you to.

Rather, I asked you to move beyond clichés and personal anecdotes into a consideration of tradeoffs. That is, I pointedly asked you /not/ to repeat yourself.

Please engage with the substance of such a balanced analysis, comparing benefits against costs. It's not as if that's an unusual approach for evaluating expensive activities of questionable value...


>> Have you seen a pattern of having a Discuss cite the criterion that
>> justifies it?  I haven't.  It might be interesting to attempt an audit
>> of Discusses, against the criteria...

> It might, and at the time when the IESG had a large backlog of unresolved
> DISCUSSes and the current criteria were being developed (I'm talking
> about 2005/2006), the IESG did indeed end up looking at all the old
> DISCUSSes against the criteria, and held dedicated conference calls
I do not recall seeing this analysis made public. The point behind my suggestion was to permit transparent consideration of the use of Discuss. So whatever was done, it was not transparent to the community.

Further, my suggestion was about /current/ patterns, not past ones.


> to talk through many of those DISCUSSes and in many cases persuade the AD
> concerned to drop them, or rewrite them in an actionable format. In my
> recollection, Allison Mankin was the leader of the charge on this.
>
> But these are all judgment calls, so I don't think there can be an
> objective audit.

I don't recall requiring it to be "objective". In fact, audits are often subjective. That's ok as long as:

   a) the auditor gives thought to using reasonable criteria

   b) the criteria are made public

   c) they are applied consistently

Let's try to refrain from throwing up artificial barriers as an excuse not to hold the process accountable.


> There could be an audit of how many DISCUSSes take
> more than N months to clear, or something like that. There were tools
> around for that some years ago, but I don't know if they exist for the
> modern version of the tracker.

That's nice, but not all that useful. In contrast, looking at the substance of Discusses against the criteria that are supposed to justify them is directly relevant.

(Note that you replaced a focus on core substance and clear import with something superficial and semantically ambiguous. In particular, longer-vs-shorter holding times have no obvious bearing on the /appropriateness/ of the Discusses.)


>> Herein lies the real problem:  As with many process and structure
>> discussions in the IETF, folk often see only a simplistic, binary choice
>> between whatever they prefer, versus something akin to chaos.
>>
>> The world is more nuanced than that, and the choices more rich.
>>
>> Here is a small counter-example to your sole alternatives of status quo
>> or rubber stamp:
>>
>>       Imagine a process which requires a range of reviews and requires
>> ADs to take note of the reviews and the community support-vs-objection.
>>
>>       Imagine that the job of the ADs is to assess these and to block
>> proposals that have had major, unresolved problems uncovered or that
>> lack support, and to approve ones that have support and lack known,
>> major deficiencies as documented by the reviews.

> The only difference between that and what I see happening today is that
> the ADs actually verify the reviews by looking at the drafts themselves.

You are factually wrong. ADs do their own reviews. ADs formulate their own lists of issues and requirements. ADs assert them as the basis for a Discuss.

They often do pay attention to other reviews -- sometimes quite mechanically, rather than on an informed basis -- but the exemplar I put forward is a fundamentally different model of AD behavior and responsibility. I can't see how you could misunderstand the difference.


> And why do they do that? Because they aren't going to take the
> responsibility for approving a document that they haven't read. Nobody
> would, I hope.

Therein lies a core problem with the model: it hinges on personal investment by the AD and on a lack of trust in the community process, using the excuse that the community process is not perfect -- as if the AD's own evaluation process is...

d/
--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net