
Re: [Gen-art] Re: Gen-art review of draft-hartman-mailinglist-experiment-01.txt

2006-03-22 13:11:10
Sam,

in the gen-art meeting today, you asked me to read and reply to the long note you wrote to Elwyn.
I'm assuming you mean this one.

The discussion in the gen-art meeting clarified quite a bit about what you are trying to achieve with the draft, and was very useful background for writing this note. Thank you!

I've taken the gen-art team off the CC list, since they are not otherwise involved.

In summary:



    Elwyn> Were
    Elwyn> the suggested mechanisms eventually adopted, I would have
    Elwyn> some qualms about the possibility of indefinite bans being
    Elwyn> possible without allowing a wider (possibly IETF as opposed
    Elwyn> to IESG) consensus, but that point is currently moot as the
    Elwyn> actual proposals that would be put in place are not
    Elwyn> specified by this document.

    Sam> I don't think the point is moot.  If there are specific limits
    Sam> on the IESG's power that should be put in place, here is the
    Sam> place to do it.  Alternatively, when we evaluate the experiment
    Sam> we could decide additional limits are needed.

    Sam> But let's come back to the question of whether meta-experiments
    Sam> are a good idea.  I think that in order for 3933 to be a
    Sam> valuable tool, many of the experiments are going to be
    Sam> meta-experiments.  So let me first explain why I think that's
    Sam> the case and then discuss how to evaluate a meta-experiment.

    Sam> The primary reason you want to encourage meta-experiments is
    Sam> that a lot of the hypotheses you want to test involve
    Sam> delegation.  For example, I want to test the hypothesis that
    Sam> the right way to solve the mailing list mess is to delegate it
    Sam> to the IESG.  I could delegate it to the IESG as part of an
    Sam> experiment along with an initial procedure.  But if I do that,
    Sam> I'm testing a different hypothesis: is delegating something to
    Sam> the IESG, with the whole IETF designing the initial conditions,
    Sam> a way to solve the problem?  As you might imagine, that's a
    Sam> different hypothesis.

I do not believe that is a correct statement.
An RFC 3933 experiment by its very nature involves three different sub-experiments:
- Whether you can get the IETF to agree to running this experiment
- Whether the bodies that administer the experiment can perform the experiment so that we have something to measure
- Whether the experiment produces useful results

I think what you are proposing will lead to failure of the *first* experiment, which means that you won't get to the other two. If you say "give the IESG the power to produce procedures; we're not going to give you even a hint as to what these procedures might be", that is what I would characterize as selling a pig in a poke, and it's perfectly reasonable for the IETF community to ring up a "no sale" on the consensus register.

The theory that a different hypothesis is being tested when the IESG sets the initial procedures than when the initial procedures are part of the experiment also seems doubtful to me; in both cases, what I think will happen is that the IESG will start off with procedures designed by Sam Hartman, and the big difference is that the community will have seen the initial procedures before deciding to run the experiment. The whole IETF has never designed anything and never will; all design is done by specific people, and (hopefully) reviewed by some large-enough fraction of the IETF. That's how EVERYTHING in the IETF works.

    Sam> Since I'm on the IESG, I'm actually in a reasonably good
    Sam> position to negotiate an initial procedure that the IESG will
    Sam> be happy with and that would be similar to a procedure the IESG
    Sam> would come up with on its own.  However, we want 3933
    Sam> experiments--even experiments delegating things to the IESG--to
    Sam> be documents that anyone can write.  So we should require that
    Sam> authors of 3933 experiments demonstrate stakeholder buy-in for
    Sam> experiments, but not require that they take actions as if they
    Sam> were the stakeholders.

I do not believe this theory either.

RFC 3933 talks about the IESG and the community. The "stakeholder" concept is something you have introduced, and I believe it serves no purpose.
An RFC 3933 proposal author HAS to do three things:

- Write up what he's proposing in enough detail that it can be evaluated
- Convince the IESG that the experiment has enough merit to warrant a Last Call
- Generate enough review and comment in the community that the IESG can confidently say that there's community consensus to run the experiment

Certainly, community feedback that says "I'm one of the people required to do work under this experiment, and I'm not going to do it" is a strong statement that needs consideration when assessing whether there's IETF consensus to run the experiment. But sometimes that's just part of the "rough" in rough consensus, even when deciding to run the experiment will cause someone to resign his role. And sometimes it's not.

    Sam> The second reason that you want to allow meta-experiments is
    Sam> that we want to encourage RFC 3933 as the first step in process
    Sam> change.  Process change often results in BCPs.  You want the
    Sam> 3933 experiment to be reasonably similar to a BCP so that, when
    Sam> appropriate, you can easily convert a successful experiment
    Sam> into a BCP.  You would probably replace any evaluation criteria
    Sam> with the results of the evaluation, and replace the sunset
    Sam> clause with something else.  However, you want the operative
    Sam> language to remain the same.  A significant result of the
    Sam> mailing list discussion is the concern that our BCPs are too
    Sam> specific and encode operational details.  If you require that
    Sam> meta-experiments are not allowed, you strongly push us in the
    Sam> direction of overly-specific BCPs.  I think that would be a
    Sam> very bad idea.

Here I agree, but disagree that this document does an appropriate job of it.
I think the separation of "principle" and "process" is a good thing: the principle of this document being (I think) that the IESG sets mailing list management procedures, the process being the way by which such procedures are set, and the third level being the procedures themselves.

However, just stating the principle alone at an abstract level is not enough to either start or evaluate the experiment.

    Sam> Finally, we want 3933 experiments to be easy to write.  One of
    Sam> my personal goals with this particular experiment was to see
    Sam> how easy I could make it to write the experiment.  I think we
    Sam> want to come away from this process with the conclusion that
    Sam> writing the document is easy.  The hard part of process change
    Sam> should be building consensus, recruiting stakeholders,
    Sam> educating the community, and actually trying to use a process
    Sam> to do superior technical work.  We should not make it hard for
    Sam> people who clearly know what they want to try to express it.
    Sam> So I'd like to resist the temptation to raise the bar for
    Sam> experiments beyond what is necessary.  Any bar I'm asked to
    Sam> meet will probably be at least as high as the bar applied to
    Sam> future experiments.

Here I also disagree. Making experiments easy to write is NOT a goal. Successful change that helps the IETF mission is a goal. Making experiments easy to write, and approving experiments without reasonably careful review, CAN lead to the process turning into an ever-changing morass of conflicting and confusing experiments - this is not something that furthers the mission of the IETF.

I believe your document is not good enough to support the process change experiment it wants to achieve - despite the fact that I think an experiment in this area is a very reasonable thing to do, and should be done.

    Sam> In conclusion, the hypothesis I'm testing is meta, so my
    Sam> experiment is meta.  I think allowing this is desirable because
    Sam> it allows us to test the hypothesis, begins to align with an
    Sam> eventual BCP if the test is successful, and supports the
    Sam> cultural engineering goal of making experimentation the
    Sam> preferred direction for process change.

And my answer is that I believe your focus on the meta level is an unhelpful distraction.

We need a change to the handling of mailing lists. One possible form of that change is to introduce an experiment that clearly separates principles from process from procedures, and gives the IESG the power to change the latter two during the course of the experiment.

I would support running such an experiment. But I cannot support approving the present document.

                             Harald


