
Re: Idea for a process experiment to reward running code...

2012-12-03 16:04:14
Stephen,

Thanks for proposing this. You know that I agree with you that improving the IETF's ability 
to publish specifications that relate to real, running code is important, and it is 
important to come up with ideas in this space. I think there have been situations where 
the IETF has strayed from the "rough consensus and running code" model.

But of course following the running code is not always easy. I have some 
perspectives on this, both from the IESG side and as a document author who in 
some cases had running code. Obviously we want the IETF's resources spent on things 
that result in running code, and we want experiences from that running code 
reflected back into specifications. And we want those fast-moving 
implementation teams to bring their specs to the IETF rather than publish them 
somewhere else.

But on the other hand, there are issues. It is sometimes difficult to 
distinguish useless junk with several implementations from useful and stable 
technology that is going to change the Internet and which also has several 
implementations. And it is difficult to guess whether an implementation is the 
beginning of something great or the by-product of someone's pet project that 
also produced the specs. In many cases it is very cheap to write an 
implementation. You can also easily build an implementation yourself and fund a 
few other folks to build theirs. And even independent implementations are not 
conclusive: I've seen cases where seven implementations interoperate but 
the spec is still unimplementable for outsiders.

If you want to better support IETF work that has implementations, you have to decide 
where your focus is in the overall process. I suspect that the last calls make up a very 
small part of the IETF's overall end-to-end ("draft-00-to-RFC") delay. I do not 
have good statistics on this at the moment, but 
http://www.arkko.com/tools/lifecycle/wgdocs.html has some numbers from buggy code that 
has not been run since last spring. Those numbers seem to indicate WG process times on 
the order of two years and LC/IESG times on the order of half a year, plus some RFC 
Editor time. Intuitively, the numbers match my own experience, so maybe they are in the 
ballpark.
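
Just to illustrate the kind of calculation behind such lifecycle numbers, here is a 
minimal sketch in Python; the milestone dates and the three-document sample are 
entirely made up for illustration, and this is not the actual code behind the page above:

    from datetime import date
    from statistics import median

    # Hypothetical per-document milestones: (draft-00 posted, sent to
    # IESG, RFC published). Sample values are illustrative only.
    docs = [
        (date(2008, 3, 1), date(2010, 5, 1), date(2010, 11, 15)),
        (date(2009, 7, 1), date(2011, 6, 1), date(2012, 1, 10)),
        (date(2009, 1, 1), date(2011, 2, 1), date(2011, 8, 20)),
    ]

    # WG phase: draft-00 until the document leaves the WG for the IESG.
    wg_times = [(iesg - d00).days for d00, iesg, _ in docs]
    # LC/IESG/RFC Editor phase: from the IESG handoff to publication.
    iesg_times = [(rfc - iesg).days for _, iesg, rfc in docs]

    print(f"median WG time:      {median(wg_times) / 365:.1f} years")
    print(f"median LC/IESG time: {median(iesg_times) / 30:.1f} months")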

One of the issues is that it is very rare for a document to sail through the 
LC/IESG phase without any changes. From the same crappy statistics, 
http://www.arkko.com/tools/admeasurements/stat/goodprocessingtime_4.html and 
http://www.arkko.com/tools/admeasurements/stat/processingtime_4.html, that's 
maybe 10% of all documents; 20% in some cases, but those drafts probably have 
some RFC Editor notes associated with them. A document that does not change 
goes through quite fast, maybe in a month. A document that needs changes also 
needs discussion, debate, and waiting for a revision; hence the half a year. 
Two weeks out of that? Not a big difference. Two weeks out of 2-3 years for the 
entire spec development? Not noticeable.
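
To put those proportions in concrete terms, here is a back-of-the-envelope 
calculation using the rough figures above (half a year of LC/IESG time, 2-3 years 
end to end, two weeks potentially saved; all numbers approximate):

    # Rough figures from the statistics above (approximate).
    lc_iesg_days = 182            # ~half a year in LC/IESG
    end_to_end_days = 2.5 * 365   # ~2-3 years from draft-00 to RFC
    saved_days = 14               # the two weeks a fast track might shave off

    print(f"share of LC/IESG phase: {saved_days / lc_iesg_days:.0%}")    # ~8%
    print(f"share of end-to-end:    {saved_days / end_to_end_days:.1%}") # ~1.5%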

All decisions about how you give support or priority to running code 
involve difficult judgment calls. There's running code. Does that trump 
someone's opinion that the bits should be ordered differently? That the design 
should be changed to work better through NATs? That no security is needed? 
There is no hard and fast rule for these questions; you have to evaluate 
the situation.

It is not easy to support running code as part of the IETF day-to-day process. 
People obviously test on their own. Many people bring implementations to IETF 
meetings. Some even organize back-room meetings to do proper interops. We saw 
some code at Bits-N-Bites, and there have been a few other demos as well. But 
interops are not within IETF scope, and it is difficult to acquire resources 
for testing in all cases. In Atlanta, we asked for a room when the hallway was no 
longer sufficient. We did this at the last minute and got some well-deserved 
pushback from the NOC crew (who did a wonderful job and supported us anyway - 
thanks). But we also got some unexpected pushback from the IESG, who among 
other things told us that they could not help us with a room in any way until the 
spec is officially a WG document. But to get to that state we'd need some 
testing to improve the spec and convince people that the stuff works. So, 
catch-22. Luckily, we got a room from the IPSO Alliance, who stepped up to 
help.

Anyway, all this puts me roughly in the same camp as Barry. How about doing 
this instead:

1) Write that IESG statement or new RFC that reminds everyone to seriously 
consider what running code tells us when they weigh issues, decide to send 
documents forward, and so on. Obviously we should not follow code blindly. 
Perhaps the statement could highlight some of the above issues, or the issues 
from Stephen's draft, to make people aware of the complexity of the matter.

2) Re-focus our WG and IESG reviews on the stuff that actually 
matters. DISCUSSes on what's in the document header of the I-D... do not matter 
as much as whether the thing actually works, or is i18n-capable or secure. Yes, 
I know I've erred on this myself several times.

3) Remember that some of the most successful IETF specifications were things 
that were brought to us from the outside, with implementations and first 
standards revisions already done. It is not a bad model. You do not always have 
to start with a committee. You can finish with a committee.

4) Find ways to support running code at all stages of the IETF process. Could 
we support unofficial interops better at our meetings, or work with more 
interop organizations? Could we use Bits-N-Bites to do more demos, and would 
this be useful in promoting running code?

Some more comments on details of your draft:

* The IPR return-to-WG process is actually part of existing operations for all 
drafts. Perhaps you could clarify this.

* I'm not sure why you have taken the position that the sponsoring AD does not 
have to vote Yes on these. I think that should be necessary even more than with 
other drafts. Were you thinking that some other AD knows the case better? There 
are certainly cases where another AD knows more about a particular case than 
the one officially in charge of the WG; in those cases it might actually make 
sense for that other AD to take on the sponsoring role. But that seems 
orthogonal to fast-tracking, and it is already happening.

* I think you have placed perhaps too much emphasis on verifying the software. 
These are judgment calls: you figure out what is available, what its 
functionality is, how it has been tested, and so on. It is hard to codify that 
in a set of rules, and strict rules may actually rule out some things that 
would be useful. For instance, would several proprietary implementations with 
well-documented interop success qualify or not?

Jari