
Re: I-D Action: draft-arkko-iesg-crossarea-00.txt

2011-12-21 06:33:22
Hi Jari.

Thanks for writing this.

There are some good observations here.

IMO, probably the biggest problem is:

Often, the experts for a particular technical area don't have any real
motivation to work with a group proposing something new/different. And
the default position (probably rightly) is for the "experts" to be
skeptical of the proposed work. Given that it takes resources (e.g.,
time) for the experts to help, this is a real problem. The net result
is a sort of default inertia or "veto" against making real progress on
a new area/idea.

The only real solution is to figure out how to get the critical
parties together in the same room actually working together. And too
often, those proposing new ideas/work don't understand how to engage
with the experts. They may just post a draft and expect reviews to
appear magically.

And there are way more "Bad Ideas" proposed than there are cycles for
experts to spend on these "new ideas". (By Bad Ideas, I mean ideas
that have been proposed many times before, or that anyone with even
some expertise in the area knows would need a lot of work to be
viable.) So asking the experts to spend time on all these new ideas
just doesn't scale. This is not a new problem.

You talk about this in the section on Problem Ownership.

   7.  One good model that has been used in the Internet Area employs a
       protocol detail working group and a consumer working group.

Note that the above is *not* a process issue. It's a cultural issue,
where the "protocol detail working group" has had a discussion and
reached a (reasonable) consensus about what its proper role is. This
has been done successfully in DHC, where the DHC WG is not allowed to
say "I don't think this is a useful option" or "I don't think DHC
should be used to do this". Rather, it reviews DHC options from the
perspective of: 1) are they clearly specified, 2) are they generally
compatible with existing implementations/deployments (i.e., no change
in the basic model), and 3) are they consistent with the basic DHC
architecture?

DNSEXT had the same discussion at one point. In earlier times, every
new RR proposal would spark a "the DNS shouldn't be used for this sort
of thing" or "the DNS should only be used for certain things". Now,
the criteria are more like the DHC case, looking at questions such as
"will this RR break DNS operations/deployments?" and "is the RR
specified clearly?".

I think a good recent example where the above didn't take place well
is Trill/IS-IS. Trill needed IS-IS code points, since it uses IS-IS
to carry information, but the IS-IS WG appears to "own" allocation of
code points. There are no written rules listing the criteria for
assigning such code points (another IETF recipe for unnecessary
frustration/delay), which led to a lot of confusion and frustration.
This (possibly in conjunction with other issues) ultimately resulted
in a full year's delay in publishing the Trill documents, even *after*
the IESG had approved the base documents. To an outsider, this delay
is hard to understand and smacks of IETF brokenness.

Another recent example concerns the MIF/DNSEXT split. MIF is chartered
to do DNS-related work related to having multiple interfaces in
different administrative domains. DNSEXT has talked about this issue
for years, but has consistently refused to address it. Eventually, MIF
was chartered, and ownership of that work is now in MIF. But arguably,
MIF doesn't have the necessary expertise from the DNS side and isn't
getting input from DNS experts like it would if the work were being
done in DNSEXT. Basic questions about how (and even whether) to do
certain things continue to go unresolved, pretty much guaranteeing an
ugly IETF Last Call when MIF finally declares that it is done.

Thomas

