
Re: Alternate entry document model (was: Re: IETF processes (was Re:

2010-11-03 07:44:09
----- Original Message -----
From: "Yoav Nir" <ynir(_at_)checkpoint(_dot_)com>
To: <mrex(_at_)sap(_dot_)com>
Cc: "t.petch" <daedulus(_at_)btconnect(_dot_)com>; <ietf(_at_)ietf(_dot_)org>
Sent: Tuesday, November 02, 2010 5:08 PM

Strange. I look at the same facts, and reach the opposite conclusions.

The fact that there were many implementations based on drafts of standards shows
that industry (not just us, but others as well) does not wait for SDOs to be
"quite done".  They are going to implement something even if we label it
"danger - still a draft, pretty please don't implement".

Everybody in our industry has heard of Internet Drafts. They know that these are
the things that end up being RFCs, which are, as others have said, synonymous
with standards. If we don't get the drafts reviewed well enough to be considered
"good enough to implement" fast enough, industry is just going to ignore us and
implement the draft.

My conclusion is that we can't just ignore industry and keep polishing away, but
that we have to do things in a timely manner.  One thing we've learned from the
TLS renegotiation thing is that it is possible to get a document from concept
to RFC in 3 months. Yes, you need commitment from ADs and IETFers in general
(IIRC you and I were among those pushing to delay a little), but it can be done.

It's a shame that we can't summon that energy for regular documents, and that's
how we get the SCEP draft, which has been "in process" for nearly 11 years, and
it's still changing. But that is partially because we (IETFers) all have day
jobs, and our employers (or customers) severely limit the amount of time we can
devote to the IETF. But that's a subject for another thread.

Time to get back to that bug now...

<tp>
Perhaps we should step back a little further, and refuse to charter work that
will become an RFC unless there are two or more independent organisations that
commit to producing code.  There is nothing like interoperability for
demonstrating the viability (or not) of a specification, and likewise, two
independent organisations are likely to bring two rival views of what should and
should not be specified.  Those not implementing can watch the two slugging it
out, and provide a balanced judgement when something needs consensus.

And two organisations with an interest might want to see an ROI sooner rather
than later.

Tom Petch
</tp>

Yoav

On Nov 2, 2010, at 5:09 PM, Martin Rex wrote:

t.petch wrote:

From: "Andrew Sullivan" <ajs(_at_)shinkuro(_dot_)com>

Suppose we actually have the following problems:

   1.  People think that it's too hard to get to PS.  (Never mind the
   competing anecdotes.  Let's just suppose this is true.)

   2.  People think that PS actually ought to mean "Proposed" and not
   "Permanent".  (i.e. people want a sort of immature-ish level for
   standards so that it's possible to build and deploy something
   interoperable without first proving that it will never need to
   change.)

   3.  We want things to move along and be Internet STANDARDs.

   4.  Most of the world thinks "RFC" == "Internet Standard".

I think that this point is crucial and much underrated.  I would express it
slightly differently: that, for most of the world, an RFC is a Standard
produced by the IETF, and that the organisations that know
differently are so few in number, even if some are politically
significant, that they can be ignored.


The underlying question is actually more fundamental:
do we want to dilute specifications so much that there will be
multiple incompatible / non-interoperable versions of a specification
for the sake of having a document earlier that looks like the
real thing?

There have been incompatible versions of C++ drafts (and compilers
implementing it) over many years.  HD television went through
incompatible standards.  WiFi 802.11 saw numerous "draft-N"
and "802.11g+" products.  ASN.1 went through backwards incompatible
revisions.  Java/J2EE went through backwards-incompatible revisions.


Publishing a specification earlier, with the provision "subject to change",
appears somewhat unrealistic and counterproductive to me.  It happens
regularly that some vendor(s) create an installed base that is simply
too large to ignore, based on early proposals of a spec, and not
necessarily a correct implementation of the early spec -- which is
realized only after an installed base of several million
or more has been created...


Would the world be better off if the IETF had more variants of
IP-Protocols (IPv7, IPv8, IPv9 besides IPv4 and IPv6)? Or if
we had SNMP v4+v5+v6 in addition to v3 (and historic v2)?
Or if we had HTTP v1.2 + v1.3 + v1.4 in addition to HTTPv1.0 & v1.1?


I do not believe that having more incompatible variants of a protocol
is going to improve the situation in the long run, and neither do
I believe in getting entirely rid of cross-pollination by issuing
WG-only documents as "proposed standards".


What other motivation could there be for publishing documents earlier
than having vendors implement and ship them earlier?  And if they do
that, there is hardly any room for any substantial or backwards-
incompatible changes.  And the size of the installed base created
by the early adopters significantly limits the usefulness of any features
or backwards-compatible changes that are incorporated into later
revisions of the document.


-Martin
