On 22nd June 2010, at 10:12:13 CET, Eliot Lear wrote:
> This then leads to a question of motivations. What are the motivations
> for the IESG, the IETF, and for individual implementers? Traditionally
> for the IETF and IESG, the motivation was meant to be a signal to the
> market that a standard won't change out from underneath the developer.
The above seems fairly muddled as written.
Traditionally, "the market" refers to consumers, users,
and operators, rather than "implementers" or "developers".
Indeed, moving beyond Proposed Standard has long been a signal
to users, consumers, and operators that a technology now has
demonstrated multi-vendor interoperability.
Further, by moving technology items that lacked multi-vendor
interoperability into optional Appendices, or downgrading
them to "MAY implement" items, that process also makes clear
which parts of the technology really were readily available,
as different from (for example) an essentially proprietary
feature unique to one implementation.
In turn, that tends (even now) to increase the frequency with which
a particular IETF-standardised technology appears in RFPs
(or Tender Announcements). That, in turn, enhances the business
case for vendors to implement the interoperable standards.
Standards are useful both for vendors/implementers and also
for consumers/users/operators. However, standards are useful
to those two different communities in different ways.
The IETF already has a tendency to be very vendor-focused &
vendor-driven. It is best, however, if the IETF keeps the
interests of both communities balanced (rather than tilting
towards commercial vendors).
> Question #1: Is such a signal needed today?
Yes. Users/operators/consumers actively want and need
independent validation that a standard is both interoperable
and reasonably stable.
> If we look at the 1694 Proposed Standards, are we seeing a lack of
> implementation due to lack of stability? I would claim that there are
> quite a number of examples to the contrary (but see below).
Wrong question. How clever to knock down the wrong strawman.
The right questions are:
A) whether that signal is useful to consumers/users/operators
The answer to this is clearly YES, as technologies that
have advanced beyond Proposed Standard (PS) have a higher
probability of showing up in RFPs and Tender Requirements.
As examples, the JITC and TIC requirements pay a great
deal of attention to whether some technology is past PS.
Various IPv6 Profile documents around the world also pay
much attention to whether a particular specification has
advanced beyond Proposed Standard.
B) whether that signal has a feedback loop to implementers/
vendors that still works.
The answer to this is also clearly YES. Technologies that
appear in RFPs or Tender Requirements have a stronger
business case for vendors/implementers, hence are more
likely to be widely implemented.
Items that appear in the TIC or JITC requirements are very
very likely to be broadly implemented by many network
equipment vendors. The same is true for technologies
in various IPv6 Profiles around the world.
> Question #2: Is the signal actually accurate?
> Is there any reason for a developer to believe that the day after
> a "mature" standard is announced, a new Internet Draft won't
> in some way obsolete that work?
Again, the wrong question, and an absurdly short measurement
time of 1 day. Reductio ad absurdum is an often-used technique
to divert attention when one lacks a persuasive substantive
argument for one's position.
By definition, Internet-Drafts cannot obsolete any
standards-track document while they remain Internet-Drafts.
Only an IESG Standards Action can obsolete some mature standard,
and that kind of change happens slowly, relatively infrequently,
and with long highly-visible lead times.
> What does history say about this effort?
History says that 2-track has NOT happened several times already
because people (e.g. Eliot Lear) quibble over the details,
rather than understanding that moving to 2-track is an improvement
and that "optimum" is the enemy of "better" in this situation.
> Question #3: What does such a signal say to the IETF?
It is a positive feedback loop, indicating that work is
stable and interoperable. It also says that gratuitous
changes are very unlikely to happen. By contrast,
technologies at Proposed Standard very frequently have
substantial changes, often re-cycling back to PS with
those major changes.
Further, the new approach will have the effect of making
it easier to publish technologies at Proposed Standard,
which would be good all around.
> I know of at least one case where work was not permitted
> in the IETF precisely because a FULL STANDARD was said
> to need soak time. It was SNMP, and the work that was
> not permitted at the time was what would later become [...]
That is history from long ago, under very different
process rules from now, so is totally irrelevant.
It isn't even a good example of how things generally
worked at that time.
> Question #4: Is there a market advantage gained by an implementer
> working to advance a specification's maturity?
Again, wrong question.
That noted, the answer is clearly Yes. Early implementers who show
interoperability are well positioned to win RFPs that require a
technology that has moved beyond Proposed Standard, while trailing
implementers often end up unqualified to bid/tender due to the
absence of such a feature.
> If there *is* a market advantage, is that something a standards [...]
Yes, because it encourages broad implementation, broad interoperability,
and broad adoption of openly specified standards.
> Might ossification of a standard retard innovation
> by discouraging extensions or changes?
This is far less likely under 2-track than it already
is today, partly because it will be much easier for a
sensible revision to move to Proposed Standard.
> Question #5: Are these the correct questions, and are there others that
> should be asked?
You've clearly got a lot of vendor-bias in your writing above.
Your note neglected the user/operator/consumer community
(who benefit most from having a 2-track system, although
vendors/implementers also benefit).