
Re: Appeal from Phillip Hallam-Baker on the publication of RFC 7049 on the Standards Track

2014-02-19 18:38:03
On Wed, Feb 19, 2014 at 4:48 PM, S Moonesamy <sm+ietf@elandsys.com> wrote:

Hi Barry,

At 13:10 19-02-2014, Barry Leiba wrote:

Phill took the first step of addressing his complaint to the
responsible AD (me).  This is my response.  If he or someone else
should choose to pursue a further appeal, the next step would be to
take it to the IESG by way of the Chair.


Selective quoting:


  "During the November IETF meeting, I received an appeal from Phillip
   Hallam-Baker to the publication of CBOR, RFC 7049, on the Standards
   Track."

RFC 7049 was published in October 2013. As Sam Hartman mentioned, it was a
decision of the IESG.

  "Phill is questioning the process;"

That falls under the IESG.

The handling of the appeal is odd.


The whole process has been odd.

I told Jari and Barry that I was appealing the decision in Vancouver. I was
told to raise it with the relevant AD first. By the time he came to a
decision, the conclusion was that too much effort had since gone into
building things on top of CBOR to undo the original decision.

I now realize that what I should have done was simply to send the original
appeal to the IETF list rather than go through channels as requested.


The fundamental problem here, as I see it, is that the protocol world is
rapidly converging on JSON as the encoding of choice, and there are many
good reasons why it is a better encoding than any of the text-based
alternatives on offer (RFC 822, XML). It is thus inevitable that people
will ask whether a binary encoding of JSON would be better than ASN.1.
Which of course it would be, and not just because hitting yourself on the
foot repeatedly with a hammer is better than ASN.1.

JSON is good for a lot of things, but not for passing encrypted objects
around unless they are really small. So JOSE is interesting, but only in a
very limited field. If we could encode binary data in JSON without the 33%
inflation in size that base64 imposes, JOSE would suddenly become very
interesting indeed.
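
To see where the 33% comes from: base64 turns every 3 bytes into 4
characters. A throwaway Python sketch, nothing more:

    import base64
    import json

    payload = bytes(range(256)) * 12   # 3072 bytes of arbitrary binary data

    # JSON has no binary type, so the usual move is base64: 3 bytes -> 4 chars
    wrapped = json.dumps({"data": base64.b64encode(payload).decode("ascii")})

    print(len(payload))   # 3072
    print(len(wrapped))   # 4108: the payload alone grew to 4096 chars, i.e. +33%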

If JOSE didn't inflate the size of a data object, we could use it for
data-level encryption. Drop a content-type tag and some other metadata on
the wrapper, encrypt the contents, and we would have a CMS-like blob that
fits the Internet idiom much better (and there is some soon-to-expire IP
that would make animating that proposal very interesting).
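
To make the shape of that concrete, here is a rough sketch; every field
name below is my own invention for illustration, not taken from any spec:

    # Hypothetical CMS-like wrapper; all field names are invented.
    wrapper = {
        "content_type": "application/example",  # what the plaintext is
        "enc": "A256GCM",                       # AEAD algorithm identifier
        "iv": b"\x00" * 12,                     # nonce: raw bytes
        "ciphertext": b"...",                   # encrypted payload: raw bytes
    }
    # In JSON the two byte-valued fields must be base64ed (the 33% problem
    # above); in a binary encoding they could be carried as-is.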


But now we have a binary encoding published on the IETF standards track
that is not a binary encoding of JSON, but that has a subset which could be
used to encode JSON, sitting in that spot. It isn't a consensus document
and isn't the product of an open process; it is, however, on the IETF
standards track. So the job has become harder for anyone trying to do it
properly.
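
To illustrate the subset point, a sketch using the third-party cbor2
package for Python (assumed here purely for illustration; any CBOR codec
would do):

    import json
    import cbor2  # third-party CBOR codec, e.g. from PyPI

    # Any JSON value round-trips through CBOR...
    doc = {"name": "example", "values": [1, 2.5, None, True]}
    assert cbor2.loads(cbor2.dumps(doc)) == doc

    # ...but CBOR also has types with no JSON equivalent, such as raw
    # byte strings, so the reverse is not true in general.
    cbor2.dumps({"payload": b"\x00\x01\x02"})   # fine in CBOR
    try:
        json.dumps({"payload": b"\x00\x01\x02"})
    except TypeError:
        print("JSON cannot carry raw bytes directly")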

So it is kind of like Sochi, which will always be the OoOo* Olympics.

The only thing that makes an encoding interesting is that a constituency of
users agrees to make use of it. JSON has emerged from a field of thousands
of text encodings because it is the best compromise between simplicity and
functionality to have gained such a wide user base. So giving random specs
standards status when their designers only attempted to solve the problems
they cared about does not help matters.


My main concern is the process question. I really don't care whether CBOR
is a PROPOSED STANDARD or whatever. What I do care about is being told that
I have to use it because it is the IETF standard for binary encoding. And
what I care about most is the risk that this approach of 'it's our ball and
only we will decide who gets to play' is going to be repeated.

There are cases where it makes perfect sense for a draft to go straight
onto the standards track. There are plenty of extensions to PKIX that are
important enough to write down but not so important that it is worth
spinning up a whole working group just to decide them.


But a spec whose only purpose is to provide a platform for other specs
really needs to clear a higher bar than merely not being idiotic and having
some people willing to implement it. There have to be people who want to
choose to build on it.


The way I think the process should work is:

1) The IAB identifies the need for a consensus binary encoding for JSON as
a platform the IETF community can build on (or why call it an architecture
board?).

2) People write drafts that elaborate on the requirements and/or propose
schemes that meet them; these are published as either EXPERIMENTAL or
INFORMATIONAL.

3) People who are building applications on the platform kick the tires and
decide which ones they like, hate, etc. Either a consensus emerges that one
encoding is best, in which case we go to step 4, or we go back to step 2.

4) PROPOSED STANDARD status is awarded in recognition of the fact that
there is a de facto standard that application protocol designers have
converged on. This may or may not involve spinning up a working group to
clean up the documents, depending on their quality.


I have seen the shotgun standards approach in the past. BEEP tried that
route and was a complete failure as a result: it moved through the
standards process so fast that it never gained any buy-in outside the
original design team. So even though it has features that are very sound,
features the HTTP/2.0 WG may well end up re-inventing, it was stillborn,
because the proposers saw the standards track as a route to driving
adoption rather than as recognition of adoption.

Standards work is the process of making decisions that don't matter. It
does not really matter what the HTTP protocol syntax is, provided that a
decision has been made to choose one syntax for the task instead of five.

So I would like to see the IAB actually doing the job of an architecture
board and identifying gaps in the IETF stack where convergence on one
particular way of solving a problem would be beneficial. That is not to say
that they should be the people providing the solution.

-- 
Website: http://hallambaker.com/