
Re: draft-housley-two-maturity-levels-00

2010-06-22 11:14:06
Feature 'bloat' in PKIX is largely due to the fact that the features had
already been accepted in the ITU version of the spec being profiled.

But the other reason is that the original PKIX core did not cover the full
set of requirements, and the missing pieces were added as additional
protocols instead.

If you start from a clean slate, as I had the luxury of doing with the work
that led to XKMS and the SAML assertion infrastructure, you can implement
the whole of PKIX in a core spec of about twenty pages (not including
examples, schemas, etc).

The basic problem in PKIX is not feature bloat; it is the opposite. When
OCSP was proposed, people bid the feature set down to the absolute minimum
core. So SCVP became a separate protocol rather than an integral part of
OCSP, as would have been my approach. And despite each being more complex
than XKMS, SCVP and OCSP combined provide less functionality than
XKMS/XKISS.

That is not due to better designers, it is due to being able to look at the
totality of the requirements at once rather than discovering them on the
way.

If you look at a certificate, a CRL, and an OCSP response, you will note
that all three share a set of core properties, yet they are distinct
entities in PKIX because they were designed as such. If you have the luxury
of a redesign you can make something simpler. But the only way to slim the
spec down otherwise is to hack out chunks of the spec that people are
using - or at least are going to say they are using.
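
To make the shared core concrete, here is a rough sketch of the properties
all three objects carry. This is purely my own illustration; the type and
field names are hypothetical and come from neither the PKIX nor the XKMS
documents:

    # Hypothetical sketch only -- names are mine, not from PKIX or XKMS.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class SignedStatusAssertion:
        issuer: str            # who is making the assertion
        subject: str           # the key or certificate the assertion is about
        valid_from: datetime   # notBefore / thisUpdate
        valid_until: datetime  # notAfter / nextUpdate
        status: str            # a key binding, "good", "revoked", etc.
        signature: bytes       # issuer's signature over the fields above

A certificate, a CRL entry and an OCSP response each specialize this one
shape; designed together they could have shared a single syntax instead of
three.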

I can't see anything good coming from an attempt to slim down specs after
PROPOSED. Unless the decision to deprecate a set of functionality is
genuinely uncontroversial, there is going to be a faction looking to protect
their code. And IETF process gives them an endless series of opportunities
to do so. Some DNSSEC folk spent four years and then another three years
resisting two changes to their spec that were asserted to be absolutely
necessary to make deployment possible. Trying to remove functionality at
stage three because some people felt the problem should have a simpler
solution is a recipe for paralysis and a huge amount of make-work that will
probably never result in a single feature being deleted.

Take PKIX policy constraints, for example. 99% of all Internet applications
would work just fine without any of that code. But there is one very
specific party that has rather a lot of code that is based on the premise
that the code will be there. And nobody can know whether my 99% guess is
accurate or not; it might be 90%, or I might be completely wrong and everyone
uses that feature. The point is that nobody is going to know for sure what
people have built on top of a protocol expecting some proposed feature to
stay in the spec. There could be absolutely nobody out there actually using
that stuff in a real application and it would be almost impossible to
distinguish that case from 'almost nobody'. How do you prove a negative?


In short, the reason a lot of specs are too complex is not that they try to
do too much. It is the opposite: the original spec did not have enough
functionality, and adding that functionality as extensions led to a more
complex result than could have been achieved with a larger set of initial
requirements.


On Mon, Jun 21, 2010 at 11:45 AM, Martin Rex <mrex@sap.com> wrote:

Dave CROCKER wrote:

Interoperability testing used to be an extremely substantial demonstration
of industry interest and of meaningful learning. The resulting repair and
streamlining of specifications was significant. If that's still happening,
I've been missing the reports about lessons learned, as well as indications
that significant protocol simplifications have resulted. While the premise
of streamlining specifications, based on interoperability testing, is a
good one, where is the indication that it is (still) of interest to
industry? (I believe that most protocols reaching Proposed these days
already have some implementation experience; it's still not required, but
is quite common, no?)

My own proposal was to have the second status level simply take note of
industry acceptance.  It would be a deployment and use acknowledgement,
rather than a technical assessment.  That's not meant to lobby for it,
but rather to give an example of a criterion for the second label that is
different, cheap and meaningful.  By contrast, history has demonstrated
that Draft is expensive and of insufficient community interest.
We might wish otherwise, but community rough consensus on the point
is clear.  We should listen to it.

I would prefer that the IETF retain the third level and put an emphasis
on cutting down protocol feature bloat when going from Draft to full
Standard.

What I see happening is that Proposed Standards often start out with
a lot of (unnecessary) features, and some of them are even inappropriately
labelled as "MUST implement".

The Draft Standard stage only does some interop testing on a small number
of implementations, quite likely those participating in the standardization
process. It addresses neither what subset other implementations implement
nor what subset is actually necessary for the general use case in the
installed base.

One of the worst feature bloat examples is PKIX.

It contains an awkwardly huge number of features that a number of
implementations do not support -- and work happily without.
There should either be a split of e.g. RFC 5280 into a "basic profile"
and an "advanced feature profile", or the status of some of the
extensions should be changed from "MUST implement" to "SHOULD implement"
to match the real world and real necessity.


-Martin




-- 
Website: http://hallambaker.com/
_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf