Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 19:29:42
At 1:36 AM +0100 1/9/13, Martin Rex wrote:

John Day <jeanjour(_at_)comcast(_dot_)net> wrote:

 The reasons have been discussed or at least alluded to previously in this
thread. The short answer is we have been there and done that: 30 years ago.

 All those tools were developed and used successfully in the 80s.
 I know of cases where doing the formal specification alongside the
 design phase caught lots of problems.   However, there are two
 central problems:  First, in general, programmers are lazy and just
 want to code. ;-)  Using a formal method is a lot more work.
 Second, the complexity of the formal statements that must be written
 down is greater than the code.  So there is a higher probability
 of mistakes in the formal description than in the code.
 Admittedly, if those statements are made, one has a far greater
 understanding of what one is doing.

I believe the problems with formal logic are that it is difficult both
to write and to read/understand, and hard to verify that the chosen
"axioms" actually reflect (and lead to) the desired behaviour/outcome;
add to that the relative scarcity of suitable tools, the complexity of
the underlying theory and tools, and the tools' resulting lack of
"intuitive usability".


The tools have been available for quite some time.  It is still very difficult.



 Once you have both, there is still the problem that if a bug or ambiguity
 shows up,

For any formal proof system worth its salt, this can be 100% ruled out,
since the proof system will emit a 100% bug-free implementation of the
spec in the programming language of your choice as a result/byproduct of
the formal proof process.

C'mon. You don't really believe that, do you? The statement is either a tautology or naive. Ever see a compiler with bugs? Who verified the verifier? How do you know the verifier is bug-free?

As I indicated before, I have been working on this problem since we discovered Telnet wasn't as good as we thought it was. For data transfer protocols, it is relatively straightforward and can be considered solved for anyone who wants to bother. The problem is that most don't.

The real problem is the so-called application protocols, where dealing with the different semantics of objects in different systems makes the problem very hard and very subtle. The representation of the semantics always reflects what you think its salient properties are. This is not always obvious.



  neither the code nor the formal spec nor a prose spec can be taken to
 be the right one.  What is right is still in the heads of the authors.

But maybe you're not really thinking of a defect in the implementation
(commonly called a "bug") but rather a "gap" in the specification that
leads to unintended or undesired behaviour of a 100% bug-free
implementation of your spec.

One person's gap is another person's bug. What may be obvious to one person as something that must occur may not be so to another. Then there is the fine line between what properly belongs in the specification and what is really part of the environment of the implementation.

The human mind has an amazing ability to convince us that how we see the world is the way others do. Having been on the Net when there were 15-20 very different machine architectures, I can assure you that implementation strategies can differ wildly.

I had had great hopes for the temporal-ordering approaches to specification, since they say the least about the implementation (the specification is entirely in terms of the ordering of events). However, I never met anyone who could design in them.
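
To give a flavor of what "entirely in terms of the ordering of events" means, here is a toy sketch (the event names and the two rules are invented for the illustration, not taken from any real protocol or tool). The spec constrains only the order in which events may be observed and says nothing about how an implementation produces them:

# Toy temporal-ordering "spec": only the order of observable events is
# constrained.  OPEN/DATA/CLOSE and the two rules are invented for this
# illustration.
from typing import Iterable

def satisfies_ordering_spec(trace: Iterable[str]) -> bool:
    """Rule 1: no DATA may occur before an OPEN.
       Rule 2: no event may occur after a CLOSE."""
    opened = False
    closed = False
    for event in trace:
        if closed:
            return False          # rule 2 violated
        if event == "OPEN":
            opened = True
        elif event == "DATA" and not opened:
            return False          # rule 1 violated
        elif event == "CLOSE":
            closed = True
    return True

assert satisfies_ordering_spec(["OPEN", "DATA", "DATA", "CLOSE"])
assert not satisfies_ordering_spec(["DATA", "OPEN", "CLOSE"])

Two wildly different implementation strategies are both conformant as long as every trace they can produce satisfies the rules, which is exactly why such specs say so little about the implementation.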


(see Section 4 of this paper for an example:
  http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1827
)


Donald Eastlake wrote:

 Another problem is maintenance. Protocols change. Having to maintain a
 formal specification is commonly at least an order of magnitude more
 effort than maintaining a prose description.

Definitely.  Discussing protocols at the level of a formal specification
language would be quite challenging (typically impossible in an open
forum; I believe it would be difficult even in a small and well-trained
design team).

Which means there will always be a danger of something getting "lost in translation."



 And, as has been mentioned before, I'd like to emphasize that the IETF
 experience and principle is that, if you want interoperation,
 compliance testing is useless.

Ouch.  I believe this message is misleading or wrong.

Compliance testing is VERY important, rather than useless.

Compliance testing would actually be perfectly sufficient iff the spec
was formally proven to be free of conflicts and ambiguities among
specified protocol elements -- plus a significant effort spent on
ensuring there were no "gaps" in the specification.

Do you realize the cost of that amount of testing? (Not saying it isn't a good idea, just that the probability of convincing a development manager to pay for it is pretty slim.) ;-)
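
To make the cost concrete: compliance testing amounts to roughly one test case per normative statement in the spec, exercised against the implementation under test. A toy sketch of such a harness (the requirement IDs, the byte strings, and the send_to_implementation helper are all made up for the illustration; they don't come from any real spec or tool):

def send_to_implementation(message: bytes) -> bytes:
    """Stand-in for driving the implementation under test over the wire."""
    raise NotImplementedError("wire this up to the implementation under test")

# One entry per normative ("MUST"/"MUST NOT") statement in the spec:
# (requirement id, stimulus to send, predicate the response MUST satisfy).
COMPLIANCE_TESTS = [
    ("REQ-001", b"\x01\x00HELLO", lambda r: r.startswith(b"\x01")),
    ("REQ-002", b"\x7f\x00BOGUS", lambda r: r == b"\x03ERR"),
]

def run_compliance_suite() -> None:
    for req_id, stimulus, predicate in COMPLIANCE_TESTS:
        response = send_to_implementation(stimulus)
        print(req_id, "PASS" if predicate(response) else "FAIL")

Multiply those two entries by every normative statement in a real spec and the development manager's reaction is easy to predict.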


As it turns out, however, a significant number of implementations will
be created by humans interpreting natural-language specifications,
rather than implementations created as 100% bug-free by-products of a
formal proof tool, and often interop with such buggy, and often
spec-noncompliant, implementations is desirable and necessary, since
they make up a significant part of the installed base.

My impression is that few implementations these days are done from scratch. Available software is commandeered and adapted. So it would seem the thing to do is get the core aspects of the protocol well implemented and let laziness take its course! ;-)



 The way to interoperation is interoperation testing between
 implementations and, to a first approximation, the more and the
 earlier you do interoperation testing, the better.


The #1 reason for interop problems, and the biggest roadblock to the
evolution of protocols, is the widespread lack of compliance testing (on
the flawed assumption that it is useless) and the focus on black-box
interop testing of the intersection of two subsets of protocol features.

The problem with interop testing is that it really doesn't provide
much of a clue, and the results can _not_ be extrapolated to features
and areas that were not interop tested (typically the other 90% of the
specification, many optional features and most of the extensibility).


The near-complete breakage of TLS protocol version negotiation is a
result of narrow-minded interop testing combined with a complete lack of
compliance testing.
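
For anyone who hasn't followed that particular saga, here is a deliberately simplified sketch of the negotiation rule and of the common breakage (this is not real TLS library code, and the version constants are only illustrative). The spec expects a server that is offered a newer version than it supports to negotiate down to the highest version it does support; many deployed servers instead aborted the handshake, forcing clients into downgrade retries:

# Simplified sketch; not real TLS library code.  Version numbers are the
# (major, minor) pairs used on the wire, e.g. (3, 3) for TLS 1.2.
SERVER_HIGHEST_SUPPORTED = (3, 3)

def compliant_select_version(client_version):
    # Negotiate down: use the lower of what the client offers and what
    # the server supports.
    return min(client_version, SERVER_HIGHEST_SUPPORTED)

def intolerant_select_version(client_version):
    # The widely deployed bug: a version newer than the server knows is
    # treated as a fatal error instead of being negotiated down.
    if client_version > SERVER_HIGHEST_SUPPORTED:
        raise ValueError("handshake aborted: unrecognized protocol version")
    return client_version

# A client offering a newer version, e.g. (3, 4):
assert compliant_select_version((3, 4)) == (3, 3)
# intolerant_select_version((3, 4)) raises instead of negotiating down.

Interop tests between two implementations that both offer the same version never exercise that difference; a compliance test that simply offers a higher version catches it immediately.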


If done right, pure compliance testing can go a long way toward
providing interop.  The only areas where real interop testing is
necessary are those where the spec contains conflicts/inconsistencies
that implementors didn't notice or ignored, or where there are
widespread inconsistencies between implementations and the
specification, often created by the careless, premature shipping of a
defective implementation and the de facto creation of an installed base
that is too large to ignore.

That last sentence is a doozy. You are saying that interop testing will be required with bad implementations whose installed base is large enough that you have to live with them?

John


-Martin
