
Re: are we willing to do change how we do discussions in IETF? (was: moving from hosts to sponsors)

2006-03-24 13:11:44
On Fri Mar 24 19:50:15 2006, Keith Moore wrote:
>> In other words, there are working groups where a substantial number
>> of people involved in the discussion are not only not going to be
>> implementing the proposals, but don't actually do any kind of
>> implementation within the "sphere" - we're talking about people
>> discussing the precise semantics of some HTTP extension who aren't
>> involved in doing any webserver related programming, or some people
>> discussing an email issue who limit their interaction with email to
>> having an email address.

> I don't have a problem with that. IMHO we tend to design with too
> little regard for the needs of end users, and we need more
> input from knowledgeable users, rather than less.


That input needs to be present in defining the problem, not the solution.


>> Or, if you prefer, people are talking and not doing the "running
>> code" bit.

> It may be that we place too much emphasis on running code in IETF
> today.

I'd say we place too little.


> In ARPAnet days, when the user community was small and homogeneous but
> platforms were diverse (different word/character sizes, different
> character sets, different limitations of operating systems and
> networking hardware), and goals for protocols were modest, merely being
> able to implement a protocol across different platforms was one of the
> biggest barriers to adoption. In that environment, being able to
> demonstrate running code on multiple platforms was nearly sufficient to
> demonstrate the viability of a protocol. Besides, since the net was
> small, it wasn't terribly hard to make changes should they be found to
> be necessary.


We have fewer platforms, and they're all running with the same 8-bit byte (or as close as makes no difference), and they all do UTF-8 easily, let alone ASCII, so yes, that kind of problem has largely gone away.

However, if you're extending IMAP, say, there's a large number of IMAP servers out there which are, internally, massively different beasts, so the "in my day" argument merely highlights that problems move, they don't go away.


> These days running code serves as proof-of-concept and also as a way
> to validate the specification. It doesn't say anything about the
> quality of the design - not efficiency, nor usability, nor
> scalability, nor security, etc.


No. It doesn't say much about the efficiency, usability, scalability, or security, but it does say a little, and it gives me, for one, a much better idea about where the problems in all those areas lie. Maybe I'm a drooling idiot, and this is the equivalent of having to read aloud, in which case I'm sorry.


>>> What really bothers me is the apparent popularity of a mindset, in a
>>> group of people that claims to be doing engineering, that we
>>> should just try something without really thinking about it, and
>>> without a good way to evaluate the experiment objectively.
>>
>> Now, wait - I agree up to a point.
>>
>> Yes, we need to carefully analyze what we're doing, because
>> experimentation won't easily show if a proposed solution will
>> actually scale to the level we need, is secure enough, and is
>> flexible enough to cope with future demands that we've not thought
>> of. This much is, hopefully, not up for debate.
>>
>> But there's a really simple experiment that's easy to do, and results
>> in a useful, concrete result. The hypothesis to test is "does it
>> actually work", the experiment is "suck it and see", and the result
>> is, one hopes, "yeah, I did this", with an optional "but this bit was
>> tricky" that we can feed back into the design process.
>>
>> Unless that experiment is done, we aren't engineers, we're
>> philosophers.

> I agree that those kinds of experiments can be quite valuable, though
> I'm having a hard time remembering when such an experiment was
> indicated in an IETF WG that I've been involved in.
It's weird, because I thought that pretty well everyone implemented stuff to this level. For a long time - years - it never occurred to me that PoC and probably even deployed implementations didn't exist for some specifications, let alone those going onto the standards track.


> I have seen several kinds of experiments of the form "let's see what
> happens if we do this nonstandard thing with SMTP - will existing
> servers handle it?" and I've generally regarded those experiments as
> invalid because they tend to lack any analysis of the sample space or
> any attempt to get a representative sample. They can prove that
> something doesn't work, but rarely can they demonstrate that something
> does work reliably in the wild. (OTOH if you know reliably that there
> are only a few implementations of a protocol, such experiments might be
> more valuable.)


I've seen discussions of a similar nature, not formal experiments - perhaps we're saying the same thing. I've also seen discussions concerning "are we sure that feature X works in the wild", with the result as "we have done X for some time and have seen no failures".
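The kind of probe described here ("will existing servers handle it?") usually starts with checking what a sampled server claims to support. As a minimal, hypothetical sketch - the reply text is invented, following RFC 5321's multiline-reply format - this extracts the extension keywords from a raw EHLO reply:

```python
def advertised_extensions(ehlo_reply: str) -> set[str]:
    """Extract extension keywords from a multiline 250 reply to EHLO.

    The first 250 line is the server's greeting; on each later line the
    first token is an extension keyword (RFC 5321, section 4.1.1.1).
    """
    keywords = set()
    for i, line in enumerate(ehlo_reply.splitlines()):
        if not line.startswith("250"):
            raise ValueError("unexpected reply line: %r" % line)
        if i == 0:
            continue  # greeting line, e.g. "250-mail.example.org Hello"
        # Drop the "250-"/"250 " prefix; the keyword is the first token.
        keywords.add(line[4:].split()[0].upper())
    return keywords

# Invented sample reply from one surveyed server.
reply = "250-mail.example.org Hello\r\n250-PIPELINING\r\n250-8BITMIME\r\n250 STARTTLS"
print(advertised_extensions(reply))  # the set {'PIPELINING', '8BITMIME', 'STARTTLS'}
```

Of course, a survey built on such probes only tells you about the servers you happened to sample, which is exactly the sample-space objection above.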


I'm not sure that more than a cursory analysis proving something will work reliably in the wild in every conceivable case is worth doing, though, until *after* some implementation work has been done. Implementation is cheaper.

> I should have said "analyze...before deploying". I also believe in
> building prototypes and reference implementations, but that's not a
> substitute for analysis.

No, it's a fundamental part of it.

Dave.
--
          You see things; and you say "Why?"
  But I dream things that never were; and I say "Why not?"
   - George Bernard Shaw

_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf
