ietf

Re: "The IETF has difficulty solving complex problems" or alternatively Why IMS is a big fat ugly incomprehensible protocol

2005-09-11 15:45:24

- Good architecture and good design. Placement of
 functionality in the right place. I suspect that we
 don't do enough work in this area. Almost all
 of our activities are related to specific protocol
 pieces, not so much to how they work together,
 what the whole needs to do, etc.

These days, this seems to be the domain of the "systems" standardization bodies, such as 3GPP and CableLabs. The 3GPP architecture diagram seems to be a good illustration, although that complexity is not directly the fault of the IETF. (I think there are some interesting reasons for complexity here, in particular the need for interworking with legacy technology, that also appear elsewhere.)


- Generalization of point solutions. Even major new
 functionality often starts out as the need of a specialized
 group of users. If you always do only what is needed
 right now and don't think ahead -- you will get bloat
 and an architecture that does not work well.

The converse also happens: the assumption that a specialized protocol is needed for every new application. The world outside the IETF bubble has largely started to ignore this assumption for new applications, as SOAP and the OASIS efforts show. (The number of applications that people want to standardize is far larger than the number of IETF working groups could ever be, and larger than the number of even semi-trained protocol designers, so this is probably the only scalable solution.)

For example, given a generic RPC mechanism, it is not clear that there's a fundamental need for POP, SMTP and IMAP as wholly different protocols, rather than just three disjoint sets of RPC operations. It is unlikely that we can unwind this one, and in any case there haven't been any major new applications proposed for standardization since the interesting IM/presence discussions a few years ago. (One exception I can recall: the BOF on app sharing in Paris.)
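To make the point concrete, here is a hypothetical sketch of what that would look like. The method names and return values below are invented for illustration; this is not any standardized RPC vocabulary, just the observation that each former protocol reduces to a namespace of operations over one generic dispatch layer:

```python
# Hypothetical sketch: three mail "protocols" reduced to disjoint method
# sets dispatched through one generic RPC entry point. All names and
# payloads here are invented for illustration.

HANDLERS = {
    # POP-like operations: list messages as (msg-id, size) pairs
    "pop.list":    lambda mailbox: [("1", 120), ("2", 340)],
    # SMTP-like operations: submit a message for delivery
    "smtp.submit": lambda rcpt, body: {"queued": True, "rcpt": rcpt},
    # IMAP-like operations: search, returning matching message ids
    "imap.search": lambda query: ["42", "57"],
}

def rpc_call(method, *args, **kwargs):
    """One generic entry point replaces three separate wire protocols."""
    handler = HANDLERS.get(method)
    if handler is None:
        raise ValueError(f"unknown method: {method}")
    return handler(*args, **kwargs)
```

The three applications then differ only in which methods they register, not in framing, connection handling, or error syntax.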



- Processes that ensure proposals and standards that
 don't get widely adopted are killed in a timely manner.
 We can decide that proposals don't have enough
 support behind them. We can deprecate standards.

Our current deprecation mechanism is only designed to remove standards that are essentially no longer used (in new designs). This potentially reduces the number of RFCs claiming to be Proposed Standards, but doesn't really reduce the deployed complexity. Obviously, we aren't even making a whole lot of progress on that more limited score.

 (Note that "widely" is a relative term. I don't mean that
 we should never answer the needs of small
 groups. But if a solution for X does not appear to
 be used even in the potential user group, that's bad.)

- Allowing paths for experimentation, innovation, and
 market forces. E.g., some protocol proposals may be
 better produced in IRTF and tested & evolved, rather
 than being cast from day 1 as standards that affect
 all devices.

I suspect a fair amount of complexity arises because we had to bolt on various things (NAT traversal, security, reliability, and large-message support seem to be common after-market add-ons) or couldn't arrive at decisions during protocol design time. As an example, SIP is more complicated than it has to be because there was a decision to support both UDP and TCP (and other reliable transport protocols). This was expedient in the mid-90s to get deployment, even though we now tell people that you really should run this (only) over TLS.
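One concrete cost of that dual-transport decision: RFC 3261 makes a SIP sender run its own retransmission timers over UDP but not over reliable transports, so every implementation carries both code paths. A minimal sketch of the non-INVITE client transaction schedule, using the RFC 3261 defaults (T1 = 500 ms, interval doubling capped at T2 = 4 s, Timer F = 64*T1); the function itself is illustrative, not taken from any SIP stack:

```python
T1 = 0.5   # RFC 3261 default RTT estimate, seconds
T2 = 4.0   # RFC 3261 cap on the non-INVITE retransmit interval

def retransmit_schedule(transport, timer_f=64 * T1):
    """Return the retransmission intervals the SIP layer must run itself.

    Over a reliable transport (TCP/TLS) the transport handles loss
    recovery, so the list is empty; over UDP the client transaction
    retransmits at T1, doubling up to T2, until Timer F fires.
    """
    if transport != "udp":
        return []          # TCP/TLS: no application-layer retransmission
    schedule, interval, elapsed = [], T1, 0.0
    while elapsed + interval < timer_f:
        schedule.append(interval)
        elapsed += interval
        interval = min(2 * interval, T2)
    return schedule
```

Over TCP this whole mechanism (plus the congestion-safety reasoning behind T2) simply disappears, which is exactly the kind of duplicated machinery the dual-transport choice bought us.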

Henning


_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf
