
Re: Last Call: 'Procedures for protocol extensions and variations' to BCP (draft-carpenter-protocol-extensions)

2006-09-05 11:24:54
Sam,
This question is important and interesting.

IMHO there are two different types of interoperability, which in turn yield billions of them: you have direct interoperability (the plug fits) and meta-interoperability (the current can still flow between plugs of different formats, through a converter). In network terms: the end-to-end interoperability of the network devices, and the brain-to-brain interintelligibility of the network users.

This is a major issue I am digging into, because it also holds within the inner data structure. For example, if you define a variable as a string, you cannot use a number; but in some cases the number can be a pointer to a string. That is interoperability between strings and numbers. ISO 3166, for example, codes the names of countries (not the countries themselves), and ISO 639 the names of languages. What real sense does a langtag make when people come and say these are details conceived by politics?
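A minimal sketch of what I mean by a number acting as a pointer to a string (my own illustration, not from any standard library; the table holds just three real ISO 3166-1 entries as samples):

```python
# A numeric code can act as a "pointer" to a string: numbers and strings
# become interoperable through a shared lookup table.
# Sample entries follow ISO 3166-1 (numeric -> alpha-2, English short name).
ISO_3166_NUMERIC = {
    250: ("FR", "France"),
    276: ("DE", "Germany"),
    392: ("JP", "Japan"),
}

def country_name(code):
    """Resolve either a numeric code or an alpha-2 string to a country name."""
    if isinstance(code, int):
        return ISO_3166_NUMERIC[code][1]
    for alpha2, name in ISO_3166_NUMERIC.values():
        if alpha2 == code.upper():
            return name
    raise KeyError(code)

print(country_name(250))   # France
print(country_name("jp"))  # Japan
```

The same name is reachable through two incompatible surface types; the table is the "converter" of the plug analogy.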

Since a datum can be metadata for other data, this is a potentially infinite chain. Interoperability is the chaining. For the time being I call the shared layer gathered by the metadata and the data the "syllodata" (gathering data), a smart diopter. In my network model the most important layer is the inter-application layer (extended services, my job for 20 years). These two edge shims are quite important because this is where OPES or security sit. If a system has stable and secure syllodata (the mortar), one can think it will be difficult to break; and this mortar which interlinks actually has a name: "intelligence" (what interlinks). You quickly see that even with simple conditional interlinks, recursion can produce an immensely complex system. This is where the W3C semantic web and XML do not work: they are mono-Internet, defaulting to a flat space, just a hyperlink, rigid end-to-end interoperability. Now think about smart links and you enter the distributed reality. This is also where digital decoherence operates (on one side you have insecure, multiple, repeated quanta of data [datagrams]; on the other side you have stable, readable, copyable files, music, mails).
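To make the chaining concrete, here is a toy sketch (my own reading, not the author's model): each record carries a "meta" slot that is itself a record, so data-about-data nests recursively without limit.

```python
# A datum whose metadata is itself a datum: data -> metadata -> meta-metadata.
# The chain below annotates a word with a language tag, and the language tag
# with the registry that defines it ("fr" is ISO 639-1 for French).
chain = {"value": "bonjour", "meta": None}            # the data
chain["meta"] = {"value": "fr", "meta": None}         # metadata: language tag
chain["meta"]["meta"] = {"value": "ISO 639-1",        # meta-metadata: the
                         "meta": None}                # registry defining "fr"

def depth(node):
    """Count the levels of the data -> metadata chain."""
    return 0 if node is None else 1 + depth(node["meta"])

print(depth(chain))  # 3 levels of data-about-data
```

Nothing stops a fourth level (data about the registry), which is the "infinite possible chain" above.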

What is also interesting is to compare this concept with the infradata (internal data on the data), the paradata (data about the way the data are, but not permanent), the archidata (the permanent structure of the data), and the philodata (what makes the data fit the metadata). You then realise that the definition-information couple is actually an "intelligatum", a [definition - [inferentiation] - information] trilogy. Starting from the definition you have induction, from the information you have deduction, and from the inferentiation you have abduction.

I think this is very important for network protocols, because the more their interoperability scales, the more global they are. For example, "From:" has low scalability (language, function), while "True" has a high degree of scalability. But our common thinking forces it to be binary (yes/no, 0/1). This blurs our thinking and our mutual interoperability (protocols). Reality is yes/no/possible, with all the possible gradations. There is a South American Indian language, Aymara, which is ternary; its children think complex mathematics very easily.
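The yes/no/possible idea has a standard formalisation, Kleene's strong three-valued logic; here is a small sketch of it (my own illustration, using Python's None for "possible"):

```python
# Three-valued logic: True, False, and None ("possible"/unknown).
# An unknown operand only propagates when it could still change the outcome.
def and3(a, b):
    if a is False or b is False:
        return False          # one False settles a conjunction
    if a is True and b is True:
        return True
    return None               # at least one operand is still unknown

def or3(a, b):
    if a is True or b is True:
        return True           # one True settles a disjunction
    if a is False and b is False:
        return False
    return None

def not3(a):
    return None if a is None else not a

print(and3(True, None))  # None: outcome still "possible"
print(or3(True, None))   # True: already settled
```

Note that the unknown is not noise: `or3(True, None)` is decided, while `and3(True, None)` genuinely remains open, which a binary protocol cannot express.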

Just a small quick attempt at interoperability.
I would be interested by your comments.
jfc

At 19:27 05/09/2006, Sam Hartman wrote:

So, I was reading Brian's draft and I noticed that it talks a lot
about interoperability, but does not actually define interoperability.

As discussed in a recent IESG appeal, it's not clear that we have a
clear statement of our interoperability goals.  There's some text in
section 4 of RFC 2026, but we seem to actually want to go farther than
that text.

I propose that we add a definition of interoperability to Brian's
document.  In particular, we want to talk about our desire that all
implementations of one role of an IETF spec interoperate (at least
with regard to mandatory features) with implementations of
corresponding roles.  For example if we have a client-server protocol,
then you should be able to take any client and have it work with any
server at least for the mandatory parts of the protocol.  There are a
lot of complexities--for example while we hope every IP stack works
with every other IP stack, two machines may not share a common
upper-layer protocol or application protocol.  But I think we should
try and write down the core concept of this.

If people would find this useful I can try to write text.


_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf
