The biggest questions I have are:
- where to put this bit?
Right now, the *only* way an L2 with varied service levels can derive
what service levels to use for best-effort traffic is to perform a
layer violation. Continuing this tradition, the bit would be:
less_than_perfect_L2_error_detection = (ip.protocol == IPPROTO_UDPLITE)
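In C, such a check might look like the sketch below (the hook name is
invented for illustration; IPPROTO_UDPLITE is the IANA-assigned
protocol number 136, defined here in case the system headers lack it):

  #include <netinet/in.h>   /* IPPROTO_* */
  #include <netinet/ip.h>   /* struct iphdr (Linux-style) */
  #include <stdbool.h>

  #ifndef IPPROTO_UDPLITE
  #define IPPROTO_UDPLITE 136   /* IANA protocol number for UDP-lite */
  #endif

  /* Hypothetical hook in an L3-L2 mapping: peek at the IP protocol
   * field and allow relaxed L2 error detection only for UDP-lite. */
  static bool less_than_perfect_l2_error_detection(const struct iphdr *ip)
  {
      return ip->protocol == IPPROTO_UDPLITE;
  }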
Additionally, the L3-L2 mapping can layer-violate up into the UDP-lite
header and extract the information about unequal error protection it
may need, i.e., apply more protection to the data covered by the
UDP-lite checksum than to the uncovered data. Of course, the same
thing can be done for unequal error detection (apply good L2 error
detection only to the part covered by the UDP-lite checksum).
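A rough sketch of that extraction, again in C (the struct mirrors the
UDP-lite header -- ports, a checksum coverage field in place of UDP's
length field, checksum -- and the helper name is invented; an L2
mapping could apply strong protection/detection to the returned prefix
and relaxed treatment to the rest):

  #include <arpa/inet.h>   /* ntohs */
  #include <stddef.h>
  #include <stdint.h>

  /* UDP-lite header: like UDP, but the length field is reinterpreted
   * as "checksum coverage". */
  struct udplite_hdr {
      uint16_t src_port;
      uint16_t dst_port;
      uint16_t coverage;   /* bytes covered by the checksum, counted from
                              the start of this header; 0 = whole datagram */
      uint16_t checksum;
  };

  /* Hypothetical helper: how many bytes at the front of the datagram
   * deserve strong L2 error detection/protection?  datagram_len is the
   * full UDP-lite datagram length, header included. */
  static size_t l2_strong_protection_bytes(const struct udplite_hdr *uh,
                                           size_t datagram_len)
  {
      uint16_t cov = ntohs(uh->coverage);
      if (cov == 0)
          return datagram_len;   /* 0 means the whole datagram is covered */
      if (cov < sizeof *uh || cov > datagram_len)
          return datagram_len;   /* illegal coverage: play it safe */
      return cov;                /* strong treatment only for this prefix */
  }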
Ugly? Yes.
Unfortunately, the alternative is to make L3-L2 mappings first-class
citizens in the IP architecture and to give applications a way to
supply information that could help or influence this mapping. As RSVP
has demonstrated, this is hairy stuff (even ignoring multicast).
- are there unintended consequences of doing this that we can foresee?
Some issues raised at the plenary were:
-- the old NFS-on-UDP-without-checksums story (silently corrupted
files). Folks using UDP-lite for non-error-tolerant data get what
they deserve.
-- a general feeling that nobody knows what will blow up when L2s do
less error detection. "Allowing" this only for clearly defined cases
(like ip.protocol == IPPROTO_UDPLITE) might help to allay these
well-founded fears.
Greetings, Carsten
PS: those who don't know what all this is good for might want to read
up on voice codecs like GSM 06.10 that have data fields for excitation
energy -- generally, it is better for a slightly corrupted packet with
fresh values for these fields to arrive than to have to error-conceal
an erasure.
Of course, once this starts to get implemented, there are other
possible applications, e.g. application-layer FEC.