
Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

2014-01-23 18:11:29
Hi, Alia,

On 1/23/2014 4:07 PM, Alia Atlas wrote:
I don't want to get in the way of vehement discussion, but I thought we
were on the verge of finding an actual solution...

IMHO, that was a combination of an applicability statement, using SHOULD
for congestion control and checksum, and defining a longer-term
OAM-based approach (as Stewart Bryant suggested) to be able to verify
that packet corruption or excessive drops aren't happening.

Does that sound like an acceptable set?

It answers my concerns (I was concerned about the checksum issue too, though that discussion seems to have petered out). I wasn't tracking whether other issues were raised that this doesn't address, though.

Joe

Alia


On Thu, Jan 23, 2014 at 6:56 PM, Joe Touch <touch(_at_)isi(_dot_)edu
<mailto:touch(_at_)isi(_dot_)edu>> wrote:



    On 1/23/2014 3:32 PM, Edward Crabbe wrote:

        Joe, thanks for your response. Comments inline:


             On 1/23/2014 1:27 PM, Edward Crabbe wrote:

                 Part of the point of using UDP is to make use of
                 lowest common denominator forwarding hardware in
                 introducing entropy to protocols that lack it (this is
                 particularly true of the GRE in UDP use case also
                 under discussion elsewhere).
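
The "entropy" discussed above normally rides in the outer UDP source
port, which existing ECMP hardware already hashes on. Below is a minimal
sketch of that idea, assuming a head-end that hashes the top of the
inner label stack into the dynamic port range; the destination port
value and the helper names are illustrative assumptions, not taken from
the draft.

# Illustrative only: one way a tunnel head-end can supply ECMP entropy by
# hashing the inner MPLS label stack into the outer UDP source port.
# The destination port and helper names are assumptions, not from the draft.
import hashlib
import socket
import struct

MPLS_IN_UDP_DPORT = 6635                  # assumed tunnel destination port
DYN_PORT_LO, DYN_PORT_HI = 49152, 65535   # dynamic/ephemeral port range

def entropy_source_port(inner_flow_key: bytes) -> int:
    """Map a hash of the inner flow (here, the top of the MPLS label
    stack) into the dynamic port range, so routers that hash the outer
    UDP 5-tuple spread tunneled flows across equal-cost paths."""
    digest = hashlib.sha256(inner_flow_key).digest()
    span = DYN_PORT_HI - DYN_PORT_LO + 1
    return DYN_PORT_LO + struct.unpack("!H", digest[:2])[0] % span

def encapsulate(mpls_packet: bytes, tunnel_dst: str) -> None:
    """Send one MPLS packet inside UDP, using the label stack as entropy."""
    sport = entropy_source_port(mpls_packet[:4])   # top label stack entry
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", sport))
        sock.sendto(mpls_packet, (tunnel_dst, MPLS_IN_UDP_DPORT))

Because the entropy lives in a field routers already inspect, no new
forwarding hardware is needed - which is the "lowest common denominator"
point being made above.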

                 The tunnel is not the source of the traffic.  The
                 _source of the traffic_ is the source of the traffic.


             To the Internet, the tunnel encapsulator is the source of
             traffic. Tracing the data back further than that is a
             mirage at best - and irrelevant.


        The 'internet' cares about characteristics of reactivity to
        congestion. This is guaranteed by the /source of the traffic/
        independent of any intermediate node.


    Are you prepared to make that a requirement of this document, i.e.,
    that the only MPLS traffic that can be UDP encapsulated is known to
    react to congestion?

    How exactly can you know that?


             The tunnel head-end is responsible for the tunnel walking,
             talking, and quaking like a duck (host). When the tunnel
             head-end knows something about the ultimate origin of the
             traffic - whether real, imagined, or from Asgard - then it
             has done its duty (e.g., that it's already congestion
             controlled).

             But that head end is responsible, regardless of what it
             knows or doesn't. And when it doesn't know, the only way to
             be responsible is to put in its own reactivity.

        This is not fact; it's actually precisely the principle we're
        currently arguing about. ;)


    Actually, it's a paraphrasing of Section 3.1.3 of RFC5405.

    We can continue to debate it, but until it's been *changed* by a
    revision, it remains BCP.


        I would posit:

        The tunnel doesn't have to know anything about congestion or
        performance characteristics because the originating application
        must.


    That works only if you know that fact about the originating
    application. However, there are plenty of applications whose traffic
    goes over MPLS and isn't congestion reactive or bandwidth-limited.


        See GRE, MPLS, many other tunnel types,


    This isn't an issue for all tunnels until they enter the Internet...


        including several existing within the
        IETF that make use of an outer UDP header.


    Which are all already supposed to follow the recommendations of
    RFC5405. To the extent that they don't, they don't agree with that BCP.

    I'm not saying such things never can or will exist, but I don't
    think the IETF should be self-contradictory. We already agreed as a
    group on such BCPs and other standards, and new standards-track docs
    need to follow them.


                 The originating application whose traffic is being
                 tunneled should be responsible for congestion control,
                 or lack thereof.

             Perhaps it should be, but that's an agreement between
             whoever implements/deploys the tunnel headend and whoever
             provides the originating traffic to them. The problem is
             that this isn't true for the typical use case for this
             kind of encapsulation.

        How so?  As mentioned before, this is the same case as standard
        GRE/MPLS etc.


    It's putting MPLS inside UDP. That's a different case, and the
    reason RFC5405 applies.


             I.e., if we were talking about MPLS traffic that already was
             reactive, we wouldn't be claiming the need for additional
             encapsulator mechanism. It's precisely because nothing is known
             about the MPLS traffic that the encapsulator needs to act.

        The MPLS traffic doesn't have to be reactive; it's the
        applications being encapsulated / traversing a particular
        tunnel that are responsible for and aware of path and
        congestion characteristics. Because the MPLS head end knows
        nothing about the /end to end application 'session'/
        characteristics, it /shouldn't/ have anything to do with
        congestion management.


    OK, so what you're saying is that "traffic using this encapsulation
    MUST be known to be congestion reactive". Put that in the doc and
    we'll debate whether we believe it.

    But right now you're basically saying that because you think it's
    someone else's problem (the originating application), it isn't
    yours. The difficulty with that logic is that you (the tunnel
    headend) are responsible for ensuring that this is true - either by
    *knowing* that the originating traffic is congestion reactive, or by
    putting in your own mechanism to ensure that this happens if the
    originating application isn't.


                 Are we advocating a return to intermediate congestion
                 control (I like X.25 as much as the next guy, but...)?
                 This is a very stark change of direction.

                 I think mandating congestion control is not technically
                 sound from either a theoretical perspective (violation
                 of the end to end principle, stacking of congestion
                 control algorithms leading to complex and potentially
                 suboptimal results) or an economic one (as a very large
                 backbone, we've been doing just fine without
                 intermediate congestion management, thank you very
                 much, and I have 0 desire to pay for a cost
                 prohibitive, unnecessary feature in silicon).

             Write that up, and we'll see how it turns out in the IETF.
             However, right now, the IETF BCPs do require reactive
             congestion management of transport streams.

        Which part?  The end-to-end principle, or the aversion to
        congestion control stacking?  These have been implicit in all
        tunneling protocols produced by the IETF for the modern
        internet.


    Sure, and that's reflected in RFC5405 already. However, please,
    PLEASE appreciate that NOBODY here is asking you to put in
    "congestion control stacking"; that happens when you run two
    dynamic, reactive control algorithms using the same timescale on top
    of each other.

    Equally well-known in CC circles is that you CAN - and often
    *should* - stack different kinds of mechanisms at different layers
    with different timescales. E.g., that's why we have an AQM WG -
    because even when all the traffic is TCP, that's not quite enough
    inside the network. That's also why Lars was suggesting something
    coarse on a longer timescale - a circuit breaker - rather than AIMD
    on an RTT basis.
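
A rough sketch of the kind of coarse, long-timescale circuit breaker
being contrasted with per-RTT AIMD here, assuming the head-end can
learn a delivered-packet count from the tunnel egress (e.g. via some
OAM exchange); the counters, thresholds, and method names are invented
for illustration.

# Rough sketch of a tunnel-level circuit breaker: measure loss across the
# tunnel over long intervals and shut the tunnel down (or rate-limit it)
# only if loss stays excessive.  All names and numbers are illustrative.
import time

CHECK_INTERVAL_S = 60        # one minute per check - far coarser than an RTT
LOSS_THRESHOLD = 0.10        # consider an interval "bad" above 10% loss
TRIP_AFTER_N_INTERVALS = 3   # require sustained loss before tripping

class TunnelCircuitBreaker:
    def __init__(self, tunnel):
        self.tunnel = tunnel           # assumed object exposing counters
        self.bad_intervals = 0
        self.last_sent = 0
        self.last_delivered = 0

    def check_once(self) -> None:
        sent = self.tunnel.packets_sent()            # local ingress counter
        delivered = self.tunnel.packets_delivered()  # reported by egress (OAM)
        d_sent = sent - self.last_sent
        d_delivered = delivered - self.last_delivered
        self.last_sent, self.last_delivered = sent, delivered
        if d_sent == 0:
            return
        loss = 1.0 - (d_delivered / d_sent)
        self.bad_intervals = self.bad_intervals + 1 if loss > LOSS_THRESHOLD else 0
        if self.bad_intervals >= TRIP_AFTER_N_INTERVALS:
            self.tunnel.shut_down()    # the coarse reaction: stop adding load

    def run(self) -> None:
        while True:
            time.sleep(CHECK_INTERVAL_S)
            self.check_once()

Nothing in it runs per RTT or touches the inner traffic's own congestion
control; it only sheds load when loss stays high for minutes, which is
why it doesn't amount to "congestion control stacking".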

    Keep in mind as well that the E2E argument says that you can't get
    an E2E service by composing the equivalent HBH one; it also says
    that HBH mechanisms can be required for efficiency. That's what
    we're talking about here - the efficiency impact of congestion, not
    the overall correctness of E2E control.


             If you don't want/like that, then either don't use transport
             encapsulation, or change the BCPs.

        These BCPs are defined for an originating /application/.


    Yes, and I don't understand why you (and others) keep thinking it
    matters that there are layers of things behind the tunnel head end.
    It doesn't - unless you KNOW what those layers are, and can ensure
    that they behave as you expect.


        In this case the UDP header is simply a shim header applied to
        existing application traffic.


    It's not "simply a shim" - if that's the case, use IP and we're
    done. No need for congestion control.

    The reason congestion issues arise is because you're inserting a
    header ****THAT YOU EXPECT PARTS OF THE INTERNET YOU TRAVERSE TO
    REACT TO****.

    If you put in a UDP-like header that nobody in the Internet would
    interpret, this wouldn't be an issue.

    But you simply cannot expect the Internet to treat you like
    "application" traffic if you won't enforce acting like that traffic too.


        The tunnel head does not introduce traffic independent of the
        originating application.


    The Internet ****neither knows nor cares****.

    To the Internet, the head-end is the source. Whatever data the head
    end puts inside the UDP packets *is application data* to the rest of
    the Internet.

    Again, if you are saying that you know so much about the originating
    source that you know you don't need additional mechanism at the
    headend, say so - but then live by that requirement.

    If *any* MPLS traffic could show up at the headend, then it becomes
    the headend's responsibility to do something.

    ---

    Consider the following case:

             - video shows up inside the OS, destined for the network

             - software X bundles that video and sends it to go out

             - software Y puts that data into UDP packets to go
             to the Internet

    So what's the "application" here? To the Internet, it's software Y
    -- the thing that puts the 'application' data into UDP packets. The
    previous steps are irrelevant - just as irrelevant as the singer
    your video camera is filming, as irrelevant as the sun that created
    the light that is reflected off the singer to your camera.

    If software Y knows so much about the steps that lead to its input
    data that it knows it's congestion reactive, nothing more need be done.

    If NOT (and that's the relevant corollary here), then it becomes
    software Y's responsibility to put in some reactivity.

    Joe


