
Re: Naive question on multiple TCP/IP channels

2015-02-04 14:47:35
On Wed, Feb 4, 2015 at 2:49 PM, Eggert, Lars <lars@netapp.com> wrote:

Hi,

CC'ing tsvwg, which would be a better venue for this discussion.

On 2015-2-4, at 20:22, Phillip Hallam-Baker <phill@hallambaker.com> wrote:

Today most Web browsers attempt to optimize the download of images etc. by
opening multiple TCP/IP streams at the same time. This is actually done for
two reasons: first, to reduce load times, and second, to allow the browser to
optimize page layout by getting image sizes etc. up front.
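
To make that concrete, the pattern is roughly this (a sketch in Python, not
any browser's actual code; the URLs and the five-worker pool are just
placeholders):

# Each worker opens its own TCP connection to the same server, so the
# fetches proceed in parallel rather than one after another.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

urls = [
    "http://www.example.com/img/a.png",
    "http://www.example.com/img/b.png",
    "http://www.example.com/img/c.png",
]

def fetch(url):
    # urllib opens a separate TCP connection for each request here
    with urlopen(url) as resp:
        return url, len(resp.read())

with ThreadPoolExecutor(max_workers=5) as pool:
    for url, size in pool.map(fetch, urls):
        print(url, size, "bytes")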

This approach first appeared around 1994. I am not sure whether anyone
actually did a study to see if multiple TCP/IP streams are faster than one,
but the approach has certainly stuck.

There have been many studies; for example,
http://www.aqualab.cs.northwestern.edu/publications/106-modeling-and-taming-parallel-tcp-on-the-wide-area-network


thanks, looking at it now,



But looking at the problem from the perspective of the network, it is
really hard to see why setting up five TCP/IP streams between the same
endpoints should provide any more bandwidth than one. If the narrow waist
is observed, then the only parts of the Internet that are taking note of
the TCP part of the packet are the endpoints. So having five streams
should not provide any more bandwidth than one unless the bandwidth
bottleneck is at one endpoint or the other.

You don't get more bandwidth in steady state (well, with old Reno stacks,
you got a little more, but not much). The real win is in getting more
bandwidth during the first few RTTs of TCP slow start, which is the crucial
phase when transmitting short web objects.


Ah yes, of course, there is a built-in bandwidth throttle in the stack, and
it is per TCP connection...

So basically we are looking at an end run around TCP slow start, which works
because the network stack's congestion algorithm throttles bandwidth per TCP
connection and not per process.
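
To put rough numbers on it, here is a back-of-envelope sketch in Python
(assuming classic slow start with an initial window of one segment, doubling
each RTT, no losses, 1460-byte segments; the 60 KB object size is made up):

import math

def rtts_to_send(total_segments, initial_window=1):
    # RTTs needed to deliver total_segments under ideal slow start
    sent, cwnd, rtts = 0, initial_window, 0
    while sent < total_segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

obj_bytes = 60 * 1024                   # a 60 KB web object
segments = math.ceil(obj_bytes / 1460)  # 43 segments

# One connection carries everything; five connections each carry a fifth.
print("1 connection :", rtts_to_send(segments))                # 6 RTTs
print("5 connections:", rtts_to_send(math.ceil(segments / 5))) # 4 RTTs

So the five-way split saves a couple of RTTs on an object of that size.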

I know that traditionally we have considered congestion control to be a
per-connection thing rather than per process or per user. Those don't exactly
exist in the Internet model. But if we consider DDoS to be an extreme form
of congestion, we have to look at the O/S implementing broader rate
limiting anyway.
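
Purely as a sketch of what that broader limit might look like (no real stack
exposes anything like this interface, and the rate numbers are made up): a
token bucket keyed per user and destination host, which every connection
between that pair has to draw on.

import time
from collections import defaultdict

RATE_BYTES_PER_SEC = 1_000_000   # illustrative budget per (user, host) pair
BURST_BYTES = 100_000

class HostBudget:
    def __init__(self):
        self.tokens = BURST_BYTES
        self.last = time.monotonic()

    def allow(self, nbytes):
        # Refill the bucket for the time elapsed, then try to spend from it.
        now = time.monotonic()
        self.tokens = min(BURST_BYTES,
                          self.tokens + (now - self.last) * RATE_BYTES_PER_SEC)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

budgets = defaultdict(HostBudget)   # keyed by (uid, destination host)

def may_send(uid, host, nbytes):
    # Every connection from this user to this host draws on the same bucket,
    # so opening five connections buys no extra allowance.
    return budgets[(uid, host)].allow(nbytes)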


The reason it makes a difference is that it is becoming clear that modern
applications are not best served by an application API that is limited to
one bi-directional stream. There are two possible ways to fix this
situation. The first is to build something on top of TCP/IP; the second is
to replace single-stream TCP with a multi-stream transport.

SCTP has what you call multiple streams in your second option, and is
designed the same way.


Yes, I know we have proposals for both approaches. I am trying to see if
there is a case for one over the other.

If the case for multiple streams is better performance based on friendlier
slow-start parameters, maybe these should be allowed without the end run.
If the net is surviving with people starting five streams instead of one,
maybe slow start could begin at five packets per destination host instead
of one per connection. It would be a little more code to implement, but
speed is hardly an issue in code whose purpose is throttling anyway.
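
As a sketch of what that might mean (purely illustrative, not how any current
stack behaves): connections to the same host share one initial-window budget
instead of each getting its own.

from collections import defaultdict

HOST_INITIAL_BUDGET = 5     # segments shared by all connections to one host
host_budget = defaultdict(lambda: HOST_INITIAL_BUDGET)

def initial_cwnd_for(host):
    # Hand a new connection whatever is left of the host's shared budget,
    # but never less than one segment.
    grant = max(1, host_budget[host])
    host_budget[host] = max(0, host_budget[host] - grant)
    return grant

print(initial_cwnd_for("www.example.com"))  # first connection starts at 5
print(initial_cwnd_for("www.example.com"))  # the next one starts at 1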


Making use of SCTP means that I have to rely on an O/S implementation, which
is a really long wait. So a pragmatic approach is always going to mean a
TCP+Multiplex option. If I am going to need that anyway, SCTP has to
deliver a benefit other than simplifying my code.
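
For what it is worth, the multiplex layer itself is not much code. A toy
framing sketch (the 4-byte header layout here is made up for illustration):

import struct

HEADER = struct.Struct("!HH")   # stream id (16 bits) + payload length (16 bits)

def frame(stream_id, payload):
    # Wrap a chunk so the receiver knows which logical stream it belongs to.
    return HEADER.pack(stream_id, len(payload)) + payload

def deframe(buf):
    # Yield (stream_id, payload) pairs from a run of concatenated frames.
    while len(buf) >= HEADER.size:
        stream_id, length = HEADER.unpack_from(buf)
        if len(buf) < HEADER.size + length:
            break               # partial frame: wait for more bytes from TCP
        yield stream_id, buf[HEADER.size:HEADER.size + length]
        buf = buf[HEADER.size + length:]

# Two logical streams interleaved over what would be one TCP connection.
wire = frame(1, b"GET /a.png") + frame(2, b"GET /b.png")
for sid, data in deframe(wire):
    print(sid, data)

The part a toy like this ignores is head-of-line blocking: one lost TCP
segment stalls every multiplexed stream, which is part of what SCTP's
independent streams are meant to avoid.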