
Re: Naive question on multiple TCP/IP channels

2015-02-05 10:31:05
On Thu, Feb 5, 2015 at 9:30 AM, Phillip Hallam-Baker <phill@hallambaker.com> wrote:



On Wed, Feb 4, 2015 at 6:54 PM, Brian E Carpenter <brian.e.carpenter@gmail.com> wrote:

On 05/02/2015 08:49, Eggert, Lars wrote:
Hi,

CC'ing tsvwg, which would be a better venue for this discussion.

On 2015-2-4, at 20:22, Phillip Hallam-Baker <phill@hallambaker.com> wrote:

Today most Web browsers attempt to optimize the download of images etc. by
opening multiple TCP/IP streams at the same time. This is actually done for
two reasons: first, to reduce load times, and second, to allow the browser to
optimize page layout by getting image sizes etc. up front.

This approach first appeared around 1994. I am not sure whether anyone
actually did a study to see if multiple TCP/IP streams are faster than one,
but the approach has certainly stuck.
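
For concreteness, the trick amounts to something like this sketch (the URLs
and connection count are illustrative placeholders; real browsers also cap
connections per host):

    # Fetch several page resources over parallel connections instead
    # of one at a time. URLs here are placeholders.
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    urls = ["http://example.com/img%d.png" % i for i in range(6)]

    def fetch(url):
        with urlopen(url) as resp:
            return url, len(resp.read())

    # Six workers ~= six simultaneous TCP streams to the server.
    with ThreadPoolExecutor(max_workers=6) as pool:
        for url, size in pool.map(fetch, urls):
            print(url, size, "bytes")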

There have been many studies; for example,
http://www.aqualab.cs.northwestern.edu/publications/106-modeling-and-taming-parallel-tcp-on-the-wide-area-network

GridFTP only exists because of lots of experience that several parallel FTP
streams achieve better throughput than a single stream, especially on paths
with a high bandwidth-delay product. I'm guessing that since buffer bloat
creates an artificially high BDP, that could apply pretty much anywhere.
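
To put a number on the BDP point (figures are illustrative, not
measurements):

    # Bandwidth-delay product: the bytes that must be in flight to
    # keep a path full. Illustrative: 1 Gbit/s path, 80 ms RTT.
    bandwidth_bps = 1e9                 # bits/second
    rtt_s = 0.080                       # seconds

    bdp = bandwidth_bps * rtt_s / 8     # bytes
    print(bdp / 1e6, "MB")              # -> 10.0 MB

    # One TCP flow needs a ~10 MB window to fill this path; with N
    # parallel streams each only needs BDP/N, which is one reason the
    # parallel-stream approach helps on high-BDP paths.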

SCTP is not the only one-acronym answer: try MPTCP. The interesting thing
there is that because there is explicit coupling between the streams, the
throughput increases sub-linearly with the number of streams.
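
A simplified sketch of that coupling, following the increase rule of
RFC 6356 ("LIA"); windows are in packets and this is one increment per ACK
on subflow i, not a full implementation:

    # MPTCP coupled increase (RFC 6356 "LIA"), simplified to packet
    # units. alpha throttles the aggregate so the subflows don't grab
    # n times a single TCP's share.
    def lia_alpha(windows, rtts):
        total = sum(windows)
        best = max(w / r**2 for w, r in zip(windows, rtts))
        return total * best / sum(w / r for w, r in zip(windows, rtts))**2

    def per_ack_increase(i, windows, rtts):
        # Coupled term caps aggregate growth; the 1/w_i term means a
        # subflow is never more aggressive than a plain TCP would be.
        return min(lia_alpha(windows, rtts) / sum(windows), 1.0 / windows[i])

    # Two identical subflows, cwnd 10 packets each, 50 ms RTT:
    print(per_ack_increase(0, [10.0, 10.0], [0.05, 0.05]))   # 0.025
    # versus 0.1 (1/cwnd) for an uncoupled TCP: per-stream growth is
    # throttled, so aggregate throughput rises sub-linearly.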


Buffer bloat is a great example of the consequences of the co-operative
nature of the Internet for one of the audiences I am writing for (folk
trying to make/understand Title II regulations).

The Internet is actually a demonstration that the commons are not such a
tragedy after all.


If we look at the problem of buffer bloat, there are several possible
solutions, and the problem is picking one:

* Persuade manufacturers to reduce buffer sizes so the old congestion
algorithms work.

* Change the congestion algorithm.

* Break with the pure end-to-end principle and have the middleboxen with
the huge buffers do some reporting back when they start to fill.


The first is difficult unless you get control of the benchmarking suites
that are going to be used to measure performance, which would probably mean
getting the likes of nVidia or Steam or some of the gaming companies on board.


The missing metric is "latency under load". Right now, we only measure bps.
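
That metric is easy to approximate: time small probes while a bulk transfer
keeps the bottleneck queue full. A rough sketch (hosts and ports are
placeholders):

    # Measure "latency under load": RTTs of small probes while a bulk
    # upload saturates the uplink. On a bloated link these RTTs climb
    # from tens of milliseconds toward seconds.
    import socket, threading, time

    def bulk_upload(host, port):
        s = socket.create_connection((host, port))
        chunk = b"x" * 65536
        while True:
            s.sendall(chunk)            # keep the queue full

    def handshake_rtt_ms(host, port):
        t0 = time.monotonic()
        socket.create_connection((host, port), timeout=5).close()
        return (time.monotonic() - t0) * 1000

    threading.Thread(target=bulk_upload, args=("example.net", 5001),
                     daemon=True).start()
    for _ in range(10):
        print("RTT under load: %.1f ms" % handshake_rtt_ms("example.net", 80))
        time.sleep(1)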



The second is certainly possible. There is no reason that we all have to use
the same congestion algorithm; in fact, a mixture might be beneficial.
Instead of looking for packets being dropped, the sender could look at the
latency and the difference between the rate at which packets are being sent
and the rate at which acknowledgements are being received.
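
One way to read that suggestion, as a sketch: keep short-window estimates of
both rates and treat a widening gap as the congestion signal. (The class,
names, and the 0.9 threshold below are made up for illustration.)

    # Delay/rate-based congestion signal: if the rate at which data is
    # acknowledged falls behind the rate at which it is sent, a queue
    # is building on the path, and we can slow down before any drop.
    class RateGap:
        def __init__(self):
            self.sent = []     # (timestamp, bytes) samples
            self.acked = []

        def on_send(self, now, nbytes): self.sent.append((now, nbytes))
        def on_ack(self, now, nbytes):  self.acked.append((now, nbytes))

        def _rate(self, samples, now, window=1.0):
            return sum(b for t, b in samples if now - t <= window) / window

        def queue_building(self, now):
            send = self._rate(self.sent, now)
            ack = self._rate(self.acked, now)
            return send > 0 and ack < 0.9 * send   # threshold illustrative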


Doesn't matter. TCP congestion control has effectively been defeated by
web browsers/servers. Most of the data is transmitted in the initial windows
(IWs) of n connections, where n is large. The incentives are such that
applications can/will abuse the network and not be "cooperative". I expect
there will continue to be such applications even if we "fix" the web to be
better behaved; incentives are not aligned properly for cooperation.

So you can argue all you want about your favorite congestion control
algorithm, and it won't solve this fundamental issue.
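
To put numbers on that (illustrative: a sharded page, IW10 per RFC 6928,
1460-byte MSS):

    # Bytes a page can blast into the network in the first RTT, before
    # any congestion feedback exists. Figures are illustrative.
    n_connections = 30     # parallel connections a sharded page opens
    iw = 10                # initial window, segments (RFC 6928)
    mss = 1460             # bytes per segment

    print(n_connections * iw * mss)   # 438000 bytes in one blind burst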



Whether folk are willing to consider the third depends on how serious the
buffer bloat problem is considered to be. If the stability of the Internet
really is at stake, then everything should be on the table. But my gut
instinct is that a middlebox does not actually have any more useful
information available to it than can be derived from the acknowledgements.
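
For what it's worth, the nearest deployed relative of that third option is
ECN (RFC 3168): the queue marks packets rather than dropping them, and the
sender treats the echoed mark as a congestion signal. A toy sketch of the
idea (threshold and field names are illustrative, not the wire format):

    # ECN-style feedback (cf. RFC 3168), reduced to a toy: mark when
    # the queue is deep, and have the sender react as it would to loss.
    MARK_THRESHOLD = 50                   # packets; illustrative

    def enqueue(queue, pkt):
        if len(queue) > MARK_THRESHOLD:
            pkt["ce"] = True              # Congestion Experienced
        queue.append(pkt)

    def on_ack(state, ack):
        if ack.get("ece"):                # receiver echoed the mark
            state["cwnd"] = max(state["cwnd"] / 2, 1.0)
        else:
            state["cwnd"] += 1.0 / state["cwnd"]   # normal growth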


While you need to run a mark/drop algorithm (self-tuning) to keep long-lived
TCP flows under control, you can't touch the transient problems we have
short of some sort of flow queuing (fair or not...). It's head-of-line
blocking that is killing you.

Ergo fq_codel, combining flow queuing (of a clever nature) with a mark/drop
algorithm. The exact details of the mark/drop algorithm (other than being
self-tuning) are *much* less important than whether flow queuing is
implemented. Once you isolate the flows, life gets much easier.
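
A toy version of the flow-isolation half (not the real fq_codel, which adds
DRR quantums and CoDel's target/interval drop schedule on top):

    # Hash each flow to its own queue and serve queues round-robin, so
    # one bulk flow's backlog can't head-of-line block everyone else.
    from collections import defaultdict, deque

    queues = defaultdict(deque)
    active = deque()                  # round-robin order of busy flows

    def enqueue(pkt):
        flow = hash((pkt["src"], pkt["dst"],
                     pkt["sport"], pkt["dport"])) % 1024
        if not queues[flow]:
            active.append(flow)
        queues[flow].append(pkt)

    def dequeue():
        # A real fq_codel also runs CoDel's mark/drop on each queue.
        if not active:
            return None
        flow = active.popleft()
        pkt = queues[flow].popleft()
        if queues[flow]:
            active.append(flow)       # still busy: back of the line
        return pkt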

So the days of single, huge bloated FIFO queues have to go.

Jim

