Begin forwarded message:
From: Srinivasan Keshav <keshav@uwaterloo.ca>
Subject: [e2e] Why do we need congestion control?
Date: March 5, 2013 15:04:48 GMT+01:00
To: <end2end-interest@postel.org>
To answer this question, I put together some slides for a presentation at the
IRTF ICCRG Workshop in 2007 [1]. In a nutshell, to save costs, we always size
a shared resource (such as a link or a router) smaller than the sum of peak
demands. This can result in transient or persistent overloads, reducing
user-perceived performance. Transient overloads are easily relieved by a
buffer, but persistent overload requires reducing source loads, which is the
role of congestion control. Without congestion control, or worse, with an
inappropriate response to a performance problem (such as increasing the
load), shared network resources stay persistently overloaded, leading to
delays, losses, and eventually congestion collapse, a state in which every
packet sent is a retransmission and no source makes progress. A more detailed
description can
also be found in chapter 1 of my PhD thesis [2].
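To make the transient-versus-persistent distinction concrete, here is a minimal
fluid-model sketch (my own illustration, not taken from the slides or thesis):
a queue served at capacity C absorbs a brief burst above C and drains back to
empty, while an offered load that stays above C grows the backlog without
bound.

```python
# Illustrative fluid model of a bottleneck queue (hypothetical example).
# Each time step, 'load' units arrive and up to 'capacity' units are served.

def simulate(arrivals, capacity):
    """Return the queue backlog after each time step."""
    backlog, trace = 0.0, []
    for load in arrivals:
        backlog = max(0.0, backlog + load - capacity)
        trace.append(backlog)
    return trace

C = 10.0
transient  = [15.0] * 3 + [5.0] * 7   # brief burst, then under capacity
persistent = [15.0] * 10              # offered load stays above capacity

print(simulate(transient, C)[-1])     # buffer absorbs the burst: backlog -> 0.0
print(simulate(persistent, C)[-1])    # backlog grows every step: 50.0 here
```

The first trace is the case a buffer handles; the second is the case only a
reduction in source load (i.e., congestion control) can fix, since no finite
buffer keeps up with a persistently overloaded link.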
Incidentally, the distributed optimization approach that Jon mentioned is
described beautifully in [3].
hope this helps,
keshav
[1] Congestion and Congestion Control, presentation at the IRTF ICCRG
Workshop, PFLDnet 2007, Los Angeles, California, USA, February 2007.
http://blizzard.cs.uwaterloo.ca/keshav/home/Papers/data/07/congestion.pdf
[2] S. Keshav, Congestion Control in Computer Networks, PhD thesis, published
as UC Berkeley TR-654, September 1991.
http://blizzard.cs.uwaterloo.ca/keshav/home/Papers/data/91/thesis/ch1.pdf
[3] Palomar, Daniel P., and Mung Chiang. "A tutorial on decomposition methods
for network utility maximization." IEEE Journal on Selected Areas in
Communications 24.8 (2006): 1439-1451.
http://www.princeton.edu/~chiangm/decomptutorial.pdf