
Re: [aqm] Last Call: <draft-ietf-aqm-fq-codel-05.txt> (FlowQueue-Codel) to Experimental RFC

2016-03-28 10:13:46
Jonathan,

It does make sense.
Inline...

On 24/03/16 20:08, Jonathan Morton wrote:
> On 21 Mar, 2016, at 20:04, Bob Briscoe <research@bobbriscoe.net> wrote:

>> The experience that led me to understand this problem was when a bunch of
>> colleagues tried to set up a start-up (a few years ago now) to sell a range
>> of "equitable quality" video codecs (ie constant quality variable bit-rate
>> instead of constant bit-rate variable quality). Then, the first ISP they
>> tried to sell to had WFQ in its broadband remote access servers. Even though
>> this was between users, not flows, when video was the dominant traffic, this
>> overrode the benefits of their cool codecs (which would have delivered twice
>> as many videos with the same quality over the same capacity).
> This result makes no sense.

> You state that the new codecs “would have delivered twice as many videos
> with the same quality over the same capacity”, and video “was the dominant
> traffic”, *and* the network was the bottleneck while running the new codecs.

> The logical conclusion must be either that the network was severely
> under-capacity
Nope. The SVLAN buffer (Service VLAN shared by all users on the same DSLAM) at the Broadband Network Gateway (BNG) became the bottleneck during peak hour, while at other times each user's CVLAN (Customer VLAN) at the BNG was the bottleneck. The proposition was to halve the SVLAN capacity serving the same CVLANs by exploiting the multiplexing gain of equitable quality video... explained below.
> and was *also* the bottleneck, only twice as hard, under the old codecs; or
> that there was insufficient buffering at the video clients to cope with
> temporary shortfalls in link bandwidth;
I think you are imagining that the bit-rate of a constant quality video varies around a constant mean over the timescale that a client buffer can absorb. It doesn't. The guys who developed constant quality video analysed a wide range of commercial videos including feature films, cartoons, documentaries etc, and found that, at whatever timescale you average over, you get a significantly different mean. This is because, to get the same quality, complex passages like a scene in a forest in the wind or splashing water require much higher bit-rate than simpler passages, e.g. a talking head with a fixed background. A passage of roughly the same visual complexity can last for many minutes within one video before moving on to a passage of completely different complexity.

Also, I hope you are aware of earlier research from around 2003 that found that humans judge the quality of a video by the worst quality passages, so there's no point increasing the quality if you can't maintain it and have to degrade it again. That's where the idea of constant quality encoding came from.

The point these researchers made is that the variable bit-rate model of video we have all been taught was derived from the media industry's need to package videos in constant size media (whether DVDs or TV channels). The information rate that the human brain prefers is very different.

A typical (not contrived) example bit-rate trace of constant quality video is on slide 20 of a talk I gave for the ICCRG in May 2009, when I first found out about this research:
<http://www.bobbriscoe.net/presents/0905iccrg-pfldnet/0905iccrg_briscoe.pdf>
As it says, the blue plot is averaged over 3 frames (0.12s) and the red over 192 frames (7.68s). If FQ gave everyone roughly constant bit-rate, you can see that even 7s of client buffer would not be able to absorb the difference between what they wanted and what they were given.
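
To make the timescale point concrete, here is a toy sketch in Python. It is not the trace behind slide 20; the passage rates, passage durations and frame rate below are all invented purely for illustration. It builds a per-frame bit-rate series with multi-minute passages of different complexity, averages it over 3 frames and 192 frames as on the slide, and then asks how much client buffer a constant delivery rate would really need:

    # Toy sketch, not the data from slide 20: all passage rates and durations
    # below are invented for illustration only.
    import random

    FPS = 25                                  # so 3 frames = 0.12 s, 192 = 7.68 s
    PASSAGE_SECONDS = 120                     # assume ~2-minute passages
    PASSAGE_MEANS_MBPS = [0.8, 3.5, 1.2, 5.0, 0.6]   # assumed per-passage means

    random.seed(1)
    frames = []                               # per-frame bit-rate, Mb/s
    for mean in PASSAGE_MEANS_MBPS:
        for _ in range(PASSAGE_SECONDS * FPS):
            frames.append(max(0.1, random.gauss(mean, 0.3 * mean)))

    def moving_avg(xs, w):
        """Simple rolling mean over a window of w samples."""
        s = sum(xs[:w])
        out = [s / w]
        for i in range(w, len(xs)):
            s += xs[i] - xs[i - w]
            out.append(s / w)
        return out

    avg3 = moving_avg(frames, 3)              # the "blue" timescale on the slide
    avg192 = moving_avg(frames, 192)          # the "red" timescale on the slide
    print("0.12 s averages span %.1f to %.1f Mb/s" % (min(avg3), max(avg3)))
    print("7.68 s averages span %.1f to %.1f Mb/s" % (min(avg192), max(avg192)))

    # If the network delivers only the long-run mean rate (roughly what per-user
    # FQ gives when everyone is watching video), the client buffer must hold
    # enough bits to play out the complex passages at full quality.
    service = sum(frames) / len(frames)       # constant delivery rate, Mb/s
    deficit = worst = 0.0
    for r in frames:
        deficit = max(0.0, deficit + (r - service) / FPS)   # Mbit short so far
        worst = max(worst, deficit)
    print("constant delivery rate: %.2f Mb/s" % service)
    print("buffer needed to hold quality: ~%.0f Mbit (~%.0f s of playback)"
          % (worst, worst / service))

Even with these made-up numbers, the buffer needed to ride out a single complex passage comes out in minutes of playback, not the 7 s or so that averaging over 192 frames might suggest.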

Constant quality videos multiplex together nicely in a FIFO. The rest of slide 20 quantifies the multiplexing gain:
* If you keep it strictly constant quality, you get 25% multiplexing gain compared to CBR.
* If all the videos respond to congestion a little (ie when many peaks coincide causing loss or ECN), so they all sacrifice the same proportion of quality (called equitable quality video), you get over 200% multiplexing gain relative to CBR.
That's the x2 gain I quoted originally.
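
For anyone who hasn't come across statistical multiplexing gain in this setting, here's a rough back-of-envelope sketch (Python again; every number is invented and this is not the analysis behind slide 20). The assumed baseline is a dedicated CBR reservation per video, set high enough to hold the target quality through that video's most complex passages; the alternative is the same constant-quality streams sharing one FIFO provisioned for a high percentile of their aggregate rate, on the grounds that their complex passages rarely all coincide:

    # Toy illustration of statistical multiplexing gain; all rates, the number
    # of videos and the percentile are assumptions, not figures from slide 20.
    import random

    random.seed(2)
    N = 100                                    # videos sharing the link (assumed)
    PASSAGE_MBPS = [0.6, 0.8, 1.2, 3.5, 5.0]   # assumed passage rates, Mb/s
    SAMPLES = 10000                            # independent snapshots

    peak = max(PASSAGE_MBPS)
    cbr_capacity = N * peak                    # reserve each video's peak rate

    aggregates = []
    for _ in range(SAMPLES):
        # at any instant each video sits in some passage, independently of others
        aggregates.append(sum(random.choice(PASSAGE_MBPS) for _ in range(N)))
    aggregates.sort()

    fifo_capacity = aggregates[int(0.999 * SAMPLES)]   # 99.9th percentile
    print("per-video CBR reservations: %6.0f Mb/s" % cbr_capacity)
    print("shared FIFO (p99.9)       : %6.0f Mb/s" % fifo_capacity)
    print("multiplexing gain         : x%.1f" % (cbr_capacity / fifo_capacity))

    # If, on the rare occasions the peaks do coincide, every stream shaves the
    # same small fraction of quality (the equitable quality idea), the FIFO can
    # be provisioned to a lower percentile still, increasing the gain further.

The point of the sketch is only the shape of the argument: the gain comes from the rarity of coincident peaks, and from everyone conceding a little quality when they do coincide, which is exactly what per-flow or per-user scheduling prevents the FIFO from exploiting.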

Anyway, even if client buffering did absorb the variations, you wouldn't want to rely on it. Constant quality video ought to be applicable to conversational and interactive video, not just streamed. Then you would want to keep client buffers below a few milliseconds.

> or that demand for videos doubled due to the new codecs providing a
> step-change in the user experience (which feeds back into the network
> capacity conclusion).
Nope, this was a controlled experiment (see below).
> In short, it was not WFQ that caused the problem.
Once they worked out that the problem might be the WFQ in the Broadband Network Gateway, they simulated the network with and without WFQ and proved that WFQ was the problem.

References

The papers below describe Equitable Quality Video, but I'm afraid there is no published write-up of the problems they encountered with FQ - an unfortunate side-effect of the research community's bias against publishing negative results.

Mulroy09 is a more practical way of implementing equitable quality video, while Crabtree09 is the near-perfect strategy precomputed for each video:

[Mulroy09] Mulroy, P., Appleby, S., Nilsson, M. & Crabtree, B., "The Use of MulTCP for the Delivery of Equitable Quality Video," In: Proc. Int'l Packet Video Wkshp (PV'09), IEEE (May 2009)

[Crabtree09] Crabtree, B., Nilsson, M., Mulroy, P. & Appleby, S., "Equitable quality video streaming over DSL," In: Proc. Int'l Packet Video Wkshp (PV'09), IEEE (May 2009)

Either can be accessed from:
http://research.microsoft.com/en-us/um/redmond/events/pv2009/default.aspx


Bob

>  - Jonathan Morton


--
________________________________________________________________
Bob Briscoe                               http://bobbriscoe.net/
