2008/3/9 Hallam-Baker, Phillip <pbaker(_at_)verisign(_dot_)com>:
It's a bootstrap problem, though. You have to establish the market conditions
that favor multicast deployment.
True, which is why I believe the IETF stopped short of producing a
complete set of tools to overcome the end-to-end dependency of
multicast services.
The problem with Internet Multicast is that it's an all-or-nothing
delivery solution - every router in the path must support AND have PIM
enabled or you don't get the bits. And there are too many parties
involved to coordinate deployment globally (not unlike IPv6). If IGMP
had been designed
as a multi-hop protocol from the beginning, early adopters would have
been able to benefit from multicast content delivery to the edge of
their domain, or their multicast provider's domain. But beyond that
there are just too many edge networks to coordinate or incent to
deploy multicast for someone else's content.
Today we have AMT as a way to get multicast content from the edge of
the multicast boundary over the unicast edge network to the receivers.
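To make the single-hop nature of IGMP concrete, here is a minimal receiver sketch in Python. The group address and port are invented for illustration (239.0.0.0/8 is the administratively scoped multicast range); this is a sketch of the standard socket-level join, not any particular deployment.

```python
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical site-local multicast group
PORT = 5004           # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq: 4-byte group address + 4-byte local interface (INADDR_ANY
# lets the kernel pick an interface).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))

# This join makes the kernel emit an IGMP Membership Report -- but that
# report only travels one hop, to the first router. Every router beyond
# it must be running PIM, or the traffic never reaches this host.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# data, sender = sock.recvfrom(1500)  # would block until group traffic arrives
sock.close()
```

The join itself is purely local-segment signaling, which is exactly the "multi-hop IGMP" gap described above: nothing in this exchange can recruit routers further upstream.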
I've often heard people say that the future is VOD/P2P and multicast
has missed its opportunity. Seems to me Oprah may have a different
opinion. Do you think Oprah would add
draft-ietf-mboned-auto-multicast-08.txt as recommended reading to her
book club? ;-)
There will always be a high-demand live/time-critical event which
would draw a large real-time audience if we could only deliver it to them.
Greg
They have not been right for general adoption to date. But as you point out
we can reach a state where the incentives are right for the isps.
-----Original Message-----
From: Patrik Fältström [mailto:patrik(_at_)frobbit(_dot_)se]
Sent: Sunday, March 09, 2008 06:49 AM Pacific Standard Time
To: Hallam-Baker, Phillip
Cc: Jeroen Massar; IETF discussion list
Subject: Re: Was it foreseen that the Internet might handle 242 Gbps
of traffic for Oprah's Book Club webinars?
On 8 mar 2008, at 21.18, Hallam-Baker, Phillip wrote:
> That's not how people tend to view Web video, there might be 50% of
> the crowd watching it as Oprah speaks but the rest are likely to be
> time shifted from a few secs to hours or even days.
As others have said, this time 500k people (out of 750k) really wanted
to view this at the same time...
> An application layer architecture with built-in caching would be
> more efficient.
I completely agree with this. Caching/storage, etc., of video and other
distributed data have over the years moved closer and closer to the
end user (the consumer of the data).
At first, we only had direct transmission of TV. Directly from the TV
camera to the viewer. Then we had caching in the form of tapes at the
TV channel. Now we have DVRs in people's homes, which implies even TV
series that are produced long in advance could be distributed
beforehand and cached on people's DVRs (encrypted -- so at the time of
the "live" transmission, the data is unlocked). How many things/hours
are actually live, *really* live?
And this "pre-distribution" could be done over IP-multicast.
Patrik
_______________________________________________
IETF mailing list
IETF(_at_)ietf(_dot_)org
https://www.ietf.org/mailman/listinfo/ietf