
Design for Deployment (yes you IPv6 wg!) RE: Was it foreseen ..

2008-03-10 07:37:11
To expand on this in response to comments in the meeting:

1. Design for Deployment

Anyone who wants to change the Internet infrastructure needs to consider the 
problem of deployment as their single biggest concern. The Internet now has a 
billion users. Changing the Internet, even in a small way, takes a huge amount 
of time and effort.

Anyone who wants to rely on one of the big platform providers to deploy an 
infrastructure should talk to the IETF participants who work for those 
companies and hear about their difficulties getting their own employers to 
adopt their own proposals. Such decisions are made by product managers, not 
engineers. 

In particular, some people need to stop thinking that anyone other than 
themselves has a duty to evangelize for their work product. You build a product 
for the Internet you have, not the Internet you might want. Getting an RFC 
through the IETF is the beginning of the process, not the end.

I would like to see a deployment statement included in all RFCs and 
architecture statements that require infrastructure changes. This should state 
why the authors think that the parties who are required to make changes have an 
incentive to do so.


2. There has to be a path from A to B

A lot of IETF protocols suffer from the same problem: they work well once they 
are ubiquitously deployed, but there is no practical path from where we are to 
where we want to be. 

Often there are people who consider preserving the architectural purity of the 
design to be more important than developing a path from A to B. For several 
years I have been telling people that we have to embrace NAT if there is to be 
the slightest chance of deploying IPv6. In the plenary we are going to have a 
demonstration of why that is essential: the IPv6-only Internet worketh not 
without at least two NAT conversions being built into the stacks.


3. There has to be an economic incentive to travel the path from A to B.

Ten years on, the multicast world is still failing to make an economic case 
for deployment. I cannot obtain multicast on my home network, and my provider 
has no plans to support it. And even if it did, I would have no reason to 
expect that my local network would support it.

That is one reason I prefer the application-level cache approach: it has no 
infrastructure preconditions and can be deployed on the current Internet 
infrastructure.
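The no-preconditions point can be made concrete with a minimal sketch of such a 
cache: the first request for a video segment is fetched from the origin over 
plain unicast (i.e. the Internet we have), and every later request, live or 
time-shifted, is served locally. The class and names below are illustrative, 
not from any deployed system.

```python
# Minimal sketch of an application-layer edge cache. The only transport it
# assumes is ordinary unicast fetch from the origin -- no multicast, no
# infrastructure changes. All names here are hypothetical.

class EdgeCache:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin   # callable: segment_id -> bytes
        self._store = {}                  # segment_id -> cached bytes
        self.origin_fetches = 0           # upstream traffic actually generated

    def get(self, segment_id):
        # Serve from the local store if possible; otherwise fetch once
        # from the origin and remember the result.
        if segment_id not in self._store:
            self._store[segment_id] = self._fetch(segment_id)
            self.origin_fetches += 1
        return self._store[segment_id]

# Ten local viewers of the same segment cost the ISP one upstream fetch.
origin = lambda seg: b"video-bytes-for-" + seg.encode()
cache = EdgeCache(origin)
for _ in range(10):
    cache.get("oprah/segment-001")
print(cache.origin_fetches)  # -> 1
```

The point of the sketch is that the upstream feed into the cache is an 
ordinary unicast transfer today, but nothing in the design prevents swapping 
it for a multicast feed later.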

But note that, as PAF points out, once you have an infrastructure of caching 
application-layer distribution points, you can use multicast to distribute from 
the upstream feed.

One approach here would be for the multicast group to attempt to tell the cache 
infrastructure group that they are required to make multicast a precondition 
for deployment. Unless the cache group is able to refuse, this will inevitably 
kill the killer application. 

Now consider what happens if the cache distribution points are designed so that 
they function on the Internet as-is but can use multicast as an optimization. 
Suddenly the ISPs have an incentive to deploy multicast: it has a dollar 
return, and it is dollars that the ISPs care about saving, not packets.
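The dollar return is easy to see in a back-of-envelope count of upstream 
streams an ISP must carry for one live event. The function and numbers below 
are illustrative assumptions, not measurements.

```python
# Back-of-envelope: upstream streams an ISP carries for one live event,
# with no caches, with unicast-fed caches (deployable today), and with a
# single multicast feed to the caches (the optimization). Illustrative only.

def upstream_streams(viewers, caches=0, multicast_feed=False):
    if caches == 0:
        return viewers        # one unicast stream per viewer: swamped
    if multicast_feed:
        return 1              # one multicast feed reaches every cache
    return caches             # one unicast feed per cache

print(upstream_streams(100_000))                              # -> 100000
print(upstream_streams(100_000, caches=50))                   # -> 50
print(upstream_streams(100_000, caches=50, multicast_feed=True))  # -> 1
```

Note the ordering of the incentives: the caches alone cut upstream load by 
orders of magnitude with no infrastructure change, and multicast then buys a 
further saving, which is exactly the dollar return that motivates an ISP to 
deploy it.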


I make a much longer version of this argument in my book, The dotCrime 
Manifesto. The principal problem in securing the Internet is not the design of 
the cryptography. Crypto is important and useful, of course, but crypto alone 
is not enough. There has to be a business model for deployment and a business 
model for use. 

We now have a conference series on the economics of computer crime. Considering 
the economics of deployment is a critical part of deploying the solution. Ross 
Anderson has some resources on this topic.



-----Original Message-----
From: ietf-bounces(_at_)ietf(_dot_)org on behalf of Hallam-Baker, Phillip
Sent: Sat 08/03/2008 9:18 PM
To: Jeroen Massar; Patrik Fältström
Cc: IETF discussion list
Subject: RE: Was it foreseen that the Internet might handle 242 Gbps of 
traffic for Oprah's Book Club webinars?
 
The problem with multicast in this application is that it only works if all the 
clients are accepting the same data stream and viewing it live.

That's not how people tend to view Web video: perhaps 50% of the crowd watches 
as Oprah speaks, but the rest are likely to be time-shifted by anywhere from a 
few seconds to hours or even days.
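The limit this puts on multicast is simple arithmetic: a multicast stream can 
replace only the live fraction of the unicast streams, and the time-shifted 
remainder still needs unicast (or a cache). A hypothetical illustration:

```python
# If only a fraction of viewers watch live, multicast collapses the live
# streams into one but leaves every time-shifted viewer on unicast.
# Illustrative arithmetic only, not a measurement.

def unicast_streams_needed(viewers, live_fraction, multicast=False):
    live = int(viewers * live_fraction)
    shifted = viewers - live
    if multicast:
        return shifted + 1    # one multicast stream covers all live viewers
    return viewers

# 1,000,000 viewers, half watching live:
print(unicast_streams_needed(1_000_000, 0.5))                  # -> 1000000
print(unicast_streams_needed(1_000_000, 0.5, multicast=True))  # -> 500001
```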


An application-layer architecture with built-in caching would be more 
efficient: similar to the peer-to-peer schemes in use today, but with the 
caches installed by the local ISPs who are complaining about their bandwidth 
getting swamped. 

-----Original Message-----
From: ietf-bounces(_at_)ietf(_dot_)org 
[mailto:ietf-bounces(_at_)ietf(_dot_)org] On 
Behalf Of Jeroen Massar
Sent: Saturday, March 08, 2008 5:43 PM
To: Patrik Fältström
Cc: IETF discussion list
Subject: Re: Was it foreseen that the Internet might handle 
242 Gbps of traffic for Oprah's Book Club webinars?

Patrik Fältström wrote:
[..]
P.S. And if multicast is in use, or unicast or some othercast, that is 
from my point of view part of the "innovation" the ISPs have to do 
(and will do) to ensure that the production cost is as low as possible 
so that their margin is maximized.

I actually see a bit of a problem here: multicast would lower the usage of 
links, so ISPs can't charge as much as for a link saturated with unicast 
packets. Thus, to lower the load on the internal network one would use 
multicast, but the client would then still have to get unicast, so that 
the ISP is actually being paid for every listener...

Greets,
  Jeroen


_______________________________________________
IETF mailing list
IETF(_at_)ietf(_dot_)org
https://www.ietf.org/mailman/listinfo/ietf
