March COOK Report Summary ONLY /Diffserv interview and 10 Gig E standards/
2000-02-11 20:03:45
GIGABIT ETHERNET RIDES ECONOMY OF SCALE
AS IT ERASES LAN WAN BOUNDARY GIGABIT ETHERNET MAKES NETWORK LESS
COMPLEX, EASIER TO MANAGE -- 10 GIG STANDARDS WILL DEMAND CHOICES
AFFECTING ATM SONET WAN FUNCTIONALITY, pp. 1-10
We interviewed Dan Dov, Principal Engineer for LAN Physical Layers
with Hewlett-Packard's networks division, and Mark Thompson, product
marketing manager for HP's ProCurve Networking Business, on December
6. In Smart Letter 30 on 12/9/99 David Isenberg wrote the following
very good summary of why Gigabit Ethernet is hot. "Since there are
many more LANs than WANs, GigE, due to its Ethernet LAN heritage, has
huge economies of scale. (Every flavor of Ethernet that has hit the
marketplace has slid down a 30% per year price reduction curve.)
GigE's use in both LAN and WAN gives greater scale yet. Plus by
erasing the LAN/WAN boundary, GigE decreases the complexity of the
network, making it even stupider, easier to manage and easier to
innovate upon. So it looks like the Stupid Network will be built of
GigE over glass."
In the interview Dov takes us through the technological reasons for
Ethernet's increase in speed as its importance in LANs has grown and
LANs themselves have become larger and more bandwidth hungry.
Ethernet, in short, is leveraging its ubiquity, low cost and open
standards on the back of the growing importance of the Internet and
its increased bandwidth. In doing so it is playing a significant role
in making new industries like application service provision possible.
Dov concludes that "the reason that Ethernet succeeded as well as it
has is its simplicity. Ethernet is a very simple, yet elegant
protocol. But because of its simplicity, it's extremely inexpensive
to develop and to manufacture Ethernet-compliant devices." Many
people are taking gigabit Ethernet and applying it to wide area
networking because of its simplicity, its ease of access and the
straightforwardness of its framing.
In the relationship between volume and pricing, gigabit Ethernet
offers significant value. Gigabit Ethernet initially was being
installed in the local area network to provide interconnection
between boxes that were delivering 100 megabits, and 10 megabits, to
the desktop. The volume of that kind of traffic quickly becomes very
large. For these applications over short distances, gigabit Ethernet
is actually cheaper than OC-24, even though it provides more
bandwidth. What people started to realize was that, because of the
volume of gigabit Ethernet equipment being shipped and its relative
simplicity, the cost of gigabit Ethernet undercut that of OC-24
pretty quickly. As a result, the people who decide what will be used
to hook LANs to each other and to the Internet started choosing
gigabit Ethernet rather than OC-24 or OC-48. Gigabit Ethernet's
application is at the periphery of the Internet; therefore it is not
being looked to for the elimination of SONET add/drop multiplexers.
With ten gigabit Ethernet some people are proposing to basically take
the ten gigabit Ethernet media access controller, the MAC, and
packetize the data, just like we currently do in Ethernet at ten
times the rate. But they then want to send it into a SONET framer.
The SONET framer will then take that data and chop it up and put it
into the SONET frame. The framer will send it across the network and
when it gets received on the other side, it will be effectively
deframed. There are also people who are more focused on keeping the
current, simple Ethernet approach, which is just: take the data, put
it onto an optical fiber and ship it across the link. They don't
want to get into the complexity of SONET framing and so on.
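The WAN-PHY proposal described above has to reconcile a rate mismatch that the interview does not spell out: a 10 gigabit Ethernet MAC produces data faster than a SONET OC-192 payload can carry it. The back-of-the-envelope sketch below is our illustration, not anything from the interview, and uses only nominal SONET/STS frame arithmetic (the 10 gigabit standard itself was still unfinished at this writing).

```python
# Back-of-the-envelope arithmetic for the rate mismatch a SONET-framed
# 10 Gigabit Ethernet ("WAN PHY") must reconcile.  All figures are
# nominal SONET/STS numbers, not from the eventual 802.3 standard.

FRAMES_PER_SEC = 8000          # SONET frame rate: one frame every 125 us
BYTES_PER_COLUMN = 9           # each column carries 9 bytes per frame

def sts_nc_rates(n):
    """Return (line_rate, payload_rate) in Mb/s for a concatenated STS-Nc."""
    total_cols = n * 90                        # 90 columns per STS-1
    line_rate = total_cols * BYTES_PER_COLUMN * FRAMES_PER_SEC * 8 / 1e6
    transport_overhead = n * 3                 # first 3 columns of each STS-1
    path_overhead = 1                          # one POH column for an STS-Nc
    fixed_stuff = n // 3 - 1                   # fixed-stuff columns in the SPE
    payload_cols = total_cols - transport_overhead - path_overhead - fixed_stuff
    payload_rate = payload_cols * BYTES_PER_COLUMN * FRAMES_PER_SEC * 8 / 1e6
    return line_rate, payload_rate

line, payload = sts_nc_rates(192)              # OC-192 / STS-192c
print(f"OC-192 line rate:  {line:8.2f} Mb/s")      # 9953.28
print(f"STS-192c payload:  {payload:8.2f} Mb/s")   # 9584.64
print(f"MAC must slow to {payload / 10000.0:.1%} of a full 10 Gb/s")
```

In other words, stuffing Ethernet frames into SONET means the MAC can use only about 96% of its nominal rate, which is one of the costs the "just put it on the fiber" camp wants to avoid.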
HP's Thompson offered the following analogy: "It's sort of like the
subway system versus the intercity train system. Historically, if you
wanted to ride the train from the center of one city to the center of
another, you rode the subway out to the train station, took the train,
and then rode the subway back into the city. So what we're talking
about now is making Ethernet robust enough and fast enough so that
your subway car can simply ride from one city to the next, without
changing the vehicles that are riding on the tracks, the fiber, in the
meantime." In other words, a simple design that works for the people
who are working in local area networks would also serve people who
want to do optical transmission with Ethernet framing cross-country.
The interview concludes with a discussion of the issues being faced in
the development of 10 gigabit Ethernet standards.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EXPLOSION IN CAPACITY CHASED BY EXPLOSION IN USE
FIBER TO THE HOME FROM HP ORACLE AND POWER COMPANIES FOR LESS THAN
$15 A MONTH -- ABOVENET ON THE NEED TO OWN FIBER
pp. 10, 27
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ROLE OF DIFFSERV IN THE DEVELOPMENT OF QOS TOOLS
KATHY NICHOLS EXPLAINS HOW PURSUIT OF VIABLE QOS PROTOCOLS IS
TRANSITIONING FROM CENTRALIZED MODEL TO HORIZONTALLY ORGANIZED TOOL
CHEST FROM WHICH ISPS CAN DESIGN CROSS ISP COMPATIBLE SERVICES,
pp. 11-19, 27
On November 16, we interviewed Kathy Nichols who with Brian Carpenter
is co-chair of the very active Diffserv working group. We asked
Kathy to put Diffserv in its historical context. She replied that
originally people assumed that Quality of Service guarantees would be
needed to do multimedia over the Internet. Integrated Services and
RSVP came out of these assumptions. But RSVP had been designed by
Lixia Zhang and others while Lixia was at Xerox PARC. The design was
made with the assumption that you could put RSVP state into every
router because you would always keep your application inside the
network of a single provider. After several years of experimentation
the emerging view is that RSVP should be seen as a generic signaling
protocol or a way for a host to talk to a network. Other protocols
would govern ways that hosts request things of a network to which
they are talking. One should note that the original work with RSVP
and Intserv was done before April 1995, when the NSFNet backbone was
shut off, and when the topology and traffic of the Internet to which
people were thinking about applying quality of service were radically
different than they are now (almost exactly five years later).
By the beginning of 1997 some ISPs were beginning to talk of QoS in
terms of being able to give some of the traffic that they carried
better treatment than other traffic, a kind of "better best effort."
According to Kathy "the Differentiated Services discussion happened
because some service providers were not happy with the Intserv
approach. They weren't going to let state cross their boundary. They
didn't see how something with that much state could work. And it also
didn't seem to do exactly what they wanted, which included being able
to tell a customer that they could replace their leased line and give
them at least equivalent service. And it would be a good deal,
because they should be able to offer it cheaper and reuse their
infrastructure."
Traffic for a premium class of service could be segregated into
special queues for that traffic alone. Traffic for best effort and
better best effort could remain in the same queue. In most network
conditions the packets would be treated the same, while in exceptional
conditions the mere best effort packets might find themselves
discriminated against. Some of the very best engineers and protocol
designers in the Internet were coming up with schemes for how to do
traffic marking and shaping to accomplish these goals. (The idea that
the same queue can be used to provide two different levels of service
is the idea behind weighted RED.) Unfortunately their schemes, call
them tools perhaps, were too often incompatible with each other.
People were designing complex tools to handle vast amounts of
traffic in complex and rapidly changing situations. Diffserv was
started as a way to bring order out of a very complex chaos. People
wanted to structure a framework within which people could design
their tools and to create a situation where, if they designed them to
be compatible with the framework, they would be compatible and
interoperable with each other. Diffserv may be thought of as a set
of guidelines within which various quality of service tools may be
implemented.
Kathy states that the only way to scale QoS is to aggregate packets.
If we group them inside of a "cloud" or domain, we will put them into
something called a "behavior aggregate." You create a behavior
aggregate by saying that you will assign each packet that is to be a
member of that aggregate a particular per-hop behavior (PHB). A PHB
permits assigning the same forwarding treatment to all network
traffic that is labeled with it. You may then consider telling
customers that they will pay a certain rate for traffic sent in
conformance with the PHB aggregate they have purchased, and let them
know that your routers will drop traffic labeled as conformant with a
given PHB that the router finds in reality to be nonconformant. One
goal is to get the maximum amount of classification of what may be
done with traffic out of a field that is no more than 6 bits per
packet. What Diffserv is really doing for ISPs and for hardware
vendors is helping them to work together to establish reasonable
guidelines within which many different quality of service provisions
can be created. The idea is that the ISP is allowed to establish its
own QoS offerings. Diffserv has created behavior aggregates and
control planes that can be used to implement the policy goals of the
behavior aggregate. Two ISPs may be able to solve cross-ISP policy
issues by sitting down with each other and selecting
Diffserv-compatible tools that would not have to be the exact same
tool. It is Diffserv's intention to give them tools by which they
can achieve common QoS outcomes by means that may be quite different
inside their respective networks.
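The 6-bit field Kathy refers to is the Diffserv codepoint, which occupies the six high-order bits of the old IPv4 Type-of-Service byte (RFC 2474 renames it the DS field). A minimal sketch of marking and reading that field follows; the codepoint value 46 used below is the well-known "expedited forwarding" example, and the helper names are ours.

```python
# Sketch of how the 6-bit Diffserv codepoint (DSCP) is carried: it is the
# six high-order bits of the IPv4 TOS byte (the DS field of RFC 2474).
# The low two bits are left alone here.

def set_dscp(ds_byte, dscp):
    """Write a 6-bit codepoint into the high bits of the DS byte."""
    assert 0 <= dscp < 64, "a DSCP is at most 6 bits"
    return (dscp << 2) | (ds_byte & 0x03)    # preserve the low 2 bits

def get_dscp(ds_byte):
    """Read the 6-bit codepoint back out of the DS byte."""
    return ds_byte >> 2

EF = 46                                      # expedited-forwarding codepoint
marked = set_dscp(0x00, EF)
print(hex(marked))                           # 0xb8
print(get_dscp(marked))                      # 46
```

Six bits allow at most 64 distinct codepoints, which is why the working group's goal, as Kathy puts it, is to extract the maximum amount of classification from so small a field.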
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ALL RESPONSIBILITY DISINTERMEDIATED FROM DNS FIX
NEW ICANN DOC SHARED REGISTRY SYSTEM ENABLES REGISTRARS, SHARED
REGISTRY AND ICANN TO DISCLAIM RESPONSIBILITY FOR ALL ACTIONS THAT
INJURE REGISTRANTS pp. 20 - 26
In mid January Wired
http://www.wired.com/news/technology/0,1282,33753,00.html published a
delightful summary of the results of Beckwith Burr's, ICANN's, and
NSI's redesign of the DNS system. People were buying a domain name
and paying for it at the time of purchase only to see it sold out
from underneath them the very next day to someone else. For the
little guy the Internet's domain name system had been put at risk by
the Clinton Gore bureaucrats. No matter: the large, powerful and rich
had the ICANN uniform dispute resolution policy and the even more
Draconian cyber squatting legislation. ICANN had done a superb job of
freeing the corporate trademark attorneys to do their thing. It had
done this by creating a jury-rigged system where registrars could say
that mistakes belonged to the registry which in turn could say it was
playing by ICANN rules while ICANN disclaimed all responsibility for
breakages in the system.
According to Wired, "ICANN said it was not responsible for domain
name discrepancies between registrars and their customers."
The COOK Report reminds its readers that to be functional a domain
name must be part of the registry database that determines what other
names are taken and is responsible for getting the names into the
root servers where down line DNS servers can find them. The
operation of the new system has been rigged by ICANN so that, while
the registry gets names to advertise, it gets no information about
the owners of the names in whose interest it is doing the
advertisement. This information is known to the Registrars whose
agreements with ICANN give them enforceable rights vis-à-vis the
Registry. But the customers who pay a registrar to act as the
intermediary between them and the registry have no enforceable rights
whatsoever to the use of the domain names for which they pay.
We do not know who designed and put in place this truly bizarre
system. It was ICANN but the secret process by which it was done
inside of ICANN has remained opaque to everyone on the outside. As
far as we can tell, ICANN rules by having its Jones Day attorneys,
Touton and Sims, work with Esther Dyson and Mike Roberts to establish
policy that disenfranchises every Internet user (who does not also
pay the necessary fees to become a registrar) of any rights to
receive the benefits of the products for which they have paid. The
registrar is free to do anything it chooses with the domain name that
it sells to the registrant. The system is also dependent for its
operation on a shared registry protocol that has been (according to
the testimony of some outside experts who advised NSI on its design)
implemented in such a way as to make any accountability to the
registrants and even to the registrars unlikely. NSI has sought what
non-experts will take as an endorsement from the IETF by asking for
publication of the protocol as an informational RFC. One of the
experts who advised NSI on the design has protested loudly against
the move and asked NSI to free him from his non-disclosure agreement
so that he may publish his criticism to allow independent observers
to make their own judgements. NSI has refused.
By the end of the month it was clear that the entire shared registry
system was a design failure. As early as late December complaints of
breakdowns were becoming evident. On December 23 on the Domain
Policy list at NSI, list member "A" complained: "Most whois clients
query the public NSI Registry database first, which only updates *once
per day*, so it's quite possible for someone to do a domain query and
be shown the old whois information of the old registrar. Nothing is
wrong."
To which list member "B" replied: "No, nothing is wrong as far as the
design goes. But of course that [just looking at the design] is not
far enough, is it? Therefore leaving the ability for registrars to
'steal' domain names and/or create a domain name conflict from the
get go. Doesn't say much for stability, does it?" Our article
summarizes debate from the IETF and Domain Policy lists that makes
quite clear the absurdity that the White House and its vice president
is visiting upon the Internet.
Froomkin & Auerbach Offer ICANN Bitter Criticism, pp. 27, 29, 30
Two who have tried to work with ICANN cry foul in no uncertain and
bitter terms over the move by the DNSO to censor the DNSO GA mail
list. ICANN makes it clear it will tolerate no criticism.
****************************************************************
The COOK Report on Internet        Index to 8 years of the COOK Report
431 Greenway Ave, Ewing, NJ 08618 USA      http://cookreport.com
(609) 882-2572 (phone & fax)       cook(_at_)cookreport(_dot_)com
Battle for Cyberspace: How Crucial Technical . . . - 392 pages
just published. See http://cookreport.com/ipbattle.shtml
****************************************************************
-- Gordon Cook