On Tuesday, January 14, 2014 05:50:45 PM Curtis Villamizar
wrote:
None of the equipment I have worked on (a while back) or
the chips we evaluated recently had such a problem for
IPv6 using only the top 64 bits for forwarding. An
argument was made in the early 2000s (by me and others)
that if we allocated global routes only from the top 64
bits, all equipment at that time could forward at
essentially the same rate as for IPv4. The reaction
from the IPv6 community was to take the bottom half of
the address off the routing table entirely and make it
the host part.
Agree.
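To make the top-64-bit argument concrete, here is a toy sketch (all prefixes and next-hop names are made up, and real hardware does this in silicon, not Python): if global routes are allocated only out of the top 64 bits, the lookup can key on a fixed-width 64-bit value, comparable in cost to an IPv4 lookup, and never touch the host half of the address.

```python
import ipaddress

# Toy route table: (top 64 bits masked to prefix length, prefix length) -> next hop.
# Entries are illustrative only.
routes = {
    (int(ipaddress.ip_network("2001:db8::/32").network_address) >> 64, 32): "core-link-1",
    (int(ipaddress.ip_network("2001:db8:aaaa::/48").network_address) >> 64, 48): "core-link-2",
}

def lookup_top64(dst):
    """Longest-prefix match using only the top 64 bits of the destination."""
    top = int(ipaddress.ip_address(dst)) >> 64   # bottom 64 bits never consulted
    for plen in range(64, 0, -1):                # longest match first
        key = ((top >> (64 - plen)) << (64 - plen), plen)
        if key in routes:
            return routes[key]
    return None

print(lookup_top64("2001:db8:aaaa::1"))  # longest match is the /48: core-link-2
```

The point of the sketch is only that the match width is fixed at 64 bits, so the pipeline cost is bounded the same way an IPv4 /32 lookup is.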
There are chips for which looking at a 32 bit prefix or a
64 bit prefix is done within the same pipeline and
therefore takes the same amount of time. Other chips
which parallelize packet handling budget enough
microengines to get the job done (only a part of which
is the MPLS ILM or IP destination lookup).
Agree.
Those line cards (and designs I was recently involved in)
assume what has historically been called a BGP-free
core. Only the IGP routes are carried in the core. BGP
is mapped onto the IGP routes. MPLS carries traffic
across the core. The core has no BGP routes and
therefore needs a lot fewer entries and the ILM can be
put into SRAM rather than use a TCAM and only a small
set of IP routes needs to be supported.
Yes, but a "proper" BGP-free core is only achievable for
IPv4. If you are running a native IPv6 backbone, you can't
have a BGP-free core for IPv6 (I'll just call it a
BGPv6-free core, for typing brevity).
The only way you can have a BGPv6-free core is to do 6PE,
and like I mentioned before, I don't like tunneled IPv6 (but
lots of other providers do it, good for them).
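To spell out the mapping behind a BGP-free core (a toy model, not any vendor's implementation; all addresses and labels below are invented): edge routers resolve each BGP route to its BGP next hop, the next hop resolves to an IGP route that carries an MPLS label, and core routers consult only the small label table (ILM) plus IGP routes, never a BGP table.

```python
igp_labels = {           # IGP loopback -> MPLS label toward that loopback
    "10.0.0.1": 100,     # PE1
    "10.0.0.2": 200,     # PE2
}

bgp_routes = {           # BGP prefix -> BGP next hop (a PE loopback)
    "203.0.113.0/24": "10.0.0.2",
    "198.51.100.0/24": "10.0.0.1",
}

def edge_forward(prefix):
    """At the ingress edge: resolve the BGP next hop to a label and push it."""
    nh = bgp_routes[prefix]
    return ("push", igp_labels[nh], nh)

def core_forward(label, ilm):
    """In the core: a pure label lookup -- no BGP table is ever consulted."""
    return ilm[label]

ilm = {100: ("swap", 101), 200: ("swap", 201)}   # made-up label operations

print(edge_forward("203.0.113.0/24"))  # ('push', 200, '10.0.0.2')
print(core_forward(200, ilm))          # ('swap', 201)
```

The ILM and IGP table stay small enough for on-chip SRAM precisely because the (large) BGP table lives only at the edges, which is the density argument being made above; my complaint is that for native IPv6 there is no equivalent label to push without 6PE.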
That is done to eliminate the need for large external DRAM
or TCAM, which (along with the SERDES needed to go
off-chip) produces heat and requires power, board space,
and cooling, and therefore reduces density. Putting more
simple filters in the core also helps increase density.
It is more a power-to-operate and floor-space-per-Gb/s
issue.
All good and well, but where do my native BGPv6 routes go?
For the most part IP/MPLS transport equipment seems to be
going this way, though some people are stuck on MPLS-TP
and trying to build a L2 overlay.
I think MPLS-TP is a way for the incumbents to migrate to
Ethernet and still keep their ITU point-to-point-everything
way of life that they know and love, but let's not digress
:-)...
C has had the LSP forwarding engine for about three years
now. I've never once considered buying it (and I always
support new tech.), and most of the friends I'm close to
haven't either. I have no doubt that some operators
have bought it, but I have no empirical data on how
quickly it's flying off the shelves.
J's PTX and C's new NCS will continue this trend, but
without implementing IPv6 control planes for MPLS (and the
requisite data plane support), those of us that run native
IPv6 will never have a BGPv6-free core.
So I can only justify line cards and forwarding engines
that don't skimp on FIB; that way I can sleep at night
knowing my IPv6 network won't fall over.
If you go back to before 2000 at least one provider built
a BGP-free core over Frame Relay, but FR itself soon
died and the overlay didn't scale well. Some providers
wanted to do this over MPLS but the RSVP-TE restoration
times were too long and the fallback to IP routing was
still needed to make up for that.
Was before my time, but I can appreciate that :-).
Unless you run a BGP-free core, whether IPv4 or IPv6.
Then you have plenty of FIB space.
Again, without 6PE, no-can-do for BGPv6.
You put forwarding at full rate and cheaper line cards in
the same paragraph with no mention of smaller FIB size
due to eliminating the BGP routes.
Because without FIBs, where will the native BGPv6 routes
go?
OK ... so this is all interesting but I've lost the
connection between this discussion and
draft-ietf-mpls-in-udp. Hence the OT in the subject.
Would you please remind me what that connection was?
I agree, this is quickly getting off topic, but the source
was:
*****
On Sunday, January 12, 2014 04:59:41 AM
l(_dot_)wood(_at_)surrey(_dot_)ac(_dot_)uk
wrote:
The MPLS assumption is that it's protected and checked by
a strong link CRC like Ethernet, and checked/regenerated
by stack processing between hops; here, in a path
context, with zero UDP checksums MPLS has no checking at
all.
Right, which is probably why routers today can count badly
checksummed Ethernet frames, but have no equivalent
counter for MPLS.
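For context on what a zero UDP checksum forgoes, here is a minimal sketch of the 16-bit ones'-complement Internet checksum (RFC 1071) that a non-zero UDP checksum would carry; the sample bytes follow the worked example in that RFC.

```python
# Minimal sketch of the 16-bit ones'-complement Internet checksum (RFC 1071).
# With a zero UDP checksum on MPLS-in-UDP, this check is simply skipped, so a
# corrupted inner MPLS payload can be forwarded silently.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF                        # ones' complement of the sum

payload = b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7"     # worked example from RFC 1071
csum = internet_checksum(payload)
print(hex(csum))  # 0x220d

# Receiver-side property: checksumming data plus its own checksum yields 0.
assert internet_checksum(payload + csum.to_bytes(2, "big")) == 0
```

A strong link CRC catches hop-by-hop corruption, but only an end-to-end check like this catches corruption introduced inside a router, which is exactly the gap being discussed.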
I'm sorry, when was MPLS cheap?
Current-generation ASICs have no problem forwarding MPLS
frames at wire rate. One could go so far as to say that MPLS
has allowed vendors to make cheaper line cards, also because
IP FIBs and traffic queues can be scaled down dramatically
(not that I'd ever buy such line cards, but...).
Mark.
*****
Cheers,
Mark.