
RE: [narten@us.ibm.com: PI addressing in IPv6 advances in ARIN]

2006-04-21 11:21:51
Brian E Carpenter wrote:
...
The problem with your challenge is the lack of a defined topology. The
reality is that there is no consistency for topology considerations, so the
ability to construct a routing model is limited at best.

Actually my challenge asked for an assumed geographical distribution
and an assumed set of ISPs and interconnects, I believe. Obviously
one needs to know how robust the result is under reasonable variations
of the assumed topology.


I may have missed part of the thread, but the message I was reacting to just
said:
You'll have to produce the BGP4 table for a pretty compelling simulation
model of a worldwide Internet with a hundred million enterprise customers
and ten billion total hosts to convince me. I'm serious.

Let me throw out an example:
With existing IGPs the number of nodes has no direct impact on the routing
system, but I would interpret the 10B-host requirement to mean sites with at
least 10k subnets, and more likely 50k subnets per site. This would imply we
are working with a site prefix of /48. Given that today we think we can build
routers capable of 1M entries, it is not much of a stretch to get to 5M by the
time there would be wide-scale deployment of PI. Take this completely
arbitrary list as an example of regional exchange centers:
5 Million Moscow
5 Million Istanbul
5 Million London
5 Million Paris
5 Million Newark
5 Million Atlanta
5 Million Seattle
5 Million San Jose
5 Million Mexico City
5 Million Bogota
5 Million Sao Paulo
5 Million Tokyo
5 Million Beijing
5 Million Shanghai
5 Million Hong Kong
5 Million Singapore
5 Million Sydney
5 Million Bangkok
5 Million Bangalore
2.5 Million Cape Town
2.5 Million Dakar
I am too detached from current cable-heads to know the right list, but the
example list could aggregate your 100M requirement. If you want to fit in
current technology, find an appropriate list of 100 cities at 1M sites each.
In the example case assume 10 inter-regional transit providers, each with 3
diverse paths to each of the cities in a full mesh; their routers would be
parsing through local needs plus ~70 entries. In the 100-city case it would be
local plus ~310. Either way the 'system' would handle 100M sites in chunks that
are manageable by individual routers. Realistically there would probably be
another 20-50k entries for organizations with enough money to influence some
of the providers and gain a widely-distributed routing slot, but if
individual providers want to distribute more detailed knowledge to optimize
their traffic/circuit mappings, it is their trade-off to make. The rest of
the world doesn't need to know the mess they have made for themselves. The
point is that BGP does not require any router to have full information. That
requirement comes from egos about who is paying for transit. Making this
example work requires redefining the operational roles of exchange points
and transit providers, as well as a workable settlements standard.
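
To put rough numbers behind that sketch, here is a purely illustrative Python
back-of-envelope check. It assumes a /48 per site with /64 subnets and counts
one table entry per region per diverse path, which is my reading of the
example above, not a measured result:

# Back-of-envelope check of the regional-aggregation example above.
# Inputs are the assumptions stated in the text, not measured data.
SITE_PREFIX = 48              # one /48 per site
SUBNET_PREFIX = 64            # conventional /64 subnets

subnets_per_site = 2 ** (SUBNET_PREFIX - SITE_PREFIX)
print(f"subnets available per /48 site: {subnets_per_site:,}")   # 65,536

def transit_entries(num_regions, diverse_paths, local_sites):
    """Entries a transit router carries: one aggregate per region per
    diverse path, plus the local per-site detail it holds anyway."""
    return num_regions * diverse_paths, local_sites

# 21-region case from the list above (19 x 5M + 2 x 2.5M = 100M sites).
inter, local = transit_entries(num_regions=21, diverse_paths=3,
                               local_sites=5_000_000)
print(f"21 regions: {local:,} local plus ~{inter} inter-regional aggregates")

# 100-city case at 1M sites each, fitting today's ~1M-entry routers.
inter, local = transit_entries(num_regions=100, diverse_paths=3,
                               local_sites=1_000_000)
print(f"100 regions: {local:,} local plus ~{inter} inter-regional aggregates")

The computed 63 and 300 land in the neighborhood of the ~70 and ~310 figures
above; the difference would presumably be covered by peering and
provider-interconnect entries.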

The issue on the table is that we have RIRs trying to create policy to
restrict access to PI space to only the 20-50k deep-pockets, with no solid
metric to set the bar, when the reality is that everyone except the carrier
looking for a lock-in benefits from PI. The challenge is aggregating out those
who only need local independence, to foster serious competition.

...
As far as I am concerned BGP is not the limitation. The problem is the
ego-driven myth of a single DFZ where all of the gory details have to be
exposed globally. If we abolish that myth and look at the problem, we are
left with an answer where BGP passing regional aggregates is sufficient.

I'm sorry, I don't think I've ever seen a convincing argument for how such
aggregates could come to pass in the real world, where inter-regional
bandwidth is partitioned at the link level or dark-fibre level. There just
isn't any forcing function by which mathematically close prefixes will become
topologically close, because there's nothing that forces multiple providers
to share long-distance routes.

There are only two forcing functions for any carrier action: regulation and
pain/cost mitigation (well, three if you count greed as distinct from cost,
it being the income side of the same equation). Why don't carriers carry full
real-time explicit routes for all existing hosts or even subnets today (there
are periodic attempts to push a global spanning-tree)? They have chosen an
existing pain/cost-mitigation technical approach known as BGP, which
aggregates out irrelevant details for those outside the local delivery
network. Why are we having this discussion? Because carriers have been
allowed to run amok and deploy random topology, trading low fiber cost
against any concern about the impact on their routing system (note the worst
offenders for routing-table bloat are pushing deaggregates to optimize
traffic to their random topology).

Regulators -could- put a stop to that, but I would encourage the carriers to
take voluntary action to mitigate the pain. Unfortunately that path requires
setting aside egos and recognizing that random circuits may reduce the cost
of paying a transit provider, but they significantly increase another cost.
Compounding the situation is the IETF's engineering mindset that likes the
challenge of finding the complex technical solution, when all that may be
needed is a policy shift that would reduce the topological complexity. There
is a middle ground here where, just as local networks announce a prefix
externally, regional networks handle the local details and acquire transit
to get to other regions. The open question is how to define the regions. The
question about which players get which roles is not an IETF concern and will
sort itself out when a cost-reducing technology emerges.
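
To make the 'regional aggregate' idea concrete, here is a minimal sketch
using Python's ipaddress module. The regional /44 and the site /48s are
invented documentation-space prefixes, purely for illustration: contiguous
/48 assignments inside a region collapse into a single external announcement,
so the rest of the world never sees the per-site detail.

import ipaddress

# Hypothetical regional block; documentation space used only for illustration.
region = ipaddress.IPv6Network("2001:db8::/44")

# Sixteen contiguous /48 site assignments carved out of the regional block.
sites = list(region.subnets(new_prefix=48))
print(f"intra-regional routes carried locally: {len(sites)}")        # 16

# Externally, the whole region collapses to one aggregate announcement.
aggregate = list(ipaddress.collapse_addresses(sites))
print(f"inter-regional announcement: {aggregate}")  # [IPv6Network('2001:db8::/44')]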


Yes there will be exception routes that individual ISPs carry, but that is
their choice, not a protocol requirement. Complaining that regional
aggregates are sub-optimal is crying wolf when they know they will eventually
lose to the money-holding customer demanding non-PA space. The outcries about
doom and gloom with PI are really about random assignments, which would be
even less optimal.

The fundamental question needs to be if there is an approach to address
allocation that can be made to scale under -any- known business model, not
just the one in current practice. It is not the IETF's job to define business
models, rather to define the technology approaches that might be used and see
if the market picks up on them. Unfortunately over the last few years the
process has evolved to excluding discussions that don't fit in the current
business models, despite the continuing arguments about how those models are
financial failures and need to change. The point that Scott was making is
that there are proposals for non-random assignments which could be carried as
explicits now and aggregated later.

I understand his point very well and I'm even in favour of it, because
statistically it can only help aggregation and certainly not damage it.
But I think that without some radically new principle it will at best
very slightly reduce randomness.


The principle you are looking for is standardized settlements. We are not in
that business other than to develop any technology that might be needed. In
the meantime, if we do not do the work to define structured PI assignments,
we will guarantee that the routing system will have to deal with the
randomness that will happen as enterprises force the PI issue. Leaving this
to the RIRs will only guarantee that we have different assignment approaches
in different parts of the world. 
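
A small illustration of why structured assignment matters for later
aggregation (invented prefixes, not a proposal for any particular allocation
scheme): the same number of /48s packs into one covering route when handed
out contiguously within a regional block, but stays as hundreds of routes
when scattered at random across the pool.

import ipaddress
import random

pool = ipaddress.IPv6Network("2001:db8::/32")      # illustrative pool only
slots = list(pool.subnets(new_prefix=48))          # 65,536 possible /48s

def aggregates(prefixes):
    """Number of announcements left after maximal aggregation."""
    return len(list(ipaddress.collapse_addresses(prefixes)))

structured = slots[:256]                 # 256 consecutive /48s in one region
random.seed(0)
scattered = random.sample(slots, 256)    # 256 /48s assigned at random

print(f"structured assignments aggregate to {aggregates(structured)} route(s)")
print(f"random assignments still need ~{aggregates(scattered)} routes")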

What we lack is a forum to
evaluate the trade-offs.

So is your favorable perspective shared by the current IESG? In other words,
if a BOF were proposed on the topic, would it be turned down as out of scope
and in conflict with the currently stated solution in shim6?

Tony



