Brian E Carpenter wrote:
With some reluctance, I haven't changed your cc list. But
my conclusion is that this particular discussion belongs
on the RRG list as much as anywhere.
On 2008-12-02 09:52, Keith Moore wrote:
(Because at present the "we need NATs for routing" argument looks, to my
intuition, a bit like handwaving.)
I don't think that is quite the argument. It is more like this:
1. We know of no alternative to a longest-match based approach to
routing lookup for the inter-AS routing system (commonly known
as the DFZ).
2. To control the long-term scaling of that approach, we need to
control the long-term size of the lookup table and the long-term
rate of dynamic updates to that table.
3. We assume there will be continued unbounded growth in the
number of sites requiring multihoming, but today each site that
requires multihoming thereby requires its own entry in the DFZ
lookup table. In the long term, this is unsustainable.
[Digression about numbers and dates omitted.]
4. The known solutions to this all require some mechanism for
aggregating site prefixes into ISP prefixes. [There's space for
a digression about the related advantages of separating
the transport ID from the network address, and about the
application layer's view of all this, but that doesn't
affect what I just said.]
5. One solution to that is a multi-prefix model (a site runs
multiple prefixes), which is of course the IPv6 design assumption.
Another solution is a map-and-encap model. A third solution is
a map-and-translate model. There are many variants of all these
models, but I don't know of a fourth one. All solutions have
advantages and disadvantages.
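To make point 1 concrete, here is a minimal, hypothetical Python model of a longest-match lookup over a DFZ-style table. The prefixes and next-hop names are invented for illustration; real routers use compressed tries or TCAMs rather than a linear scan, but the selection rule is the same: the most-specific matching prefix wins, which is why every multihomed site prefix that cannot be aggregated costs a table entry.

```python
import ipaddress

# Toy routing table: prefix -> next hop. A real DFZ table holds
# hundreds of thousands of entries, one per unaggregated prefix.
routing_table = {
    ipaddress.ip_network("2001:db8::/32"): "isp-A",
    # A multihomed site announced as a more-specific of isp-A's block:
    ipaddress.ip_network("2001:db8:1000::/36"): "isp-B",
    ipaddress.ip_network("::/0"): "default",
}

def longest_match(dest):
    """Return the next hop for the most-specific prefix containing dest."""
    dest = ipaddress.ip_address(dest)
    best = None
    for prefix, next_hop in routing_table.items():
        if dest in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, next_hop)
    return best[1] if best else None

print(longest_match("2001:db8:1234::1"))  # more-specific wins: isp-B
print(longest_match("2001:db8:8000::1"))  # covering /32 only: isp-A
```

The linear scan is only for readability; the scaling concern in points 2 and 3 is about the number of entries and their churn, not the lookup algorithm itself.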
Nice summary. I mostly agree with all of the above.
IMHO the reason we're hearing about NAT66 is because people
still find the IPv6 multi-prefix model unfamiliar and there is
no consensus-based map-and-encap solution yet. So it's natural
to look at map-and-translate, and NAT66 is one of the solutions
in that category.
I don't think it's just that the multi-prefix model is unfamiliar.
There's plenty of reason to believe that it won't work well: static
address selection rules, no way for hosts to know which prefixes will
work better, and the inability of most existing transport protocols to
fail over to alternate addresses. I won't claim that the problems can't
be fixed, but they aren't solved problems yet - so this looks like
moving a problem rather than fixing it. It also blurs the division
between the network's responsibility and the host's responsibility,
which doesn't seem like a good thing.
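A sketch of the failover limitation mentioned above: a multi-prefix host can try alternate destination addresses at connection setup (the addresses and the injected `try_connect` callable here are hypothetical stand-ins), but once a TCP connection is established it is bound to one address pair and cannot move mid-stream, which is exactly what the multi-prefix model would need for transparent multihoming.

```python
# Hypothetical sketch: connect-time fallback across a host's candidate
# addresses. This helps only at setup; an established TCP connection
# stays bound to the pair that succeeded.

def connect_with_fallback(candidates, try_connect):
    """Try each candidate address in order; return (address, connection)
    for the first that succeeds. try_connect is injected so the sketch
    stays self-contained instead of opening real sockets."""
    errors = []
    for addr in candidates:
        try:
            return addr, try_connect(addr)
        except OSError as exc:
            errors.append((addr, exc))
    raise OSError(f"all candidates failed: {errors}")

def fake_connect(addr):
    """Stand-in for a real connect(): the first prefix is unreachable."""
    if addr.startswith("2001:db8:a"):
        raise OSError("unreachable")
    return "connection"

addr, conn = connect_with_fallback(
    ["2001:db8:a::1", "2001:db8:b::1"], fake_connect)
# falls back to the second candidate
```

Happy-eyeballs-style racing refines the same idea, but neither helps a connection that loses its prefix after setup.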
Regarding map-and-translate: To me it seems important to make a clear
distinction between the case where a network at the edge uses a NAT to
translate between "private" addresses (e.g. ULAs) and PA addresses
(which is what I understand NAT66 to be); and the case where address
translation is done by routers at the edge of the network core
in order to efficiently transmit packets through the network, and the
inverse translation is done when the packet exits the core so the packet
as received has the same source and destination addresses as the one
sent. Even if the hardware required to do the two kinds of translation
is pretty similar, the impact on applications is very different. I
don't think the latter would have much effect on apps at all, though it
would affect diagnostic tools.
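The second case above depends on the translation being exactly invertible. A minimal sketch of such a stateless, algorithmic prefix rewrite (similar in spirit to NPTv6's prefix overwrite, though this toy version ignores checksum neutrality; the prefixes are invented for illustration):

```python
import ipaddress

# Assumed prefixes: the site's PA prefix and a core-internal prefix.
SITE_PREFIX = ipaddress.ip_network("2001:db8:aaaa::/48")
CORE_PREFIX = ipaddress.ip_network("2001:db8:cccc::/48")

def swap_prefix(addr, old, new):
    """Rewrite addr's leading /48 from old to new, keeping the host bits.
    Applying the inverse mapping at the core's exit restores the original
    address, so endpoints never observe the core-internal form."""
    addr = ipaddress.ip_address(addr)
    host_bits = int(addr) & ((1 << (128 - old.prefixlen)) - 1)
    return ipaddress.ip_address(int(new.network_address) | host_bits)

inner = swap_prefix("2001:db8:aaaa::42", SITE_PREFIX, CORE_PREFIX)
restored = swap_prefix(inner, CORE_PREFIX, SITE_PREFIX)
# restored equals the original address
```

Because the mapping is stateless and self-inverse, the packet leaving the core carries the same source and destination addresses it entered with; the NAT66 case has no such inverse step, which is why its impact on applications differs.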
It would be really interesting to try to evaluate each of multi-prefix,
map-and-encap, and map-and-translate according to (a) how much each
approach affects applications and hosts in general, and (b) how
deployable each approach is - where deployability has a lot to do with
having the parties that benefit most (probably those that want to
multihome) being the ones that bear the additional cost, and having the
benefit be worth the cost.