On Sat, Feb 11, 2012 at 12:31:22PM +0100, Roger Jørgensen wrote:
On Sat, Feb 11, 2012 at 9:32 AM, Måns Nilsson
<mansaxel@besserwisser.org> wrote:
On Fri, Feb 10, 2012 at 11:44:42PM -0500, Noel Chiappa wrote:
This is only about allocating a chunk of address space.
For which there is better use than prolonging bad technical solutions.
Address translation has set the state of consumer computing back severely.
It might be all nice and proper according to those who desire to keep the
power of owning a TV transmitter, a printing press or a transaction broker
service.
Do keep in mind that the real driver of IP technology is the ability
of end-nodes to communicate in any manner they choose, without prior
coordination with some kind of protocol gateway. NAT, and even more so
CGN, explicitly disables this key feature.
And this is not what the IETF should be doing. The IETF should seek
to maximise the technical capabilities of the Internet protocol
suite so that it may continue to enable new uses of that key feature,
i.e. end-node reachability.
Allocating address space that blesses CGN is a clear violation of this.
Is that true?
And if so, why, and how can it be formulated or find support in earlier work?
And if it is not true: why, and where do you find support for that view?
I ask because you might be touching on something quite fundamental there... can
the IETF support something that will break or limit reachability on the Internet?
IMNSHO, the answer to that question should only be NO. Where can I find
support for this view? I believe, for starters, that:
a/ The original IPv4 Internet was designed with a flat reachability
model, as seen from the host IP stack. (Routers are a different thing,
but the flatness indeed repeats itself, technically, in the DFZ.)
b/ When the Internet faced an address shortage, there were two solutions
put forward: one being CIDR and the other being IPv6 (née IPng,
IIRC). While it is true that there was work codified in RFC 1631 etc. that
makes the IETF one of the culprits behind NAT, it is also written that:
   The short-term solution is CIDR (Classless InterDomain Routing) [2].
   The long-term solutions consist of various proposals for new internet
   protocols with larger addresses.

   Until the long-term solutions are ready an easy way to hold down the
   demand for IP addresses is through address reuse.

   ....

   This solution has the disadvantage of taking away the end-to-end
   significance of an IP address, and making up for it with increased
   state in the network.

   (RFC 1631, p. 1)
Here we clearly see that the authors tasked with describing address
translation were fully aware that they were breaking E2E, and that
they saw address translation as an interim measure to be retired
as soon as suitable long-term measures were available.
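For concreteness, the "increased state in the network" that RFC 1631 admits to can be sketched as a per-flow translation table. This is a toy illustration, not anything from the RFC: the class name, the port range, and the single shared public address are my assumptions.

```python
# Toy sketch of NAPT per-flow state (illustrative assumptions: one public
# address, ports assigned sequentially from 40000, no timeouts).

class Napt:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        # (private_ip, private_port, dst_ip, dst_port) -> public_port
        self.out = {}
        # public_port -> (private_ip, private_port)
        self.back = {}

    def translate_outbound(self, src_ip, src_port, dst_ip, dst_port):
        """Rewrite an outbound packet's source, creating state on first use."""
        key = (src_ip, src_port, dst_ip, dst_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = (src_ip, src_port)
            self.next_port += 1
        # The packet leaves carrying the shared public address.
        return (self.public_ip, self.out[key], dst_ip, dst_port)

    def translate_inbound(self, dst_port):
        """Map an inbound packet back to a private host, if state exists."""
        # An unsolicited inbound packet matches no state and is dropped:
        # this is exactly the loss of end-to-end reachability.
        return self.back.get(dst_port)

nat = Napt("198.51.100.1")
print(nat.translate_outbound("10.0.0.5", 12345, "203.0.113.9", 80))
print(nat.translate_inbound(40000))  # reply to the flow above: delivered
print(nat.translate_inbound(51000))  # unsolicited inbound: None, dropped
```

The asymmetry is the whole point: outbound traffic manufactures the state that inbound traffic needs, so a host behind the box can only ever be a client unless someone coordinates with the gateway first.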
Reading further in the sequence of IETF documents describing NAT,
we discover something of a more defeatist position in RFC 3022. The
discussion of disadvantages is still there, but there is less talk
of replacing NAT with v6. Apparently the massive success of broadband
routers was already showing.
So. With this little historical research made, I think that we can see
a couple of things:
1. The IETF knew the drawbacks.
2. NAT was only a desperate measure.
3. NAT was to be retired when IPv6 was ready.
4. The IETF failed to predict the massive market penetration of broadband
routers and similar solutions.
The key question then becomes: are the market lemmings throwing
themselves over the NAT cliff reason enough to give up sound engineering
principles? I think that uniquely numbering critical devices in a sunset
phase (which is where IPv4 is, resource-wise) is a more prudent use of
address space than giving it away to people who have made bad business
decisions (which running any network business without v6 firmly on the
one-year roadmap is, post, say, 2006). Especially since it will be used
to keep running a system that the IETF has known for 18 years to be
severely deficient.
--
Måns
_______________________________________________
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf