there are a couple of problems with this analysis:
one is that it considers only application protocols that are in
widespread use. there are lots of applications that are used by limited
communities that are nevertheless important.
that's a silly question. you wouldn't recognize the names if you saw
them, because they're not in widespread use.
and of course, since NATs are so pervasive, most of the applications
that are in widespread use have been made to work with NAT (often at
tremendous expense, and ...)
Could you explain the tremendous expense a bit more?
several kinds of expense: one is the added implementation complexity
of having to build your own addressing and routing scheme within the
application - and this also results in poorer performance and
(usually) degraded reliability, because there are more things in the
signal path that can fail. another kind of expense is that when trying
to set up communication between two or more peers that are all behind
NAT, you usually need a rendezvous server on the public network that
can mediate between the NAT-crippled hosts. it costs money to provide
those servers, especially if you get a lot of usage. this tends to
mean that the application can no longer be free or open source,
because there has to be a way to pay for those servers. this is
hugely costly to the user community.
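to make the rendezvous pattern concrete, here is a minimal sketch of
it in Python. this is an illustration only, simulated on localhost
with plain UDP sockets (so there is no actual NAT in the path); the
names (run_rendezvous, run_peer) and the one-datagram "protocol" are
invented for the example. the key idea it shows is the one described
above: each peer sends a packet to a publicly reachable server, the
server records the *observed* source address of each peer (on a real
NAT, that is the public mapping, not the peer's private address), and
hands each peer the other's address so they can then talk directly. a
real deployment would also need keepalives, retries, and NAT-type
handling (see STUN/ICE), which is exactly the extra machinery and
server cost being complained about.

```python
import socket
import threading

def run_rendezvous(sock, n_peers=2):
    # Collect one registration datagram per peer. The observed source
    # address is what matters: behind a real NAT it would be the NAT's
    # public (ip, port) mapping for that peer.
    peers = []
    while len(peers) < n_peers:
        _, addr = sock.recvfrom(64)
        if addr not in peers:
            peers.append(addr)
    # Tell each peer where the other peer(s) can be reached.
    for me in peers:
        for other in peers:
            if other != me:
                sock.sendto(f"{other[0]}:{other[1]}".encode(), me)

def run_peer(rend_addr, received):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(5)
    # The outbound packet is what would open the NAT mapping.
    s.sendto(b"register", rend_addr)
    peer_addr, hello = None, None
    # Messages from the rendezvous server and from the other peer can
    # arrive in either order, so accept both until we have each one.
    while peer_addr is None or hello is None:
        data, _ = s.recvfrom(64)
        if data == b"hello":
            hello = data                    # direct packet from the peer
        else:
            host, port = data.decode().rsplit(":", 1)
            peer_addr = (host, int(port))   # address handed out by the server
            s.sendto(b"hello", peer_addr)   # now talk to the peer directly
    received.append(hello)
    s.close()

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
threading.Thread(target=run_rendezvous, args=(server,), daemon=True).start()

received = []
workers = [threading.Thread(target=run_peer, args=(server.getsockname(), received))
           for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(received)   # both peers end up holding the other's b"hello"
```

note that even this toy version needs a third host (the server) that
both peers can reach; that is the ongoing operational cost the text
refers to, and it exists only because the peers cannot simply address
each other.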
another problem is that it only considers current applications. a big
part of the problem with NAT is that it inhibits the
development/deployment of useful new applications.
As Phillip stated, I don't see the problem with future applications.
that's probably because you don't develop applications, so you can
afford to be naive about them.
IETF mailing list