In message <E8014931-944D-42C9-A950-F3B1CFB1B0C5(_at_)muada(_dot_)com>,
Iljitsch van Beijnum writes:
> On 7-sep-2005, at 1:04, Steven M. Bellovin wrote:
>>> Either the firewall successfully blocks the protocol and the
>>> firewall works and the protocol doesn't, or the firewall doesn't
>>> manage to block the protocol and the protocol works but the
>>> firewall doesn't. So whatever happens, someone is going to be
>>> unhappy.
>> Not at all. Often, a firewall needs to know a fair amount about the
>> protocol to do its job. FTP is the simplest case -- it has to look at
>> the PORT (and, in some configurations, the PASV) command. H.323 and
>> SIP are more complex.
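Bellovin's FTP case is concrete enough to sketch. A stateful firewall's FTP helper has to parse the PORT command it sees on the control channel to learn which ephemeral data port to open. A minimal sketch of that parsing, per RFC 959 (the function name and sample command are illustrative, not taken from any real firewall):

```python
# Parse an FTP PORT command (RFC 959) the way a firewall's FTP helper
# must, to learn which inbound data connection to permit.
# Format: PORT h1,h2,h3,h4,p1,p2 -- four host octets plus a 16-bit
# port split into high and low bytes.

def parse_port_command(line):
    """Return (ip, port) extracted from an FTP PORT command line."""
    if not line.upper().startswith("PORT "):
        raise ValueError("not a PORT command")
    fields = [int(f) for f in line[5:].strip().split(",")]
    if len(fields) != 6 or any(not 0 <= f <= 255 for f in fields):
        raise ValueError("malformed PORT arguments")
    ip = ".".join(str(f) for f in fields[:4])
    port = fields[4] * 256 + fields[5]
    return ip, port

# A firewall seeing this on the control channel would open a pinhole
# for an inbound data connection to 192.0.2.10, port 202*256+42.
print(parse_port_command("PORT 192,0,2,10,202,42"))
```

This is the simple case; protocols like H.323 embed addresses in binary-encoded bodies, which is why their helpers are so much harder to get right.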
> I'm not very comfortable with the notion of having a third party
> device deciding what is valid communication between two hosts
> connected to the internet. This is just too fragile. For instance, a
> popular filter on *BSD (they're all named [i]pf[w] so I can never
> remember which is which) is unable to handle RFC 1323 window scaling
> properly. PIX firewalls truncate(d) EDNS0 packets. ICMP packet too
> big messages are filtered in many places, as is ECN.
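The window-scaling complaint comes down to RFC 1323's arithmetic: the 16-bit window field in every segment is left-shifted by a scale factor negotiated once, in the SYN options, so a middlebox that strips or mishandles the option leaves the two ends disagreeing about the real window. A sketch of the arithmetic (the values are illustrative):

```python
# RFC 1323 window scaling: the effective receive window is the 16-bit
# window field shifted left by the scale factor negotiated (once) in
# the SYN/SYN-ACK options.

def effective_window(window_field, scale):
    """Compute the effective window in bytes from the raw field."""
    if not 0 <= scale <= 14:          # RFC 1323 caps the shift at 14
        raise ValueError("invalid window scale")
    if not 0 <= window_field <= 0xFFFF:
        raise ValueError("window field is 16 bits")
    return window_field << scale

# Both ends negotiated scale 7; an advertised field of 512 means a
# 65536-byte window.
print(effective_window(512, 7))       # 65536

# If a filter drops the scale option from the SYN, one end applies the
# shift and the other doesn't: the same segment now advertises only
# 512 bytes, and the connection stalls or crawls.
print(effective_window(512, 0))       # 512
```

Because the scale is only ever carried in the SYN, a filter that mangles it breaks every subsequent segment of the connection, which is exactly the kind of silent damage being complained about here.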
> I recognize that carrying all existing firewalls to the scrap heap
> won't immediately solve our problems, but we do have to realize that
> current filtering practices do almost as much harm as they do good.
> We really need better stuff here.
> (It's amusing to see that to some people, security means encrypting
> their communication, while to others it means inspecting that same
> communication.)
I opt for each in its place. I'm also an advocate for distributed
firewalls. But I *really* don't want to refight the whole firewall
issue yet again; I've been through that too many times in the last
decade or so.
For right now, though, the issue is engineering. Again, the vast
majority of hosts are behind firewalls. Is the philosophical issue
that important that we should ignore it? I don't think so.
--Steven M. Bellovin, http://www.cs.columbia.edu/~smb
Ietf mailing list