
Re: Firewalling for the new millennium, was: Problem of blocking ICMP packets

2004-05-08 08:43:47
On 8-May-04, at 3:12, Mark Smith wrote:

>> Filtering on protocol/port numbers is a broken concept. When
>> are we going to take the time to come up with a *real* security
>> architecture? One that allows hosts to receive wanted packets
>> and reject unwanted ones,

> I've only read the abstract; however, Steve Bellovin's
> "Distributed Firewalls"
> (http://www.research.att.com/~smb/papers/distfw.html) seems to
> suggest exactly that.

Yes, this is good stuff. But I don't think distributed firewalling on its own is the full answer.

> Interestingly, with all the recent attacks on Microsoft software,
> they seem to be going down this distributed firewalls path, where
> each host has a firewall. I'm not sure if they are aware of
> Steve's paper, or whether it is a result of these worms almost
> always seeming to be able to bypass any network-based security
> in place anyway. I'd suspect the latter.

You are assuming the existence of network-based security (i.e., a traditional firewall or the side effects of NAT) in the first place. Despite the way it sometimes seems, not everyone uses NAT. And many of those that do simply direct all incoming sessions to a fixed internal address, because maintaining a list of specific port mappings is too much work. Last but not least, there is physical movement of infected systems. (A laptop infected at home is brought to work.)

The part that I don't get is why a host runs insecure services in the first place and then blocks those services. The only reason for doing that I can think of is that those services are for host-local use. Obviously it would make a lot more sense to make these services available on the loopback address only. This is what MacOS does, for instance.
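For instance (a minimal sketch in Python; the service and port number are made up for illustration):

    import socket

    # A hypothetical host-local service. Binding to 127.0.0.1 means the
    # service is simply unreachable from the network, regardless of what
    # any firewall does; binding to 0.0.0.0 would expose it on every
    # interface and leave us depending on packet filters to keep it local.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8000))      # loopback only; the port is arbitrary
    server.listen(5)

    conn, addr = server.accept()
    print("connection from", addr)        # can only ever be 127.0.0.1
    conn.close()
    server.close()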

> Linux and other OSs already have firewalls built in, so
> maybe we are seeing an unplanned and evolutionary transition to
> this model.

There are two issues that aren't dealt with in this model:

- link-local services
- undesired locally initiated communication

The reason why systems like Windows have so many open services is (in part) that it's desirable to share files and so on in small work groups without having to do much, if any, configuration. This is a legitimate need, and any security device that gets in the way will invariably be disabled in a good percentage of all cases. So what we need is a good way to keep local services local. That doesn't fix any holes those services may have, but it does mitigate the potential for wide-scale abuse. (The fences on the observation deck at the top of the Empire State Building don't cure suicidal tendencies, but they do make 34th Street safer to walk.) Various approaches come to mind. The obvious one is using unroutable address space for these services, but that doesn't work because of address translation and the address selection problem in the presence of multiple source and/or destination addresses. But it can also be done using a TTL hack such as the one RFC 3682 uses for BGP, or by setting aside a range of port numbers for site-local services and then filtering those ports at site edges.
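As a rough illustration of the TTL hack (a sketch only: Linux-specific, port number arbitrary, and my own adaptation of the RFC 3682 idea to a local UDP service rather than anything taken from the RFC):

    import socket
    import struct

    # Senders on the local link transmit with TTL 255:
    #   sender.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 255)
    # Every router decrements the TTL, so a datagram that still arrives
    # with TTL 255 cannot have crossed a router; the receiver treats it
    # as link-local and drops everything else. For a multi-hop site one
    # would instead require ttl >= 255 minus some trust radius.
    IP_RECVTTL = getattr(socket, "IP_RECVTTL", 12)   # 12 is the Linux value

    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.setsockopt(socket.IPPROTO_IP, IP_RECVTTL, 1)  # report arriving TTL
    receiver.bind(("0.0.0.0", 5000))                       # port is arbitrary

    while True:
        data, ancdata, flags, addr = receiver.recvmsg(2048, socket.CMSG_SPACE(4))
        ttl = None
        for level, ctype, cdata in ancdata:
            if level == socket.IPPROTO_IP and ctype == socket.IP_TTL:
                ttl = struct.unpack("@i", cdata[:4])[0]
        if ttl != 255:
            continue                  # crossed at least one router: not on-link
        print("link-local datagram from", addr, data)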

Undesired locally initiated communication can be fixed by ZoneAlarm-like filters, where filtering (also) happens based on which application is trying to communicate rather than just the information inside the packets themselves. The problem with ZoneAlarm is that the local user has to decide what applications can and can't do, which is often too much to ask of end users. A distributed policy mechanism not unlike the one proposed in the paper by Steve Bellovin you cited could help here, especially if the application version is taken into account: as soon as a vulnerability is found in an application, it can be blocked by changing the policy in a central place. When users upgrade to a fixed version of the application they can immediately communicate again.
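A toy sketch of what such a centrally distributed policy could look like on the host side (every name and the policy format here are invented for illustration; this is not from Bellovin's paper):

    # The site pushes out a table of (application, version) pairs that are
    # currently not allowed to communicate, e.g. because a vulnerability
    # was just announced. The host-side filter consults it before letting
    # an application open a connection.
    BLOCKED = {
        ("mailclient", "2.1"): "remote hole, fixed in 2.2",
    }

    def may_communicate(app, version):
        reason = BLOCKED.get((app, version))
        if reason is not None:
            print("blocking %s %s: %s" % (app, version, reason))
            return False
        return True

    may_communicate("mailclient", "2.1")   # blocked by the central policy
    may_communicate("mailclient", "2.2")   # the upgraded version talks again immediately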

>> rather than the current one where any
>> correlation to whether a packet is wanted and whether it's
>> rejected seems coincidental at best? One that at least
>> entertains the possibility of doing something about denial of
>> service attacks? And, last but not least, one that allows
>> reasonable protocols, carrying desired communication, to
>> function without undue breakage?

> I've understood that what you have described is the end-goal
> of end-to-end, opportunistic encryption and authentication, i.e.
> IPsec. Once the network can't tell what type of traffic it is,
> i.e. the port numbers (or protocol numbers if IPsec is run in
> tunnel mode), these network-based firewalls will be useless, and
> hopefully will be turned off.

They are of limited use even today, as it's trivial to change port numbers if you control both ends of the communication. The only thing that's easy to block (or allow) in a current firewall is communication towards known services on known ports.

> That wouldn't necessarily remedy denial of service attacks
> though. I think denial of service attacks will always be possible
> if entities can issue traffic to the network in an unregulated or
> unidentified manner.

I think it's possible to remedy denial of service by having service providers do proxy IPsec AH verification. I haven't found the time to write a draft about this so far, though. One of the open questions is whether this can be done if correspondents are running regular ISAKMP or whether changes on both ends are necessary. And obviously this requires "you-didn't-think-I-could-do-line-rate-crypto-at-10-Gbps-did-you-ha-ha-you-stupid-scriptkiddie" type line cards...

An "IPsec" only Internet would provide a disincentive to DoS, as
I'd presume that it implies that end-points are uniquely
identified, which allows responsibility for these attacks to be
attributed. That may not be a world we want to live in though as
anonimity in communications can also be a useful privacy feature.

I certainly don't want an IPsec-only Internet, as the extra overhead, both in CPU cycles and packet size, is significant. But there are large classes of applications that would benefit from IPsec. I'm not worried about the privacy aspect: with SSL, only the server needs a certificate in order to protect the communication. I don't see why this model wouldn't work for many applications of IPsec.
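For reference, this is roughly what that SSL model looks like from the client side (Python sketch; the host name is just an example): the server proves who it is with a certificate, while the client presents none and stays anonymous at that layer.

    import socket
    import ssl

    # Only the server side loads a certificate and private key (via
    # load_cert_chain() on its context). The client merely verifies the
    # server against its trust store and never identifies itself.
    ctx = ssl.create_default_context()
    with socket.create_connection(("www.example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="www.example.com") as tls:
            print("protected by", tls.version(), "- no client certificate sent")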

> In a few respects, DoS attacks and spam are similar - they rely
> on or assume near or absolute source anonymity, and very low
> costs of transmission. If, or hopefully when, any solutions are
> found to the spam problem, the fundamental methods or techniques
> may also be applicable to DoS attacks.

There is a very fundamental difference: spam can only come in as fast as the mail server is willing to accept it, as spammers must play nice at least as far as RFC 793 goes. DoS traffic, on the other hand, is sent entirely unilaterally; the destination has no way to stop it or slow it down.



