Avira's firewall

Oh *please*, spare me that "layers" bullshit.

Personal firewalls do not increase the security of a server. They increase the attack surface (larger codebase, thus most likely more vulnerabilities) and the overall complexity of the system, and thus actually *lower* your security.

[...]

That's what you already filter at the network boundary. No need to filter yet again on the server.

And managing firewalls centrally instead of managing services centrally is more appropriate, how?

They don't. Period.

cu

59cobalt
Reply to
Ansgar -59cobalt- Wiechers

Hello, Ansgar!

You wrote on 11 Apr 2010 20:52:39 GMT:

| Personal firewalls do not increase the security of a server. They
| increase the attack surface (larger codebase, thus most likely more

So... a firewall belongs between what you protect and what you protect it from.

Reply to
gufus

*chuckle*

I want my opinion to stand, so I have to allow yours to stand. Even if I disagree with it. Thus, we will agree to disagree. Does that work for you?

That is a different point. One that no one has brought up before. Do you have any examples to show?

I'm not so much filtering the same thing that the edge firewall is filtering. Rather, I'm filtering attacks that other servers behind the edge firewall could launch against this one.

I'm sure that the edge firewall is filtering NetBIOS ports, but what happens if another system in the network gets infected, or a web site gets breached, and starts attacking your other servers? This is the type of thing that I think the host-based firewall is meant for.

I'm not saying that centrally managing services is not appropriate. I know of multiple smaller shops that can't afford centrally managed services, yet they are running a network-based AV scanner with a firewall that they can centrally manage. Thus, they can centrally manage the firewall but not the services.

That's your opinion.

Grant. . . .

Reply to
Grant Taylor

I guess it'll have to.


If you have to do that, you have a server placement issue. Boxes that shouldn't be able to access what the server is providing, should not be located in the same network segment.

This is exactly the type of thing that a firewall can't protect you from (unless you're using a sanitizing reverse proxy or something).

Again: any service that should be accessible, cannot be protected by a packet filter. Any service that shouldn't be accessible, should not be running (or at least not be listening on the external interface) in the first place. It really is as simple as that.

They can't afford using the tools that come with the operating system, but can afford to buy a centrally manageable host-based firewall solution? You have to be kidding me.

"sc /?" tells you why you're wrong.
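For reference, `sc` can drive the Service Control Manager on remote machines as well; a minimal sketch (the host and service names here are made up for illustration):

```shell
rem Query and stop a service on a remote Windows host (requires admin rights).
rem "webserver01" and "w3svc" are illustrative names.
sc \\webserver01 query w3svc
sc \\webserver01 stop w3svc

rem Disable the service so it will not start again on boot.
rem Note: sc's syntax requires a space after "start=".
sc \\webserver01 config w3svc start= disabled
```

Wrapped in a loop over a host list, that is central service management with nothing but the built-in tools.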

A quite substantiated opinion, no less.

cu

59cobalt
Reply to
Ansgar -59cobalt- Wiechers

Hello, schtebo!

You wrote on Thu, 8 Apr 2010 04:50:02 -0700 (PDT):

s> I think default Firewall from Microsoft should do it for us all.

After setting up a few off-the-shelf firewalls and getting frustrated with everything, I'm back to using the Win NT stock firewall, and everything is working again.

Good advice. :)

Reply to
gufus

Fair enough. ;-)

Interesting. I will have to do some follow up reading on that.

I think we misunderstand each other. Let me give an example.

Suppose that a hosting company has multiple IIS web servers behind an edge ingress-filtering firewall that only allows traffic to TCP ports 80 and 443 through. Within the network, the servers also allow SNMP and/or RPC for remote management.

What prevents a web site on one of these hosts from becoming compromised and running a local program that starts attacking the other systems in the local subnet? This local program would have unfettered access to SNMP and/or RPC on the other servers behind the edge ingress-filtering firewall.

Conversely, if the web servers were running a software-based firewall, they could easily filter SNMP and/or RPC traffic so that only the management station(s) could access them, thereby protecting them from the program running locally on the compromised server.
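With the built-in Windows Firewall, the rule described here could look roughly like this (a sketch, assuming Server 2008's `netsh advfirewall`; the management-station address is hypothetical):

```shell
rem Allow inbound SNMP (161/udp) only from the management station.
rem With the default inbound policy set to block, no other host can reach it.
netsh advfirewall firewall add rule name="SNMP from mgmt station only" ^
    dir=in action=allow protocol=UDP localport=161 remoteip=192.0.2.10
```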

These types of side attacks (if you will) are what I'm saying a software-based firewall will help prevent.

I'm not sure that I understand what you are trying to say.

The closest that I can come up with is that the edge firewall is doing egress filtering.

What if you modify my above example of the server farm so that one interface is public and another is private (think DMZ / management network), and the local program starts attacking the internal network? Again, I believe that the software-based firewall would help protect other servers from the attack.

A perfect example of a service would be to not run SSH on the external interface, yet run it on the internal interface for remote management.

I believe you mis-understand what I'm getting at.

I'm not aware of any utility included in either 2k3 or 2k8 that allows changes to multiple IIS web servers at one time, e.g. "do not process requests from the w.x.y/24 network."

You are correct that there are ways to administer the operational state of a service, i.e. whether it is started, stopped, etc. That does little to prevent a service from talking to a given subnet.
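For what it's worth, on Server 2008 / IIS 7 a per-subnet restriction like the one above can at least be scripted per server with `appcmd` (a sketch; the site name and subnet are made up, and you would still have to run it against each server yourself):

```shell
rem Add a deny entry for one subnet to a site's ipSecurity list (IIS 7).
%windir%\system32\inetsrv\appcmd set config "Default Web Site" ^
    -section:system.webServer/security/ipSecurity ^
    /+"[ipAddress='10.1.2.0',subnetMask='255.255.255.0',allowed='false']"
```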

I'm sure it is. ;-)

Grant. . . .

Reply to
Grant Taylor

Sorry, but that's just ridiculous. If you're that concerned about security, you don't allow SNMP or RPC in the first place. Period. Rather than running additional code on the servers, you'd lock them down tight, update them frequently, and monitor them closely.

You don't seem to understand how SNMP works. What exactly prevents compromised server A from spoofing the source address of the SNMP packets it sends to victim server B on the same network segment? The protocol is UDP-based, after all.

You mean the "sanitizing reverse proxy" thingie? Those are not about egress filtering, but ingress filtering. They sanitize (i.e. rewrite/canonicalize) the input data stream going from a client to a server, and thus protect a server from malicious user-supplied data. mod_security for Apache is an example of this kind of software.

As explained above, this won't necessarily work as you expect.

SSH is a perfect example of a service that does not need to be "protected" with a local firewall at all. You disallow password authentication and restrict which users can log in from where.
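The restrictions mentioned here can be expressed directly in OpenSSH's `sshd_config`; an illustrative fragment (the user name and network are made up):

```
# Key-based logins only
PasswordAuthentication no
PubkeyAuthentication yes

# Only this user, and only from the management network
# (AllowUsers accepts CIDR notation for the host part)
AllowUsers admin@10.0.0.0/24

PermitRootLogin no
```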

If you're referring to exploitable vulnerabilities: trying to "protect" SSH with some kind of personal firewall would just move the problem from sshd to the personal firewall instead of solving it, and I clearly trust SSH more than any personal firewall. IPv6, anyone?

I don't consider the potential gain in security (which may be a lot less than you expect, as explained above) worth the additional complexity and effort in keeping another piece of software up-to-date.

Of course not, because that's not what management of services is about. I believe I already said that if you want that level of isolation, you're far better off putting the servers in separate DMZs.

cu

59cobalt
Reply to
Ansgar -59cobalt- Wiechers

I was just using SNMP / RPC as an example.

For the sake of discussion, please provide a service that would be needed internally to support line-of-business applications (even in a DMZ) that would not be allowed externally.

I do understand SNMP well enough for this discussion. There is nothing that prevents the compromised server from spoofing anything.

However, I think we can agree that there is an order-of-magnitude difference in complexity between malware that is capable of spoofing IP (and possibly MAC) addresses and malware that simply relies on the OS IP stack. Likewise, I believe there is quite a difference in the prevalence of each.

You can't protect against everything. There is a point of diminishing return with more security.

No. I mean an edge firewall that is (hopefully) only allowing replies from TCP ports 80 and 443 (and possibly some ICMP) as well as only allowing the internal subnet as a source IP range.

I am perfectly aware of what a reverse (or forward) proxy is for and can do. I was not bringing them into this discussion.

Aside from IP spoofing and your opinion that the firewalls present a bigger target, I fail to see how this would not work, or at least help prevent (read: slow down / limit) internally initiated attacks.

Other than the fact that SSH is a little more intelligent about the application layer, I believe it too is equally susceptible to the IP spoofing that you were referring to above. (Granted, once successfully spoofed, there is a greater hurdle to overcome at the application layer, with encryption, RSA keys, and the like.)

I will agree that SSH is quite a bit more hardened than most public services, and can probably withstand quite an onslaught.

For the sake of discussion, suppose that the server farm that we are talking about is for multiple MS-SQL servers that have to allow inbound connections, at least from the systems behind the edge firewall.

That is a valid opinion that I can't argue with. Nor can I say that it's logically wrong. The only thing that I can say is that mine differs from yours.

I was referring to something specifically meant to remotely manage the configuration of services, such that you can control which IPs SSH (or whatever) will talk to.

I am referring to a server farm / DMZ of servers for a given task, off by themselves, i.e. a subnet dedicated to web servers or email servers or db servers or ...

Or do I misunderstand you: are you saying to put each individual server in its own DMZ, away from other servers?

Grant. . . .

Reply to
Grant Taylor

The only services that come to mind are Remote Desktop and SSH.

No, actually we can't agree on that, as it's just plain wrong. Unless you are talking about script-kiddy level, spoofing addresses (either IP or MAC) is the most basic of the basics. And in the case of UDP, sending the packet with a fake sender address is all there is to it. It's neither difficult nor complex at all.

[...]

Because with UDP you don't need to establish a connection. You write the spoofed sender address to the packet, fire and forget.
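The fire-and-forget nature of a spoofed UDP datagram is easy to see if you build one by hand. A minimal sketch in Python (the addresses and ports are made up; actually putting it on the wire would additionally need a raw socket and root privileges):

```python
import struct

def ip_checksum(data: bytes) -> int:
    """Standard ones'-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    s = (s >> 16) + (s & 0xFFFF)
    s += s >> 16
    return ~s & 0xFFFF

def spoofed_udp_packet(src_ip, dst_ip, sport, dport, payload: bytes) -> bytes:
    """Build an IPv4+UDP datagram with an arbitrary (spoofed) source address."""
    src = bytes(map(int, src_ip.split(".")))
    dst = bytes(map(int, dst_ip.split(".")))
    udp_len = 8 + len(payload)
    # UDP header: source port, dest port, length, checksum (0 = unused for IPv4)
    udp = struct.pack("!HHHH", sport, dport, udp_len, 0) + payload
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0, 20 + udp_len,  # version/IHL, TOS, total length
                      0, 0,                   # identification, flags/fragment
                      64, 17, 0,              # TTL, protocol=UDP, checksum placeholder
                      src, dst)
    hdr = hdr[:10] + struct.pack("!H", ip_checksum(hdr)) + hdr[12:]
    return hdr + udp

# Claim to be 10.0.0.99 while "attacking" the SNMP port of 10.0.0.2.
pkt = spoofed_udp_packet("10.0.0.99", "10.0.0.2", 4444, 161, b"snmp-ish payload")
```

Nothing in the packet ties it to the real sender; any reply goes to the spoofed address, which for a fire-and-forget attack doesn't matter.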

On top of being a lot more intelligent at the application layer, SSH (unlike SNMP) is also TCP-based. How do you think the compromised host is going to receive TCP response packets when they're not going back to the attacker's IP address? Unlike UDP, TCP is not stateless.

That (and the user/source restrictions) come on top of the problem of intercepting/spoofing a TCP connection.

Please be more specific about the scenario. By "from the systems behind the edge firewall" you mean connections from within some LAN (management or whatever) to the servers in the DMZ? What kind of connection? Why wouldn't RDP suffice? Why can't the connection be tunneled (e.g. with stunnel) in case RDP does not suffice?

In a scenario like that: if an attacker can exploit one server, he can exploit the other (similar) servers just the same. No need at all to take a different route for compromising them.

For a server farm as you described above: no. But as explained above, there's no need to further isolate them anyway. For servers carrying out different tasks it might be an option.

cu

59cobalt
Reply to
Ansgar -59cobalt- Wiechers

RDP.

I was referring to script-kiddy.

I'm of the opinion that little will stop a properly motivated skilled attacker.

The rest of the chaff is what I'm thinking about protecting against.

The compromised host would need to be in the return path or local LAN of the spoofed host.

Agreed.

Let's say that it's a routed VLAN that is firewalled and using globally routable IPs for the servers in said VLAN. (Said another way, the same broadcast domain.)

RDP or SSH should suffice for management. But what about some other service that is used by the server? I've never messed with it: what ports need to be open for MS Cluster Server nodes to communicate with each other?

As long as the edge firewall will allow access to the other servers (i.e. is not doing some sort of load balancing based on source IP that would ensure one IP only ever talks to one server), sure.

That is also assuming that all the servers are serving the same content. That assumption might not be the case for a web farm that assigns a (vulnerable) web site to some but not all servers.

Grant. . . .

Reply to
Grant Taylor

That's the protocol Remote Desktop uses. So, what about it?

Script-kiddies are no serious threat to properly maintained systems. It's the determined attackers that you need to defend against. They are the guys who will cost your business real money.

[...]

TCP is not UDP. If the compromised host spoofs the source address, the response packets will not go back to the compromised host (unless the attacker gets the switch into hub mode, which your monitoring should notice).

I didn't have to deal with it either, but the fine documentation [1] mentions these:

Cluster Services          3343/udp
RPC                       135/tcp
Cluster Administrator     137/udp
Randomly allocated ports  1024/udp - 65535/udp
                          49152/udp - 65535/udp (Server 2008)

However, since the cluster nodes need to be able to talk to each other, there's nothing a personal firewall can do about protecting these.

A vulnerable web site is not the same as a vulnerable service. And although the vulnerability may be exploited to compromise another service or even the system (through SQL injection, for instance), this kind of attack can be carried out from the outside as well.

[1]
formatting link
Regards
Ansgar Wiechers
Reply to
Ansgar -59cobalt- Wiechers
