Interesting URL for Firewalls

"steve h." wrote in news: snipped-for-privacy@g9sbk21.scmhc.local:

What if you DO want to run some servers on your desktop for your own use but don't want anyone outside that host to use those servers? You might want a web server running so you can design and test your web pages before uploading them to your public server, but you do NOT want anyone else using your web server, so just use the firewall to make sure any *external* connections to port 80 are not allowed. A stateful firewall should reject any inbound connection attempt that was not the result of a prior outbound connection to that source. You get to run your own servers on the standard ports (or non-standard ones) without fear of some hacker finding them and misusing them.

You might want to leave the Messenger NT service running but allow only IP addresses in the range of your subnet to make inbound connections to that service. Rather than kill off a possibly useful service on your own intranetwork, just block inbound connections to it from outside that network.
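As a rough illustration, the stateful check described above boils down to tracking which connections the host itself opened. This is only a Python sketch; the connection table and function names are made up for illustration, not any real firewall's API:

    established = set()  # (remote_ip, remote_port, local_port) tuples we initiated

    def record_outbound(remote_ip, remote_port, local_port):
        """Remember a connection the local host opened."""
        established.add((remote_ip, remote_port, local_port))

    def allow_inbound(src_ip, src_port, dst_port):
        """Accept inbound traffic (e.g., to a private web server on
        port 80) only if it replies to a prior outbound connection."""
        return (src_ip, src_port, dst_port) in established

    record_outbound("192.0.2.10", 80, 49152)
    print(allow_inbound("192.0.2.10", 80, 49152))  # True: reply traffic
    print(allow_inbound("203.0.113.7", 4444, 80))  # False: unsolicited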

Also, many of the features of a firewall go beyond simplistic inbound protection (with or without stateful packet inspection). Some include URL filtering, so you could, for example, make sure that any images, links, or redirects through DoubleClick get blocked. They include parental controls based on categorization (which admittedly requires the web site owner to actually categorize their site, or you could block any site that doesn't categorize itself, assuming owners would voluntarily categorize themselves correctly). You might want to create accounts within the firewall based on the Windows login account so that you can control where users of those accounts can navigate, like letting your kid, when logged in under their own account, get only to Disney sites, and locking out the e-mail ports so they can only visit whatever site provides them with protected and regulated webmail.

While I configure my firewall not to bother me with intrusion alerts, there are times when I'd like to check whether there have been any, to see if there is enough info to find out where an attack originated. Some firewalls include privacy protection, like blocking the Referer header unless you define exception rules for a domain (for those you trust that demand to know you came from one of their own web pages before letting you get to another of their pages). Ad blocking is nice but, again, you need the ability to define an exception for cases where the blocked URL was not an advertisement but something legit and non-spammy. Some include spam detection, but I've pretty much settled on using SpamPal and its plug-ins.
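To make the URL-filtering and Referer-blocking ideas concrete, here is a minimal Python sketch; the blocklist, the exception list, and the helper names are all made up for illustration:

    from urllib.parse import urlparse

    BLOCKED_DOMAINS = {"doubleclick.net"}   # example ad network to block
    REFERER_EXCEPTIONS = {"example.com"}    # trusted sites that may see a Referer

    def should_block(url):
        """Block a URL whose host is (or is under) a blocked domain."""
        host = urlparse(url).hostname or ""
        return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

    def strip_referer(headers, destination):
        """Drop the Referer header unless the destination is trusted."""
        host = urlparse(destination).hostname or ""
        if host in REFERER_EXCEPTIONS:
            return headers
        return {k: v for k, v in headers.items() if k.lower() != "referer"}

    print(should_block("http://ad.doubleclick.net/img.gif"))  # True
    print(strip_referer({"Referer": "http://me.example.org"},
                        "http://other.example.net/page"))     # {}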

The outbound protection (i.e., prompting to authorize) for applications is a bit overhyped. You run a program, you get a prompt, you authorize that program so it can make Internet connections thereafter, and then you forget about it because you don't get prompted anymore. However, some programs can be called by other programs to make a connection on their behalf; programs can use IE, for example. So unless you configure your firewall to alert you when an unauthorized program calls a previously authorized program, you really cannot be sure what program is actually attempting to get a connection. My firewall has the option to alert me when another program tries to use a previously allowed program to make a connection, but I suppose not all firewalls have that option. The other handy function of alerting on outbound connection attempts is simply to know when something is trying to make a connection. Do you know if a newly installed application will try to phone home? Even if you know, might you not want to ensure that it can only visit specific hosts for those updates and hasn't been hacked or imitated to connect elsewhere? Many times I find applications attempting outbound connections for which I haven't a clue and about which the documentation said nothing. It's a way of keeping the application maker honest about what their software will do.
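You can get a crude view of which programs currently hold connections with the third-party psutil module. This Python sketch is only illustrative (the "approved" list is made up), and note its limitation, which is exactly the point above: it shows which process owns a socket, not which program asked that process to connect on its behalf:

    import psutil  # third-party: pip install psutil

    APPROVED = {"firefox.exe", "thunderbird.exe"}  # example allow-list

    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or conn.pid is None:
            continue
        try:
            name = psutil.Process(conn.pid).name()
        except psutil.NoSuchProcess:
            continue  # the process exited before we could look it up
        if name.lower() not in APPROVED:
            print("unapproved program", name, "connected to", conn.raddr)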

So, yeah, if your definition of a personal firewall is what was available pre-1996, without stateful packet inspection, then there isn't much point in running one. However, with firewalls evolving into security suites, they do have definite value. If you do rigorous security maintenance and management of your system then you don't need a firewall, but this only works if you are the wizard, you are the only one using your computer(s), you are 100% diligent in your actions, and you know all your applications well, even beyond their included documentation. While such a scenario does exist, you don't base community need on a rarity.

Reply to
*Vanguard*

formatting link
He sure doesn't like ZA or the rest of them.

Reply to
steve h.

"Thor Kottelin" wrote in news: snipped-for-privacy@anta.net:

At additional cost and complexity beyond the level of protection needed for a "personal" host? The article discussed *personal* firewalls for *personal* hosts, not protecting an enterprise or corporate network. Even in a corporate network, I don't trust the other employees and will still use a "personal" firewall to protect my host from within. Not many companies use more than a boundary firewall (i.e., there are no firewalls between intranetwork segments).
Reply to
*Vanguard*

That is a very old rant. Everybody has an opinion. Mine is that a properly configured, quality software firewall will keep the bad guys out. Personally, I don't like ZA or other application-based personal firewalls, but that is more a matter of personal preference than a judgement on their effectiveness.

Reply to
"Crash" Dummy

"Leythos" wrote in news: snipped-for-privacy@news-server.columbus.rr.com:

We have a firewall in our Alpha lab to protect that environment. However, we also use our desktops as testing platforms (which is why IT has been scolded whenever they have touched our desktops, and why we won't participate in the company's leased plan for desktops and instead build our own). Our desktops are outside the lab network, on a segment shared with other departments, and frankly I don't trust the other employees. We kept our desktops clean with local anti-virus and firewall products while the rest of the company had to battle infections that often originated from within (i.e., sneakernet, laptops, removable drives).

I suppose we could have implemented our own firewall device, but we would have had to run our own wiring separate from the hub closet, which was not under our control. We also needed the exposure of a corporate network when testing our products (and had to get permission beforehand for any stress testing). At home, I don't want to manage all the other home computers, so it is easier to configure the NAT router so only my hosts can talk to each other and all the home computers are isolated from me, but I suspect I could do the same using the Trusted Hosts security zone in my personal firewall.
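The "only my hosts can talk to each other" check amounts to a subnet membership test, as in this Python sketch (the subnet and addresses are examples, not the actual home network):

    import ipaddress

    TRUSTED = ipaddress.ip_network("192.168.1.0/28")  # example: my hosts only

    def is_trusted(peer_ip):
        """True if the peer is inside my own trusted subnet."""
        return ipaddress.ip_address(peer_ip) in TRUSTED

    print(is_trusted("192.168.1.5"))    # True: one of my hosts
    print(is_trusted("192.168.1.200"))  # False: another home computer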

Reply to
*Vanguard*

I'm not sure about that - we did a medical center where the office workers and the training center were on completely separate firewalls, with the web server in one DMZ and the database/file server in another. We built tunnels between segments for authenticated users.

While you are right in general, I've seen a number of companies that segregate the HR/Accounting groups behind their own firewall inside the normal users' LAN.

Reply to
Leythos

Use a real firewall instead of a "software firewall"?

Thor

Reply to
Thor Kottelin

Set them to listen only on a single IP, which is 127.0.0.1

Works fine with my local news server, for example.
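In Python, for example, binding only to the loopback address looks like this (port 8080 is arbitrary):

    import socket

    # A server bound to 127.0.0.1 is reachable only from the same machine;
    # no firewall rule is needed to keep outside hosts away from it.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8080))  # "0.0.0.0" would listen on all interfaces
    srv.listen()
    print("listening on", srv.getsockname())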

Juergen Nieveler

Reply to
Juergen Nieveler

One could even argue that a "software firewall" could be an additional complexity, even a potential vulnerability, compared to using an access list on the server. More applications running is not always better.
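An access list of that sort can be as simple as the server checking the peer address at accept time, as in this Python sketch (the allowed addresses and the port are examples):

    import socket

    ALLOWED = {"127.0.0.1", "192.168.1.10"}  # example access list

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 8080))
    srv.listen()
    while True:
        conn, (peer_ip, peer_port) = srv.accept()
        if peer_ip not in ALLOWED:
            conn.close()  # drop unlisted peers immediately
            continue
        conn.sendall(b"hello\n")
        conn.close()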

Thor

Reply to
Thor Kottelin

Hi,

*Vanguard* wrote:

Your IT department should not trust your machines and should isolate you from the other departments.

Greetings, Jens

Reply to
Jens Hoffmann

When it comes to developers, designers, and people that test things (PC based), they belong in their own network segment, isolated from the rest of the company. At least that's how I've designed every development center we've built across the globe.

Developers are almost always the worst at patching their systems and updating their AV products, and the best at messing up their machines.

The simple method is to install a firewall with an ANY outbound rule that permits the developers to get to the Internet but not anywhere else. Install a couple of test servers in the DMZ with inbound access from the company LAN to the DMZ (but not from the developers' LAN). Install the apps to be tested (via DVD) on the test servers and keep the test servers isolated from the development lab - maybe only allow the VNC ports from the developers' lab segment to the DMZ segment.

With this the developers can't infect the company lan, the company lan can't reach the developers, and all testing is done on isolated test servers.
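That design reduces to a small first-match rule table. Here's a Python sketch of it; the segment names, networks, and VNC port range are illustrative, not the actual site plan:

    RULES = [
        # (source segment, destination segment, dst port range, action)
        ("developers_lan", "internet", "any",        "allow"),  # ANY outbound
        ("company_lan",    "dmz",      "any",        "allow"),  # lan -> test servers
        ("developers_lan", "dmz",      (5900, 5910), "allow"),  # VNC ports only
        ("any",            "any",      "any",        "deny"),   # default deny
    ]

    def decide(src, dst, port):
        """Return the action of the first rule matching this traffic."""
        for rule_src, rule_dst, ports, action in RULES:
            if rule_src not in ("any", src) or rule_dst not in ("any", dst):
                continue
            if ports != "any" and not (ports[0] <= port <= ports[1]):
                continue
            return action
        return "deny"

    print(decide("developers_lan", "company_lan", 445))  # deny: lan unreachable
    print(decide("developers_lan", "dmz", 5901))         # allow: VNC to test servers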

Reply to
Leythos

"Leythos" wrote in news: snipped-for-privacy@news-server.columbus.rr.com:

In some companies, Software QA (aka Alpha Test) is part of the Development group. That leads to problems with bias, internal politics within the group, lack of resources, scheduling, and conflict of priorities. Our group and resources are separate (i.e., no developers here). We are not some subdivision of Development or the developers themselves doing the testing but instead are an independent group at the same level in the corporate hierarchy. We work with [mostly] Development, Professional Services, Technical Support, Sales, and Shipping. Development doesn't get to use any of our systems in our lab, and we don't use theirs. Any files we get from them get scanned for viruses, and we do the same before we release the product to Shipping.

We don't use any of their hosts for testing. They could have (and have had) software installed for their development that lets the product run when a customer's host won't have it. Many times they have delivered a product that works for them but not for us (and which might not work for the customer). Often I wish there were a separate Documentation group, since developers are bad at producing customer-oriented documentation for their output, so we even have to run through document review at the end of alpha testing. We even submit the fully tested product to Shipping; Development doesn't release the product, we do. So we also have to produce the manufacturing deliverables for Shipping so they can process orders, place an order and verify the order procedure as though we were Sales, ensure Shipping gets it delivered okay (a special order code stops it at the docks, where we pick it up), verify the packaging and deliverables to the customer, and install everything again to perform sanity testing.

I really don't know how networking is set up for Development because we don't use or manage their hosts, and they never get to touch ours. They can take care of their own headaches. Only our desktops and Solaris boxes in our cubicles on our separate QA subnet can get into the lab. Everyone else, including Development, is blocked from our lab. Now that I think about it, I'm not sure if any of our lab hosts have Internet access, so I'll have to check that. I'm pretty sure only those hosts on our QA subnet can get to the lab hosts, but those same hosts on the QA subnet do have Internet access, so there could be indirect access. Development does have a few hosts physically located in our lab (because of the physical security, the halon fire system, and the huge UPS), but they are not on our subnets; i.e., they are still outside the lab's firewall.

We need to know exactly what is in the box on which we test, and we can't do that if developers use our hosts. Sharing development hosts would lead to chaos and scheduling conflicts, especially when load testing, and would never let us recreate the same exact environment if we need to retest or requalify a prior version. In fact, we need some hosts NOT to be patched, or to be at different patch or service pack levels, so they represent the same platforms on which our customers may use our products, but those are all inside the lab. If developers used our hosts, we wouldn't even know the platform under which we are currently testing.

Except for our firewalled subnets in the lab, our desktops, which also double as client-side test hosts, aren't firewalled from the corporate backbone, but there is a bridge (we don't manage it, so it could be a layer 2 bridge or a layer 3 switch) to keep our traffic on our side when we do stress or capacity testing and need to include the desktops. We do have Solaris boxes in our cubicles, but those aren't used for stress testing, only for functionality testing, and only hosts in our QA group can connect to them (although personally I would prefer to move them into the lab and use Exceed X or Reflection X to get at them there).

Reply to
*Vanguard*

In article , do-not-email@reply-to-group says... [snip]

[snip]

Rather than quote all of it: it really sounds nice, a good way to do it. I didn't see that picture from your previous post about it, so thanks for sharing. It sounds like something we would have set up in your situation too.

Reply to
Leythos
