medical records, web server, & stateful firewall vs packet filter

Hi,

Looking for opinions about the following situation:

Our customer runs a medical imaging service. There are three components: a web server, an image server and a SQL server. The web server needs to be publicly accessible over the Internet, and it needs to reach the image and SQL servers directly (the image server link in particular needs to be >1 Gbps because the images are so large). The image and SQL servers must be reachable from the Internet only via VPN.

My plan so far is to bond multiple 1 Gbps NICs on the web and image servers and connect them via EtherChannel on a Cisco 3750. The 3750 would act as a packet filter between the servers; the SQL server would attach to it too. Then I would put a Cisco ASA 5510 between the 3750 and the Internet, to terminate VPN connections as well as provide stateful firewalling and perhaps some application filtering for the web server.
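For concreteness, here is a rough sketch of what I have in mind on the 3750 (interface numbers, VLANs and addresses are all made up, just to illustrate):

    ! Static 2-port EtherChannel towards the web server's bonded NICs
    interface range GigabitEthernet1/0/1 - 2
     description Bond to web server
     switchport access vlan 10
     channel-group 1 mode on
    !
    ! Permit only the flows the web server needs towards the back end
    ip access-list extended WEB-TO-BACKEND
     remark web server to SQL server (MS SQL)
     permit tcp host 10.0.10.5 host 10.0.20.5 eq 1433
     remark web server to image server
     permit tcp host 10.0.10.5 host 10.0.30.5 eq 445
     deny ip any any log
    !
    interface Vlan10
     ip access-group WEB-TO-BACKEND in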

My question at this point is: am I making a mistake by placing a stateful firewall between the web server and the Internet? Maybe a simple packet filter would be less prone to DoS attacks; I could put a Cisco 2800 there instead. I have always believed that a stateful firewall such as a PIX or ASA 5500 offers better overall protection than a packet filter (I need to limit access to the image and SQL servers too), but some feedback I've received recently has me questioning that assumption.

Anyone care to point me in the right direction?

TIA, Adam

Reply to
netlist

Don't know enough about your specific application to make a concrete recommendation, but would suggest you consider the following...

You are dealing with medical records, which means you must meet all HIPAA security requirements, which means that when you are penetrated, you will need to be able to stand up in court and demonstrate that you took adequate precautions. Keep in mind that the definition of adequate is a function of both time and the sensitivity/value of the information being protected.

Denial of service attacks only affect your revenue stream; they do not (typically) have the power to put you out of business permanently. They are also indistinguishable from being _VERY_ successful: a flood of legitimate users looks just like an attack.

It is highly likely that your image server and SQL server are far more vulnerable to attack than your web server, the more so to the extent that you focus your design attention on pure speed (filters and parameter verification are rarely considered part of performance tuning).

It is much easier to design security into an application than it is to add security to an inherently insecure design. The latter is often impossible even within a finite budget.

Note that I am assuming that you have already taken into account in your design the need to protect against network and server failure, disaster recovery, and other more common sources of down time.

Good luck and have fun!

Reply to
Vincent C Jones

Vincent,

Thanks for the reply. As an added gotcha, this customer wants to run NetBIOS over TCP/IP between the web and image servers, which I think is a big no-no even with a packet filter between them. I'm afraid that, as you say, the application is being developed for speed and that important security considerations may get neglected along the way.

The only way to get at the image/SQL servers would be via the web server, since the 2800 or ASA 5510 on the perimeter would deny access to them from the Internet except over a VPN connection. One reason I was considering the ASA 5510 was its application filtering: since an attacker could only reach the web server on ports 80/443, perhaps the ASA could identify and drop non-standard data streams that represent attempts to hijack the web server as a stepping stone to the image server.
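On the perimeter, the outside ACL would be roughly this (the public address is a placeholder):

    ! ASA 5510 outside interface: web server reachable on 80/443 only
    access-list OUTSIDE_IN extended permit tcp any host 203.0.113.10 eq www
    access-list OUTSIDE_IN extended permit tcp any host 203.0.113.10 eq https
    access-list OUTSIDE_IN extended deny ip any any
    access-group OUTSIDE_IN in interface outside

VPN traffic terminating on the ASA bypasses the interface ACL by default (sysopt connection permit-vpn), which is how the image/SQL access would come in.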

I appreciate your suggestion to focus on designing security into the application. The customer is working on that right now; if you have any tips, they're welcome (I see on your website that you're familiar with the limitations of the TCP/IP and NetBIOS architectures).

Best regards, Adam

Reply to
netlist

If the images go "out" to someone, then you need Gbps Internet capacity as well - so the big piece is a hosting problem rather than a networking one.

/Cop-out clause/ - this is free advice from someone you don't know over the Internet, so you are getting exactly what you haven't paid for...

I work for a company that, among other things, hosts big web sites (not my area - I worry more about the electronic plumbing).

You don't mention just how critical this is or how much you think you can spend, so this may be way over the top (or not).

The standard paranoia architecture is two "layers" of servers segregated by firewalls:

Internet -> f/wall1 -> load balancers -> web servers -> f/wall2 -> sets of backend servers

(with a few variations)

Add parallel kit to reach the required resilience/performance level.

You also need other connections to support this: links to backup servers (via the firewalls again - someone might hack into your backups rather than the production kit if that is easier) and management/control/telemetry networks (via the firewalls for similar reasons). If it helps, our hosting designers cheered when Cisco started supporting more than 6 to 8 interfaces on a PIX.

The two firewall layers should run different software - the idea is that a major attack has to get through two different firewalls to reach the back-end servers. If you are really paranoid, you apply similar logic to servers and management tools.

Back-end servers may need to be firewalled from each other as well as from the front end - it depends how much compartmentation you need.

If the firewalls come from different suppliers and run different software, they are unlikely to share an identical vulnerability, and since you then have to configure them in different ways, it is harder to leave the same "hole" in both configs...

Firewalls are about protecting data; intrusion detection/prevention is about knowing when they aren't working properly.

If you are serious about this, you probably need a similar smaller-scale setup for new changes, development and so on. It also means you can do some testing without touching the live data set. And then there is testing before it goes live, so you can prove it is secure (or, more likely, the first time that it isn't)...

None of this is useful unless you look after the installation once it is live: installing firewalls, IDS etc. only helps if the operations team can handle the information they get, keep the systems up to date and generally give the installation ongoing care and attention.

I would ignore what you need at Layer 2 until you decide which expensive bits you need to glue everything together, since Layer 2/3 switching is (relatively) cheap.

For example, the obvious choice for a Cisco firewall running at N x Gbps is a Catalyst 6500 Firewall Services Module (last time I checked the module without software was $35k list, and you might want more than one) - if you go that way, put your Ethernet ports in the same box. The FWSM can do around 5 Gbps on a good day, handle up to 100 logical interfaces, and pretend to be multiple firewalls - which helps keep the rulesets manageable. You can get similar throughput on Nokia and NetScreen/Juniper boxes.
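As an illustration only (VLAN numbers and context names invented), carving the FWSM into virtual firewalls looks something like:

    ! FWSM in multiple context mode - each context is its own firewall
    context web-tier
      allocate-interface Vlan100
      allocate-interface Vlan101
      config-url disk:/web-tier.cfg
    !
    context backend
      allocate-interface Vlan200
      allocate-interface Vlan201
      config-url disk:/backend.cfg

Each context then carries its own interfaces and ruleset, which is what keeps the rulesets manageable.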

Also, N x Gbps parallel pipes get painful very quickly as your bandwidth needs grow, so maybe you should think about 10G plumbing on the trunks. And N x Gbps may not be feasible on a single server, so maybe load balancers and multiple parallel servers to get the throughput (and/or any replication needed for resilience)? Needing to connect up a number of 10G ports again pushes you towards the Catalyst 6500...

Mind you, the number of systems that really need to go at Gbps speeds is a small fraction of the number where somebody says "and of course it must have multiple Gbps of throughput"...

Packet filters are not best practice unless you have a brain the size of a planet - and even then, someone has to maintain them later... Anyway, you get much better logging on a real firewall for when you have to work out what happened. (This is personal opinion - there are people/bigots of every persuasion in this area of networking, just like in all the others.)
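To make the distinction concrete, here is roughly what it looks like on an IOS router like the 2800 you mention (interface name is hypothetical). Plain packet filtering is just the ACL; adding CBAC inspection is what makes it stateful:

    ! Inbound: drop everything not explicitly opened
    ip access-list extended BLOCK-INBOUND
     deny ip any any log
    !
    ! Stateful inspection (CBAC): outbound sessions punch temporary
    ! return holes through the inbound ACL and get properly logged
    ip inspect name STATEFUL tcp
    ip inspect name STATEFUL udp
    !
    interface FastEthernet0/0
     description Internet-facing interface
     ip access-group BLOCK-INBOUND in
     ip inspect STATEFUL out

Without the two inspect statements you are back to maintaining "established"-style ACL entries by hand, with much poorer logging.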

Assuming you are in the US - HIPAA?

I have no idea what that does to your requirements, but Vincent made some good points about having to prove you worked to best practice rather than just doing it. So, if this is the first time you will do this, I would want some professional/expert help - if nothing else it spreads the blame (or, if you are lucky, provides assurance that you have chosen an appropriate level of security)...

I suspect you may have to worry about a periodic security audit at some point as well, so everything and its dog needs to be documented - formal testing, change control, paper trails and so on.

If all this sounds expensive, it is - the rule of thumb is that "real" security always costs a lot more than you expect (this comes up whenever we need another person for the in-house security team - they believe it). The side effects (inconvenience, vetting, separation of the security team from the server/data ops teams and so on) increase the cost even more. So you may want to find someone to do some of that for you.

I would split the SQL and image servers into different firewall zones. You might be able to do less image protection (or you would need a much more expensive firewall) - but the SQL bandwidth will be relatively low.
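On an ASA that just means giving each tier its own interface, security level and ruleset (numbers and addresses are arbitrary):

    interface Ethernet0/1
     nameif image
     security-level 40
     ip address 10.0.30.1 255.255.255.0
    !
    interface Ethernet0/2
     nameif sql
     security-level 50
     ip address 10.0.20.1 255.255.255.0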

You also need to think about what you are protecting - does an image help a data thief without the information from SQL telling them what they are looking at? I.e. does a penetration need to reach both back-end server types, or just one?

Finally, if security is important, you don't let the software engineers dictate what flows across the architecture - the security team tells them what they are allowed to push around, and they work inside that.

The comments about NetBIOS over IP make me think no one has even thought about security yet.

The bit you haven't mentioned is the VPN. If you want encrypted data and access at Gbps throughput, that is another big problem.

If the VPN isn't sized for that kind of throughput (or the "customer end" doesn't have that kind of connection speed), then you don't need such high speeds between the servers - the VPN will be the bottleneck. A tunnel that tops out in the low hundreds of Mbps will never fill even one gigabit link between the servers.

Reply to
stephen

Stephen,

Thanks very much for the extensive feedback - it is much appreciated.

Best, Adam

Reply to
netlist

Not a black and white issue... but it does severely limit the extent to which the network can reinforce application security.

This would appear to be the crux of the challenge. It is amazing how often people have to learn the hard way how difficult it is to retrofit security into an existing application (and even trying rests on the unproven assumption that security can be retrofitted at finite cost in finite time).

This does not address a wide range of attack vectors, such as malicious SQL carried inside perfectly valid HTTP(S) requests. You seem to be assuming that as long as the web server itself is not compromised, there is no way to use it to attack the database servers. That too has been proven an invalid assumption in the typical web-server-with-database-back-end design: a hostile query string in a well-formed HTTPS request sails straight through any port-based policy.

Get a competent security person onto the design team, the sooner the better. The emphasis must be on competent. Management buy-in to the importance of security is equally essential; otherwise the normal concern for features and schedule/cost will override any real security concerns.

Good luck and have fun! And if necessary, have your resume ready because you may find it necessary to resign (or walk if you're a consultant) rather than sacrifice your reputation by endorsing inappropriate security.

Reply to
Vincent C Jones
