2 4500s vs 1 6500 w/dual supervisor cards

My company is contemplating Catalyst switches for our network (housed in a datacenter) upgrade. We are a mid-sized shop -- approximately 50 Sun servers (from Ultra 5s all the way up to Sun Fire V440s), but there is the potential of the number of servers doubling within the next year. Currently we have two Cisco 3500s into which all servers are connected. All servers have a 100 Mb connection into just one of the two switches (except for a Sun blade server, which gets a 1000 Mb connection). There is an EtherChannel (400 Mb) trunk between the 3500s. We would eventually like the capability to run 1000 Mb to all servers, with multipathing (each server having two connections to a switch).
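For what it's worth, the rough port budget this implies, as I figure it (the doubling and the two-links-per-server numbers are just the ones above; everything else is worst case):

# Rough port budget implied by the requirements above.  The doubling and the
# two-links-per-server figures come from the description; nothing else is assumed.

servers_now = 50
servers_next_year = servers_now * 2      # potential doubling within the next year
links_per_server = 2                     # multipathing: two connections per server
link_speed_gbps = 1                      # eventual goal of 1000 Mb to every server

ports_needed = servers_next_year * links_per_server
edge_bandwidth_gbps = ports_needed * link_speed_gbps

print(f"gigabit server ports needed : {ports_needed}")
print(f"aggregate edge bandwidth    : {edge_bandwidth_gbps} Gbps (worst case)")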

For the upgrade, the choice is currently between getting two 4500s, each with a single supervisor card, or one 6500 with dual supervisor cards. Robustness is more important than failover time in this decision, but failover time is still a consideration.

I'm leaning toward the two-4500 solution. I just like the robustness of having two separate chassis. However, I have heard the opinion that the 6500 solution might be more robust, since the likelihood of a supervisor failure is much greater than the likelihood of an entire chassis failure.

Does anyone have any thoughts on the 4500 solution vs. the 6500 solution? Any input is appreciated.

Thanks, Mark

Reply to
mop10011

In article , wrote:
:My company is contemplating Catalyst switches for our network (housed
:in a datacenter) upgrade.

:For the upgrade the choice is currently between getting two 4500s each
:with a single supervisor card or one 6500 with dual supervisor cards.
:Robustness is more important than failover time in this decision, but
:failover time is still a consideration.

:I'm leaning toward the two 4500s solution. I just like the robustness
:of having 2 different chassis. However, I have heard the opinion that
:the 6500 solution might be more robust, since the likelihood of a
:supervisor failure is much greater than the likelihood of an entire
:chassis failure.

I learned a fair bit by reading through Cisco's white paper, "IP Telephony: The Five Nines Story".

The document is oriented around telephony, but it has explicit figures on reliability and MTBF for various 650x related components.

The least reliable part they list that you might have in your configuration is the SUP1A-MSFC2 (one of their T1 cards is a bit less reliable). The chassis is about 9 times as reliable as the SUP1A. The only part they list as being more reliable than the chassis is the WS-G5484, which is the SX GBIC. (Hmmmm, we've had an average of one of those GBICs fail per year...)
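To put that 9:1 ratio in everyday terms, here's a rough sketch. The supervisor MTBF I plugged in below is a placeholder of my own, not the white paper's figure; only the 9x chassis/supervisor ratio comes from the paper:

import math

# Rough illustration of what a 9:1 MTBF ratio means over a year of runtime.
# The 120,000-hour supervisor MTBF is a made-up placeholder, NOT the paper's
# number; only the 9x ratio is taken from the post above.

HOURS_PER_YEAR = 8760

def annual_failure_probability(mtbf_hours: float) -> float:
    """P(at least one failure in a year), assuming an exponential failure model."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

sup_mtbf = 120_000            # hypothetical SUP1A-MSFC2 MTBF (placeholder)
chassis_mtbf = 9 * sup_mtbf   # "about 9 times as reliable as the SUP1A"

print(f"supervisor: {annual_failure_probability(sup_mtbf):.1%} chance of failing in a year")
print(f"chassis:    {annual_failure_probability(chassis_mtbf):.1%} chance of failing in a year")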

Reply to
Walter Roberson
[Thanks for that link, Walter, interesting... more at end]

To the OP:-

You don't say what sort of performance you need; however, the single-chassis solution may offer better performance, since there is no possibility of the link between the switches getting in the way.

Take care that you understand how EtherChannel does its load balancing if you need more than 1 Gb between switches.
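The usual gotcha is that the bundle balances per flow, not per packet, so one conversation never gets more than one member link. A toy sketch of the idea (the hash below is a stand-in, not Cisco's actual algorithm; real switches hash on MAC, IP or port fields depending on how they are configured):

from collections import Counter

# Toy model of why an EtherChannel is not one fat pipe: each flow is hashed
# onto ONE member link, so no single flow ever exceeds one link's bandwidth,
# and a handful of big flows can land unevenly.  Simplified stand-in hash,
# not Cisco's real algorithm.

LINKS = 4  # e.g. a 4 x 100 Mb bundle gives a "400 Mb" aggregate

def member_link(src: str, dst: str) -> int:
    """Pick a member link from the source/destination pair (simplified)."""
    return hash((src, dst)) % LINKS

flows = [
    ("server-a", "server-x"),
    ("server-a", "server-y"),
    ("server-b", "server-x"),
    ("server-c", "server-z"),
]

print(Counter(member_link(s, d) for s, d in flows))
# With only a few server-to-server flows, two of them can easily share one
# link while another sits idle; the "400 Mb" is an aggregate figure, not a
# guarantee for any single conversation.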

I feel it depends basically on how much you trust Cisco.

If you trust them, follow their advice and get the big iron.

If you feel happier with two nice warm chassis in your comms room then go that way. I guess you should consider what sort of maintenance you are going to buy.

[Back to 5x9]

"Cisco IOS Software GD Release 11.1 is 99.99986% reliable with a MTBF of 71,740 hours and that the non-GD version of Release 11.1 is 99.9996% reliable with an MTBF of 25,484 hours. Both individuals also showed that software maturity was a major factor in software reliability"

ROFL. 11.1 - IP telephony?? 6509?? !!!!!!!!!! Laughing so much it hurts, too, as well.

It says Copyright 2005 at the bottom so I assume that Cisco are seriously basing their estimation of IP telephony solution reliability on 11.1 software behaviour.

For those not so close to this, 11.1 has approximately zero 'features' when compared with the current software. This simplicity seems likely to me to result in better reliability than would be the case when the software has more features. Telephony!!!

Seemed entertaining to me, but what do I know?

Reply to
anybody43

The 4507 and 4510 support dual supervisors as well.

You need to remember that the 2 chassis are optimised for different uses -

6500s are designed for higher throughput, whereas 4500s are designed for client PC connections - for example, each slot on a 4500 is limited to 6 Gbps total, which isn't that much if you use 48-port 10/100/1000 blades and hook up some heavy-duty servers. 6500s can use the cheaper blades, so that a specific config has comparable limitations to a 4500, but they can go much faster - there are a lot of tradeoffs to watch for.
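Just to put numbers on that per-slot limit (the 6 Gbps figure is the one quoted above; the 48-port gigabit blade is just the example case):

# Quick oversubscription arithmetic for the per-slot limit mentioned above.
# The 6 Gbps per-slot figure is the one from the post; the 48-port gigabit
# blade is the example case.

ports_per_blade = 48
port_speed_gbps = 1          # gigabit-attached servers
slot_capacity_gbps = 6       # per-slot limit quoted for the 4500

demand_gbps = ports_per_blade * port_speed_gbps
print(f"worst-case demand : {demand_gbps} Gbps")
print(f"slot capacity     : {slot_capacity_gbps} Gbps")
print(f"oversubscription  : {demand_gbps / slot_capacity_gbps:.0f}:1")
# 8:1 is fine for client PCs, but tight for a blade full of busy gig servers.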

Dual-supervisor handover on fault is probably the most complex bit of any chassis switch - so in turn it is likely to be one of the most difficult bits of the code - and so may be contributing to the reliability figures.

I prefer dual chassis - I have seen several issues on various gear from Cisco, Nortel, Avaya and others where the complicated resilience causes more faults than it fixes.

One thing to watch: if you build in any resilience (even just dual power, which is the one thing that always seems to improve resilience without much pain), then you need some monitoring tools that actually watch for problems and get used.

Otherwise something goes wrong that the resilience "fixes", and it all falls over when the second fault hits - so all you did was spend more money and delay the crash for a while.

Reply to
stephen

Don't forget that, depending on how you design your network, you can put the second chassis in a different rack, room or data centre to gain some HA in the event of an environmental disaster. Also, make use of HSRP, failover NICs in servers, etc.

If you absolutely need the port density/speed, or any of the fancy blades (FWSM, IDSM-2, etc.), go with the 6500; otherwise the two 4500s are better IMHO.

M.

Reply to
Mark Lar

Dual chassis running HSRP is more reliable than a single chassis with dual supervisors. Looking simply at MTBF for the hardware components is only half of the failure scenario; the other half is software failure. I have over 30 6500s and 40 or 50 4500s in my network and have never had a hardware failure of a supervisor card or chassis on either platform. I have had blades go out occasionally, but I have more software problems than I would like. Both systems are very, very reliable, but software is much more likely to have problems than hardware.

With dual chassis, running HSRP is the most reliable solution you can put together. The 6500s do have much higher throughput than the 4500s, but with only a hundred servers or so, most running 100/Full, you should be fine with the 4500s; just use the new Sup V and dual power supplies (connected to two different PDUs (fuse panels)). By far, AC power supplies are the single largest failure component.
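A quick bit of arithmetic shows why the two-box approach tends to win. The availability figures below are placeholders chosen only to show the shape of the calculation, not measured numbers for either platform, and the model assumes the failover itself works perfectly:

# Why two independent boxes tend to beat one box with a redundant supervisor:
# redundancy inside a chassis still shares the chassis, backplane and software
# image as single points of failure.  The numbers are made-up placeholders for
# the arithmetic, not measured figures for any Cisco platform.

sup_avail = 0.9995       # hypothetical availability of one supervisor
chassis_avail = 0.99995  # hypothetical availability of the shared chassis/software

# One chassis, two supervisors: sups in parallel, chassis in series with them.
single_box = chassis_avail * (1 - (1 - sup_avail) ** 2)

# Two complete, independent chassis (HSRP between them): whole boxes in parallel.
whole_box = chassis_avail * sup_avail
dual_box = 1 - (1 - whole_box) ** 2

print(f"single chassis, dual sups : {single_box:.6f}")
print(f"dual chassis, HSRP        : {dual_box:.6f}")
# The dual-chassis figure is limited only by the chance of BOTH boxes being
# down at once, so it comes out ahead even with identical per-part numbers.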

A word of caution: you CANNOT run hardware-based load balancing (NIC teaming) using two NICs when they are connected to two different chassis. When running on the same chassis, you need to put them into a port-channel for it to work properly, and port-channels between chassis are not supported.

Scott

Reply to
thrill5
