Cisco-recommended switch setup

Hi All,

I have heard that Cisco does not recommend daisy-chaining Catalyst switches, but I don't know if that applies only to crossover-cable links or what.

We've got 3548-XL-EN switches with the GBIC modules, and I've been daisy-chaining them with those, but I still don't know if that's the proper setup.

For example, suppose you have one switch that all of your servers are patched into, but six other switches going out to people's workstations that need connectivity... what is the recommended way to connect all of those?

(Obviously, if you only have 2 GBIC modules, you can only connect directly to 2 other switches. So if you daisy-chain and a switch in the middle dies, then you're screwed. But that's why we have spares, I guess...)

I don't know much about fiber technology... is this where something like a fiber patch panel would come into the picture?

Any links for reading material on this would be appreciated as well. (Yes, I've searched Google but I don't know that I'm phrasing the question correctly) :-)

Thanks!

Reply to
Mike W.

The proper setup is the setup that works for you. You would make Cisco very happy if you bought a gigabit core switch to link all your edge switches together, but if what you've got now delivers what you need, then why mess?

It depends what kind of load your network experiences.

You probably don't want to daisy chain. If you can, bring all your switches together into a gigabit switch.

Fibre is only necessary in specific circumstances where copper won't do - for example, runs over 100m, runs that share a duct with power cabling and runs that pass through electrically harsh environments.

'core switch', 'edge switch', 'access switch' and 'distribution switch' spring to mind. These are components of what Cisco call the 'enterprise composite model'.

Reply to
alexd

Wholeheartedly agree. You design around your requirements and your budget, but two strongly recommended practices are to avoid daisy-chaining (connect all the switches to a central 'core' switch) and to ensure that you have some kind of tiered bandwidth model (10 or 100 Mb/s to desktops, 100 Mb/s or gigabit to servers, and gigabit or multi-gigabit to the core and backbone).

Optimally you would have two cores to which all switches are homed, but in this case that kind of redundancy may be overkill and probably not feasible from a budget perspective. At the end of the day, you have to ask yourself or your client: how important is redundancy? What kind of business loss would a network outage yield? If the business is a processing center where technology directly impacts income, then perhaps two cores and full redundancy are key. If it's a small business and a loss of email/file/print for a few days would be horrible but wouldn't directly impact income, then perhaps it's a different story.

At minimum, I would encourage at least one core switch, with a trunk back to the core from each edge switch. A cheap solution could be getting an extra 3560 and running gigabit copper to save on hardware and cable costs. Or, if the cost isn't that bad, get two and redundantly connect everything. You have many options, but I would get away from daisy-chaining at all costs... the rest is up to you and your requirements/constraints.
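The hub-and-spoke design described above would typically be implemented as a trunk uplink from each edge switch back to the core. A minimal sketch, in Catalyst IOS syntax, of what that uplink might look like on one edge switch (the interface name, VLANs, and description are assumptions for illustration, not taken from this thread; verify the exact trunking commands against your platform's configuration guide):

```
! Hypothetical gigabit uplink from an edge switch to the core.
! Gi0/1 is assumed to be the GBIC port facing the core switch.
interface GigabitEthernet0/1
 description Trunk uplink to core switch
 switchport trunk encapsulation dot1q
 switchport mode trunk
```

With two cores, each edge switch would get a second uplink configured the same way; spanning tree then blocks one of the redundant paths until the active one fails.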
Reply to
Trendkill

I agree with the previous posters that you should avoid daisy-chaining. My advice would be to get two core switches; it will help with troubleshooting, upgrading, and adding more switches if the need arises. Generally speaking, if the switches are co-located, you don't need fiber, but in your case I would recommend fiber uplinks, since the 3548 only has two gigabit ports and they are fiber (GBIC). A 3508 could be a cheap solution (I believe it has reached end-of-sale). Here are a couple of links to design guides on Cisco's website:

formatting link
Good luck

Reply to
java123

Cabling-Design.com Forums website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.