# Large network question

• posted

I have a question regarding a network I am supposed to build and administer. It will be put in a building that has 23 floors, of which around 10-15 will be connected. Every connected floor will have 20-50 computers, separated by walls into groups. Each of the wall-separated groups includes 4, 8 or 10 computers, and there are at most 5 groups per floor. Devices like printers may also be connected, but that is not important right now. The computers will be used primarily for file-sharing, possibly some gaming (Counter-Strike-like games). Some servers might exist on the network (HTTP, FTP, mail server), but that is not a requirement.

I have built small networks with around 10-20 computers, but never a network of this size. I am sure there are problems I am not aware of, so I would like to explore this in depth. Does anybody have links to tutorials on the Net about how to build such a network? They should address problems that do not happen (or are not a problem) in small networks.

Here is a solution that I am thinking of. I would put enough 8-port (for small computer groups) or 12-port switches (for bigger computer groups) on each floor, depending on how much of that floor is to be connected (not every computer will be). On some floors I would put two or three 16- or 24-port switches, so each of the 10-15 floors would be connected to them directly. A scheme would be:

Legend:
  []  - group of 4, 8 or 10 computers
  n*  - n-port switch (e.g. 12* = 12-port switch)
  MFS - uplink to one of the main-floor 16- or 24-port switches

Example floor (with 26 computers):

```
[4]     [8]     [10]     [4]
 |       |       |        |
 8*     12*     12*      8*
 |       |       |        |
MFS-----/        |        |
 | \------------/         |
 \-----------------------/
```

All other floors would be connected in the same way, so the MFSs would have, say, 40-60 ports occupied in total. Some of the MFSs would be used as a central gathering point:
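The arithmetic behind that 40-60 figure can be sketched in a few lines. This is just a tally under the assumptions stated above (the example floor layout is typical, each group gets its own access switch, and each access switch uses one uplink to an MFS); the exact numbers per floor will of course vary.

```python
# Rough uplink tally for the proposed design. Assumptions (from the
# post above): groups of 4 get an 8-port switch, groups of 8 or 10
# get a 12-port switch, and each access switch takes one MFS port.

def switch_size(group):
    """Pick an 8-port switch for small groups, a 12-port for larger ones."""
    return 8 if group <= 4 else 12

def floor_uplinks(groups):
    """One uplink per access switch, i.e. one per group."""
    return len(groups)

example_floor = [4, 8, 10, 4]      # the 26-computer example floor
floors = 12                        # somewhere in the 10-15 range

print(f"computers on the example floor: {sum(example_floor)}")
print(f"switches: {[switch_size(g) for g in example_floor]}")
print(f"MFS ports used across {floors} such floors: "
      f"{floors * floor_uplinks(example_floor)}")
```

With 12 floors of four groups each, the uplinks alone occupy 48 MFS ports, which lands inside the 40-60 range quoted above.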

```
MFS1---\
       |
MFS2---+---(possibly)---Router----Internet
       |
MFS3---/
```

It would be possible to connect this network to a broadband Internet link, using a router connected to MFS2, as shown. What do you think about this? Any problems with it? I am concerned about the possibility of a broadcast storm. Is this a problem? How could it be avoided, e.g. by using routers? As I found out, using routers would prevent users from using file-sharing (at least on Windows machines). Is this true, and what is the solution?

Help is VERY appreciated.

• posted

IMO use managed switches so you can see what your clients are doing.

Plan for growth on each floor. Plan for secure space on each floor. If all you need is a switch, you can get small wall-mounted racks with lockable doors for office space. Put a small UPS in the cabinet, just in case.

Put a GbE switch in a closet on a middle floor and backhaul all floor switches to it with Gigabit Ethernet or fiber. This obviously requires floor switches with GbE interfaces. Consider a fiber ring if downtime is unacceptable.

Don't pick a switch that's so small that you need more than one on each of your floors on day 1.

A professional switch brand will be "stackable", so you don't need another GbE pull from a floor that has expanded and requires a second switch for growth.

Standardize on a family of switches.

It doesn't matter where you put your DSL connection from a performance standpoint. The bandwidth it uses is zip compared to the 100Mb/GbE pipe.

While you are doing your budget for the wiring, plan on WiFi, even if you don't tell your boss or actually install AP head ends. Survey the space for WiFi. Maybe it will just be conference rooms, but pull a CAT5 cable to each location that needs an AP. An AP location doesn't need 120VAC power because you can power it over the Ethernet cable via PoE, as needed.

Use professional APs. Cisco makes a very nice wall mounted unit with anti-tamper locks.

The cost of a few extra cable pulls for future WiFi (or whatever replaces it) will be zip if they are done as part of a big wiring project.

• posted

I think you need outside help. A network that big is going to need VLANs, and you'll have to sort out your STP.

Going from small 20-PC networks to serious projects like this is not a good idea. Going from my Cisco course experience, if you're working from scratch (i.e. no wiring), the budget is well into 5 figures.

Erm, 23 floors with 40-60 hosts per floor? Man, you are so not going to do Internet access with a consumer broadband link.

Does the building already have structured cabling?

Assuming the floors aren't that big (i.e. all PCs are within 50 metres of the wiring closet), you should be able to cable all the PCs back to a central point (the floor telephone room); in there go 1 or 2 24-port Cisco or ProCurve switches. Don't forget to allow for telephones.

These TR switches are connected (usually by fiber) by runs down the building core to the main TR, where you have your core switches, which will usually be bigger, gigabit versions of the floor switches. Your routers and all the other gubbins also go in this room.

The course books for CCNA semesters 1 & 2 from Cisco do give some guidelines.

• posted

snipped-for-privacy@yahoo.com wrote in part:

This sounds like 400+ computers. More a job for a network architect. There are a lot of questions about the client that matter: cost vs downtime sensitivity. Security?

This sounds like a three-level architecture with SoHo-level equipment. Cheap, but lots of work. Two-level would be more usual for this number: one or two 48-port switches on each floor, all tied into the top level (perhaps not on the main floor, since 23 floors is pushing 100m, and perhaps with fiber). Always leave spare ports.

You will have to estimate the usage. I doubt 3/512 would suffice for you, especially if there's a lot of outbound email.
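If "3/512" here means a 3Mbit/s down / 512kbit/s up consumer link (my reading, not stated in the thread), the per-user share is easy to work out, and it shows why such a link won't stretch:

```python
# Back-of-the-envelope per-user share of a consumer broadband link.
# Assumptions (mine, for illustration): a 3Mbit/512kbit link and
# ~400 simultaneous users.

down_kbit, up_kbit, users = 3000, 512, 400

per_user_down = down_kbit / users   # kbit/s each if everyone is active
per_user_up = up_kbit / users

print(f"downstream per user: {per_user_down:.1f} kbit/s")
print(f"upstream per user:   {per_user_up:.2f} kbit/s")
```

At 7.5 kbit/s down and about 1.3 kbit/s up per head, even modest outbound email queues will saturate the uplink.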

Broadcast storms are certainly a possibility if you have little control over the computers connected. Then you need managed switches that can throttle broadcasts.

-- Robert

• posted

I think what you need to do is work out the logical structure of this system in some detail before you work out the physical structure. With the number of users you seem to envision, trying to work by peer-to-peer file sharing, unless there is some very unusual circumstance that you have not explained, is going to be chaotic at best.

• posted

I missed that bit. What kind of "peer" networking are you doing? Windows file sharing doesn't scale, having departmental workgroups would suck, and you've got few to no tools for access control administration.

If you have *any* sort of inside file server, applications or user administration, you *really* need a Windows server or the Samba equivalent for control and labor saving. If you don't know why, you're in over your head.

• posted

is there existing wiring - if not the fit out may cost a lot more than your network. do you need other systems such as phone wiring?

try to work out where the wiring will go, and if you should pre-wire for moves inside the building.

make sure whoever puts it in labels everything (and gets the labels accurate)

how many laptops and other transient devices will be there? this can easily double the number of ports needed.

> Each of the wall-separated ...

don't leave this stuff out - printing can use more bandwidth than file sharing...

and if you need enough printers to cope with 15 floors, you are probably going to want some central stuff like servers to run it from.

> The computers will be primarily ...

you need to think about what network structure you need - should there be file sharing across all floors, or do you want to isolate the groups?

there are a bunch of best-practice bits in various places - try ... for some (expensive) ideas.

you should try to use gear designed for enterprise networks - LEDs to show you when ports work, GigE uplinks, IP management so you can log onto a switch to do diags without running up and down stairs all day.

more ports - you always need more ports. Try to use the same kit most places - that way you can keep a spare or 2, for changes or faults.

check what you can do with the wiring - it may be feasible to have 1 wiring closet with cables serving 2 or more floors to allow you to concentrate equipment in fewer places (although fault finding gets more complicated). If the building is already wired properly you probably have to use rack-mount switches - which usually come in 24-port lumps these days.

> A scheme would be:

you want as few layers of devices as practical - 1 or 2 central switches (2 lets you have 2 resilient central points), then 1 switch per floor. when someone complains about the cost of these switches - work it out in $ per PC.

you want spare cables between floors - running vertical stuff in a 24-floor building isn't something you do in a spare 10 minutes, even if the landlord will let you at the intermediate floors, since you are sharing the building.

central stuff connects to the central switch (for example any servers).

central stuff - virus scans / updates / email / name services / DNS / DHCP / ?

think about ongoing maintenance - assuming you standardise PCs and they get replaced every 3 years, you will swap one out every 2 days, and probably rebuild the disk or whatever once a year per device. 400 PCs means 2 or 3 staff just looking after the PCs - and that is if you standardise, keep consistent builds and do all the other things to cut the workload that the users are going to hate you for.
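The maintenance arithmetic above is easy to check. This sketch assumes 400 PCs, a 3-year cycle, and ~250 working days a year (the last figure is my assumption, not from the thread):

```python
# Hardware churn estimate for the fleet described above:
# 400 PCs replaced on a 3-year cycle, swaps done on working days only
# (250 working days/year is an illustrative assumption).

pcs = 400
cycle_years = 3
working_days_per_year = 250

replacements_per_year = pcs / cycle_years
days_between_swaps = working_days_per_year / replacements_per_year

print(f"{replacements_per_year:.0f} PCs replaced per year")
print(f"one swap roughly every {days_between_swaps:.1f} working days")
```

That comes out to about 133 replacements a year, one every couple of working days, which matches the "swap one out every 2 days" estimate.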

central virus, PC configs, Ghost for rebuild images, central windows patch distribution, backups, internal web - it all ends up on servers.....

you are going to want a fairly meaty Internet connection and some enterprise style hardware to drive it - 400 users will kill a consumer router.

probably you will need a firewall and a fairly fast internet feed to get reasonable performance.

> Any problems with this? I am concerned about the ...

could be - a lot depends on how / if you lock down the user devices. A worm outbreak could saturate your switch-only network.

> Is this a problem? How could this be ...

Best way to do this is to use a layer 3 switch as the central switch - that way you can dice the network into different subnets with only 1 device.

but - then this is a single point of failure which affects all users - so that, and duplicating critical central servers, is where you should spend any money you have for resilience...
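One concrete way to "dice the network into different subnets" at a layer 3 core is simply one subnet per floor. Python's standard library can sketch such an addressing plan; note the 10.0.0.0/16 base and per-floor /24s are illustrative assumptions of mine, not from the thread:

```python
# Hypothetical per-floor addressing plan for a layer 3 core switch:
# one /24 per floor carved out of 10.0.0.0/16. The base network and
# subnet sizes are assumptions for illustration only.
import ipaddress

base = ipaddress.ip_network("10.0.0.0/16")
subnets = list(base.subnets(new_prefix=24))   # 256 available /24s

floors = range(1, 16)                         # floors 1-15
plan = {f"floor {n}": subnets[n] for n in floors}

for floor, net in plan.items():
    # first usable host is a sensible default-gateway address
    # (the routed interface on the core switch)
    gateway = next(net.hosts())
    print(f"{floor}: {net}  gateway {gateway}")
```

With a plan like this, the core switch routes between the per-floor /24s, so a broadcast storm or worm on one floor stays inside that floor's broadcast domain.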

> As I found out, using routers would ...

No - but you would need to design the way the various bits talk to each other.

Having said that - central servers are much better when you get to this scale, since they don't have users running games on them to crash them, shut them down and so on.

File sharing will work fine across a layer 3 router (not a consumer box doing NAT), but Windows name broadcasting only works on a single subnet out of the box.

if you are a microsoft shop, then you probably need a central server or 2 to handle dynamic DNS / Active Directory and so on.

• posted

Everything is relative. To many here, this is not a large network.

Better way to do it, assuming this network is going to be in place for a few years, reliability is important, and you have a decent budget:

Get decent MANAGED switches on each floor, so that you have the ability to do Spanning Tree (don't want any loops bringing the net down!), and run fiber uplinks back to a small core switch. You might HAVE to run fiber anyway; you can't do Cat5 over 100m (about 300 ft) with Ethernet.

Get a decent core switch with layer 3 functionality. That way you can segment the floors onto their own subnets if you so desire, and do your routing at the core. A decent small core switch will more than handle the load. No need for the three-level SoHo design.

File sharing certainly CAN be done on Windows between networks, assuming you run a local DNS server that has all the PCs' names in it, or run a WINS server. If you are not familiar with these concepts, you are in over your head. Hire a pro.

• posted

I've read all the replies to date and they are all good, but there's one important question I have. What business has 15 floors and 400 computers, where running a game is a big consideration? Is this a game development company? :)

• posted

RR> This sounds like a three-level architecture with SoHo level equipment. Cheap, but lots of work. Two-level would be more usual for this number:

Out of interest, how deep can you nest this stuff? I knew the old 5-4-3 rule for 10base2 and (hubbed) 10baseT, but is there a limit to the number of (layer 2) switches data can hop through between nodes?

- Andy Ball

• posted
5-4-3 is only for non-switched LANs

and I think you mean a fibre vertical backbone

• posted

The 5-4-3 rule is wrong even for 10baseT. It applies only to coaxial and FOIRL, which is all that existed before 10baseT.

For 10baseT the limit is 6 segments and 5 repeaters, based on interframe gap loss. You can't exceed the distance limit without a lot of transceiver cables, rarely used on 10baseT nets.

-- glen

• posted

GH> For 10baseT the limit is 6 segments and 5 repeaters, based on interframe gap loss. You can't exceed the distance limit without a lot of transceiver cables, rarely used on 10baseT nets.

Thanks, I have written that down. What are the limits if switches are used in place of the hubs? How does 100baseTX differ in this regard?

- Andy Ball

• posted

If you are using switched 10BaseT, 100BaseTX or gigabit, and your inter-switch distances are within range (100 metres, 30m for some forms of copper gigabit), then the 5-4-3 and other similar rules no longer apply. Those rules are per segment, and in a fully switched network each link is its own segment.

For modern switched networks, the next limitation is the 802.1D limit on spanning tree diameter, which is spec'd at 7.

However, the spec'd diameter of 7 was based upon worst-case timings for segment lengths and switching times. Switches are much faster these days, so especially if your segment lengths are well below the maximums, in practice you could probably get away with a noticeably larger spanning tree diameter.
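The "diameter" here is the longest switch-to-switch path, in hops, across the whole topology. A quick breadth-first search tells you whether a given design stays inside the limit; the topology below (one core with floor switches hanging off it, as discussed earlier in the thread) is a made-up example:

```python
# Compute the diameter of a switch topology: the longest shortest-path
# (in hops) between any two switches, via BFS from every node.
from collections import deque

def diameter(adj):
    """adj maps each switch name to a list of directly connected switches."""
    best = 0
    for start in adj:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        best = max(best, max(dist.values()))
    return best

# Hypothetical two-level design: one core switch, 14 floor switches.
topology = {"core": [f"floor{i}" for i in range(1, 15)]}
for sw in topology["core"]:
    topology[sw] = ["core"]

print(f"diameter = {diameter(topology)} (limit 7)")
```

The two-level star has a diameter of 2 (any floor switch to any other is two hops via the core), so it sits comfortably inside the spec'd limit of 7; a daisy-chain of eight switches, by contrast, would exceed it.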

• posted

WR> For modern switched networks, the next limitation is the 802.1D limit on spanning tree diameter, which is spec'd at 7.

Up to seven switches between nodes, or seven segments with six switches between them?

- Andy Ball

• posted

If I understand correctly, no device actively participating in spanning tree is to be more than 6 hops away from any other participating device.

Each participating device might become the root device as far as the algorithm knows, so it is not "6 hops from the root": the information from each has to be able to reach each of the others within the time budget.

I did not phrase the above in terms of "switches" and "nodes" because "switch" and "spanning tree participant" are not always identical.

But as I indicated earlier, the -real- limit is the spanning tree timers, and 7 was derived from those timers based upon the time budget limit per switch and segment: if the situation is such that you are sure you can get the packets through in time, then in practice larger diameters are usable.

• posted

The 7-switch limit applies only if you use the default timers in Ethernet switches which implement the original Spanning Tree Protocol, described in 802.1D-1998. Which means that even with STP, the limit is not hard: it can be exceeded safely if you configure the switches properly. A larger spanning tree requires more time to stabilize, and that is why the timers need to be adjusted.

However, in 802.1D-2004 the text of Clause 8, which used to describe STP, has been removed, and its content superseded by Clause 17, which describes Rapid STP (RSTP), formerly 802.1w.

RSTP is no longer sensitive to those STP timer values. So although the figures in Clause 17 don't show more than 7 switches in a spanning tree, you won't find any definite limit mentioned, and I think this was quite deliberate. I think one of the goals of RSTP was to allow much bigger catenets to be created, not just to heal an existing catenet faster.

Bert

• posted

AM> ...you won't find any definite limit mentioned. And I think this was quite deliberate. I think one of the goals of RSTP was to allow for much bigger catenets to be created...

I think I preferred the old days of 5-4-3, when you could at least know what the rules were to break! :-)

- Andy Ball

• posted

You still do, they're just not quite so simple. If you don't have a copy of Charles Spurgeon's "The Ethernet Book" you might want to obtain one--he gives both simplified and calculated rules for the major variants that were available at the time the book went to press.

Cabling-Design.com Forums website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.