Folks: Basically, I'm trying to understand the importance of the "root switch". Is it really necessary for the "high end" switch to be the root switch? I accidentally ran into a situation where I have an access switch as the root switch for one VLAN instead of the preferred core switch. Things seem to be working, and I'm stumped as to why we didn't see any network slowness. Since I'm in the middle of CCIE certification, I figured this was a good way to learn more about spanning tree.
I have a fairly simple network with potential for growth, and hence the need for a proper STP configuration. I'm trying to understand the worst-case scenario for a misconfigured root switch. I understand that I can force the root switch to be the core switch by changing the priority, and I also understand the use of root guard protection and other techniques.
Here is what my network looks like:
        Switch M (core switch and desired STP root; VLANs 20, 30, 40)
         /                      |                      \
Switch A (VLAN 20)     Switch B (VLAN 20)     Switch C (VLAN 30)
Now it turns out that because of a lower MAC address, switch A became my root for VLAN 20.
Here is what my STP diagram looks like for VLAN 20.
Switch A (root)
    DP
     |
    RP
Switch M (core)
    DP
     |
    RP
Switch B
Things seem to be working with switch A as the root switch. Where should I look for spanning tree misconfiguration/bottlenecks?
What is the worst-case scenario with having switch A as the STP root?
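For reference, here is a minimal sketch of forcing the core to be root and protecting that choice, in IOS syntax. The VLAN numbers come from your diagram; the interface range is made up for illustration, so substitute your actual downlinks:

```
! On switch M (the desired root): a lower priority wins the election
spanning-tree vlan 20,30,40 priority 4096
! or let IOS pick a value below the current root's priority:
! spanning-tree vlan 20,30,40 root primary

! On M's downlinks to the access switches: root guard blocks any
! superior BPDUs, so an access switch can never take over as root
interface range GigabitEthernet0/1 - 3
 spanning-tree guard root
```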
As far as I know, the reason Cisco recommends a higher-end (and centrally located) model as the root switch is just processing power. You also need to consider which version of STP you are using; some versions are more processor-intensive than others.

sh spanning-tree summary (will tell you which STP version/mode)
sh proc memory | include Span (will tell you how much memory is being used by STP)

Some STP modes aren't available on some switches; you should consider this as well.
Spanning tree tuning only really matters if you have loops; the whole point of spanning tree is to remove backup links from the topology until you have a fault. You choose a switch near the core of the network so that the key links that should carry traffic are forwarding, i.e. the default tree should closely echo the default network topology.
Since your topology doesn't show any loops (if I am reading it correctly), all links will be forwarding, so any switch acting as root gives the same set of links in the forwarding state.
Traditionally there was another reason: to stay inside the STP diameter (hop count) on all paths. Since most modern networks only have a couple of layers of switches, this is less of an issue.
Not a lot. You need a bigger network before it makes a practical difference.
One thing you haven't mentioned is how many spanning trees you are running; it sounds like at least two (one per VLAN).
FWIW, I prefer a design that doesn't use spanning tree to suppress loops at layer 2, since some faults (like a one-way fibre link) will end up with circulating broadcasts and a packet storm.
Better to have no spanning tree loops, resilience at layer 3, and spanning tree just sitting there to "fix" problems like accidentally patching two switches together....
Thanks to everyone who responded and explained the concept.
Stephen: I have 8 VLANs, with the core directly connected to all the access switches. About 3 VLANs had an incorrectly placed root because of a lower MAC address.
BTW, I didn't understand the last two paragraphs of your response. It looks like you have something really valuable to say, but I'm not reading it correctly. What do you mean by a "design that doesn't use spanning tree" and "resilience at layer 3"?
Classic spanning tree is inherently fragile, since the loss of BPDU packets arriving on a port is taken to mean that a suppressed loop in the bridged network should be turned on.
This works well in practice most of the time, until you get a bug, or a heavily overloaded device that can't send/receive often enough.
Or if you break a fibre or UTP link in one direction, spanning tree may cause the far end to start forwarding to "heal" the break. But if the other direction is still forwarding, your network turns into a broadcast packet replicator.
One way around this is not to have layer 2 loops at all. Routing can cope with dual paths forwarding in parallel, so two layer 3 switches in the centre provide resilience. You use the layer 3 switches to dice your network into multiple small layer 2 subnets (maybe only one chassis or stack of switches each).
Each VLAN / spanning tree is confined to a small subnet on a couple of switches, rather than smeared across a set of switches in parallel with all the other VLANs.
Think of a dual-centred star, with each edge node being a layer 2 switch with two uplinks, and the two hubs being layer 3 switches. A spanning tree covers the edge switch and its two uplinks but doesn't go anywhere else, so there are no loops to suppress.
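A sketch of what one hub's side of that design might look like, in IOS syntax. The interface names, VLAN number, and addresses are invented for illustration; the point is that the edge VLAN exists only on the edge switch and its two uplinks, and the two hubs share a resilient gateway:

```
! On each of the two layer 3 hub switches:
! the downlink carries only this edge switch's VLAN
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 20

! The SVI routes for the VLAN; HSRP gives the edge devices one
! virtual gateway address that survives the loss of either hub
interface Vlan20
 ip address 10.20.0.2 255.255.255.0
 standby 1 ip 10.20.0.1
 standby 1 priority 110
 standby 1 preempt
```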
Good spanning tree design goes much deeper than just taking the CPU power of a switch into consideration. It is important for traffic/bandwidth management and deterministic path selection. Once you are past 200 VLANs in an environment, you should seriously consider something other than PVST; something like MST would be much better suited.
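To illustrate why MST scales better, here is a rough IOS sketch that collapses many VLANs onto two spanning-tree instances instead of running one tree per VLAN. The region name, VLAN ranges, and priorities are placeholders:

```
! Hedged sketch: map VLANs onto a small number of MST instances
spanning-tree mode mst
spanning-tree mst configuration
 name CAMPUS
 revision 1
 instance 1 vlan 1-100
 instance 2 vlan 101-200

! One priority per instance, not per VLAN; the switch now runs
! two STP computations regardless of how many VLANs exist
spanning-tree mst 1 priority 4096
spanning-tree mst 2 priority 8192
```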
That being said, it looks like your network is small enough that it won't make much of a difference.
As far as setting up a layer 3 based hub/spoke design goes, it becomes impractical and unmanageable in a large environment. Don't go down this road if you expect significant growth. Plus, it will limit your connectivity at the access layer to only those VLANs for which that "spoke" was set up.
Not yet mentioned specifically (IIRC) is that traffic tends to flow from the source towards the root and then away from the root to the destination. Imagine a squirrel moving between two arbitrary branches of a tree. On average this holds, although there are many exceptions, e.g. the case of simple trees (cactus). With an access switch as root, core traffic can end up flowing through the access layer. Not good at all.
Hmmm. Cisco seem to disagree. The current design recommendations (for large environments) seem to be to do L3 right out to the access switches with no L2 loops _at_all_. Zero.
PVST, MST forget the whole lot:-)))
The Access Layer broadcast domains each consist of a VLAN on a single switch and its two uplinks.
This even applies to wireless roaming. I read recently that each user gets their own L3 (GRE?) tunnel back to the wireless concentration point, which allows roaming without changing the end station's IP address.
I am from the data center environment, so speaking from that perspective, I would never create one subnet per access layer switch and then route that traffic. It seems absolutely absurd and would require much more management. It wouldn't be the first time I have seen Cisco recommend something that isn't necessarily the best solution. Cisco's goal is to sell every access layer switch with an upgraded layer 3 image = more $$. Plus, it forces you onto an IOS-based access layer switch, furthering their mission to eliminate CatOS.
My question is: my data center uses redundant connections to redundant access layer switches. If the recommendation is to set up L3 links between the access layer and distribution, do I then have to set up HSRP between the access layer switches? Also, how do I move a server from one cabinet to another without changing its IP address?

And do I now have to set up new subnets for each access layer switch? Each cabinet gets its own unique subnet for application traffic, then a new subnet for backup traffic, yet another subnet for management traffic, and then another subnet to make the L3 connection to the distribution. Four unique subnets per cabinet, multiplied by a modest 100 cabinets per distribution, makes one hell of a nightmare for even a medium-sized environment.

Where do you put the access lists? Instead of maintaining an access list per VLAN on the distribution, you might still be able to get by with only doing access lists at the distribution, but now you would have to do one per access layer switch.
One thing I do agree with: Cisco would like you to eliminate the dependency on spanning tree altogether if they can get you to. That sounds great, but I think it is impractical in a redundant server farm environment. On the other hand, I have seen it work well in a desktop environment (L3 at the access layer, just like you are describing). Do you have a link to that design recommendation? I would be interested in seeing what they are recommending.
Transit networks are used all of the time; they are extensively used in large enterprise and service provider networks. However, you don't create transit networks for access devices... more on this in a moment.
> It wouldn't be the first time I have seen Cisco
If you tune the routing protocol timers, you can get sub-second reconvergence. You also don't have to deal with the arcane nature of STP, for example, issues surrounding asymmetrical L2 paths that can cause unicast flooding of traffic when certain paths become unavailable. Even with RSTP, certain path failure conditions can be challenging to deal with, and failover still takes several seconds (IIRC, 6 seconds to detect that a neighbor is no longer available).
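As a rough illustration of that timer tuning, here is an IOS sketch using OSPF. The process number, interface name, and timer values are placeholders and should be validated against your platform and scale before use:

```
! Hedged example: faster SPF reaction to topology changes
! (initial delay 10 ms, hold 100 ms, max wait 5000 ms)
router ospf 1
 timers throttle spf 10 100 5000

! Sub-second neighbor failure detection on the uplink:
! 1-second dead interval with 4 hellos sent per second
interface GigabitEthernet1/0/49
 ip ospf dead-interval minimal hello-multiplier 4
```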
No; you can also consider the Gateway Load Balancing Protocol (GLBP).
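A minimal GLBP sketch in IOS syntax, configured on each distribution switch's SVI; unlike HSRP, both switches forward traffic for the group instead of one standing idle. Addresses and group number are made up for illustration:

```
interface Vlan20
 ip address 10.20.0.2 255.255.255.0
 ! One virtual gateway IP; GLBP hands out different virtual MACs
 ! to different hosts so load is spread across both switches
 glbp 1 ip 10.20.0.1
 glbp 1 priority 110
 glbp 1 preempt
 glbp 1 load-balancing round-robin
```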
> Also, how do I move a server from
Again, you wouldn't create transit subnets for your edge devices. Transit networks are used to hand off between different network modules; backbone to distribution is a good example.
Cisco has been pushing the Enterprise Composite Network model. In your example, your server farm would sit in the "Server Farm Module". You would bring that into an appropriate switch, L2 or L3. Your farm could be on a single subnet if desired. The server farm module would interconnect to the Campus Backbone. The Campus Backbone could be composed of other modules such as "Building Distribution" and the like. Then, the backbone hands off traffic to other modules like the Internet module, VPN/remote access, WAN, etc, all via the "Edge Distribution".
Not that I've drunk the Cisco kool-aid too much, but I have found this expanded network modularity very useful when adding new services to my network, particularly remote access and VPN integration. While it may be good for Cisco in terms of more equipment possibilities, there is no rule that says you can't combine modules in certain equipment configurations. The model provides for the things you're questioning, such as ACL placement: in this model, you place ACLs between the appropriate modules and not on every interface everywhere. You have built-in context for your ACLs, since they are placed in relation to module function (and they're likely to be more compact). You don't have an uber-ACL that does everything on high-traffic interfaces.
The model allows you to scale the network without continuous redesign. It also lends itself to better security methods, management, and fault isolation/troubleshooting. Spanning tree has its place, but frankly, I think it's safe to say that it doesn't belong in the core. I think this is especially true today as enterprise networks are carrying time/delay sensitive traffic such as voice. A good example of why sub-second convergence is needed can be found in VoIP. Without extremely fast convergence, phone calls will get dropped. I believe this is the main reason why you're seeing the push towards a L3 core. Unless your traffic mix is email and websurfing, STP is just too darned slow! :-)
Again, I've found the Enterprise Composite Network model very useful in the real world.
Hey, send me a link! I want to see more. Without transit subnets, I'm not sure I understand the proposed setup.
I agree, that is why I thought we were talking about the access layer, not the core.
Let me say that we may be in favor of the same thing. I'm specifically speaking of the connection from my access layer switches into the distribution that IS my server farm, not how my server farm connects to the rest of my network. With several hundred servers all sharing the same subnet or VLAN, split across 60 access layer switches, I can't see the benefit in trying to separate them via L3. Although I'm always open to seeing new ideas! Is there a link to a drawing or proposed architecture for a server farm environment?
Excellent link, thank you. It looks like the Cisco recommendation for an L3 access layer is more geared to a campus model, not the data center, which is what I suspected. I will continue to read up, though!