DISCUSSION: Cisco 3-tier design: CORE

I wanted to try to prompt some discussion about the 3-tier design often implemented by Cisco VARs. I've come across a 2+ year implementation that has devolved into a bit of a mess, and I'm trying to figure out some finer points. Some discussion might help me think through what should and shouldn't be.

There are multiple areas that I'd like to flesh out but I'd like to just tackle the core in this thread.

The basic concept is a core (for this customer, two 6509s with MSFCs) and several distribution blocks (each typically a pair of 6509/MSFCs as well). An access unit is typically a 4006 with no routing capability, and the end-user switches hang off the 4006.

For this customer, the core is two 6509s with routing modules. The design threw me at first, but I think I understand a reason for doing it that way. Each core has its own set of VLANs, with a separate VLAN for each distribution block.

Each distribution block has one gig fiber run from each of its 6509 DB units -- one into VLAN A, one into VLAN B. Because each DB run is in a different VLAN, HSRP cannot run on the core-facing interfaces. So each core carries independent routes, doubling the size of the routing tables: there are two routes for every location, one through VLAN A and one through VLAN B. This makes the EIGRP table very large and not terribly readable:

network 10.128.0.0 via 10.0.0.1
network 10.128.0.0 via 10.0.1.1
network 10.129.0.0 via 10.0.0.1
network 10.129.0.0 via 10.0.1.1
... and so on for six printed pages ...

I thought this design was incorrect at first, but I think it was the easiest/only way to provide 2 Gb of load-balanced capacity across the cores. When I do traceroutes, the paths round-robin between the two cores.

I've always read that the core design was for fast switching. Cisco materials make "switching" a confusing word because it can mean layer 2 switching or fast layer 3 routing; when I read "switching" I always think within a VLAN. So my assumption had always been that the cores should share a single VLAN, with every DB having interfaces on that one VLAN, which would allow HSRP and the collapsing of multiple physical paths behind a simplified routing table. This is already part of the design construct from the distribution layer to the access layer, so I thought it would occur at this layer as well. My thinking was that each DB would have IPs on this core VLAN but remain its own VTP domain, and the core wouldn't participate in any VTP. All the DBs would share one VLAN for the cores to switch across, and since the links wouldn't be trunks, spanning tree wouldn't be an issue. Maybe that is bad design -- I'm not sure.

For the network here, they instead configure each fiber interface as its own VLAN carrying a point-to-point network, like you would a WAN link:

Fiber run (DB1 A to CORE A)
CORE A Port 1/1 = 10.0.0.1/30
DB1 A Port 1/1 = 10.0.0.2/30

Fiber run (DB1 B to CORE B)
CORE B Port 2/1 = 10.0.0.5/30
DB1 B Port 2/1 = 10.0.0.6/30

Fiber run (DB2 A to CORE A)
CORE A Port 1/2 = 10.0.0.9/30
DB2 A Port 1/2 = 10.0.0.10/30

Fiber run (DB2 B to CORE B)
CORE B Port 2/2 = 10.0.0.13/30
DB2 B Port 2/2 = 10.0.0.14/30
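In other words, each core-facing link ends up looking roughly like this on the MSFCs. This is only a sketch from memory -- the VLAN number and EIGRP AS are my guesses, not their actual config:

  ! CORE A -- fiber port 1/1 sits alone in an otherwise-unused VLAN (say 101),
  ! and the MSFC puts the /30 on the matching SVI
  interface Vlan101
   description P2P to DB1-A port 1/1
   ip address 10.0.0.1 255.255.255.252
  !
  router eigrp 100
   network 10.0.0.0
   no auto-summary

  ! DB1 A side is the mirror image
  interface Vlan101
   description P2P to CORE-A port 1/1
   ip address 10.0.0.2 255.255.255.252

Every DB-to-core fiber gets its own VLAN and /30 like that, which is where the doubled-up routes come from.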

So you get dual routes everywhere, bigger routing tables, and diverse round-robin pathing. But in this case the load sits almost entirely on one of the two cores -- despite the round-robin paths. I believe this is just one of many issues here.

I'm wondering: is this the only/best design -- point-to-point links with the core doing layer 3 routing between them? I've read several materials on this design, but none go into the specifics of the exact IP scheme, interface, routing, and switching configurations. I need to understand how this is supposed to work so we can architect a path back to a functional network.

If a customer had a requirement for 2 Gb of capacity from DB to core, is there any other way to load balance across two diverse links? HSRP certainly couldn't be used, since one IP/6509/fiber must be standby. It seems like a doubly-large routing table is unavoidable. Granted, they could still do a single VLAN across both cores, but the advantage of being able to use HSRP is lost if you require 2 Gb of capacity. BTW, this customer is so far away from needing that it isn't funny! I don't think you can EtherChannel (bind ports) across two boxes. Maybe you can?

A single VLAN also allows direct connection of redundant IP devices (HSRP, load-balancing VIPs, multipathing, PIX active failover), although we could probably debate whether a device should be directly connected to the core at all. The server farm here is behind a firewall, and the firewall directly connects to the cores. However, because the redundant PIXes don't share a VLAN from core to core, the failover PIX was disconnected. No one understood that you can't connect IP-redundancy schemes to different VLANs. They then tried to do the same thing with BigIPs.

If a redundant 1 Gb path is enough, would a single VLAN across both cores be worse? I would think HSRP would have a quick recovery time and would eliminate the need for 100% of the routes to fail over when a core fails, or for 100% of the routes to a DB to fail over when a DB dies. Considering the six-page routing table (another issue here), I would think convergence would take longer than HSRP failover, but I can't know for sure.
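For the record, the sort of thing I was picturing is below -- just a sketch with made-up addressing (a shared VLAN 10 and 10.0.10.0/24) and HSRP timers tightened from the defaults, not anything this customer actually runs:

  ! CORE A -- SVI on the single shared core VLAN, HSRP active
  interface Vlan10
   ip address 10.0.10.2 255.255.255.0
   standby 10 ip 10.0.10.1
   standby 10 priority 110
   standby 10 preempt
   standby 10 timers 1 3
  !
  ! CORE B -- same subnet and group, default priority
  interface Vlan10
   ip address 10.0.10.3 255.255.255.0
   standby 10 ip 10.0.10.1
   standby 10 preempt
   standby 10 timers 1 3
  !
  ! Each DB gets an address in the same /24. Its core-facing ports would be
  ! plain access ports in VLAN 10 (no trunks), and it could either run EIGRP
  ! on the shared subnet or just point a default route at 10.0.10.1.
  interface Vlan10
   ip address 10.0.10.11 255.255.255.0

With 1-second hellos and a 3-second hold time, failover should be a few seconds at worst -- faster than waiting on a six-page EIGRP table to reconverge, if I understand the defaults right.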

Any discussion points regarding design and configuration of the Core are welcome.

DiGiTAL_ViNYL (no email)

just so I understand this better:

DB           CORE           DB
 |-----------|(a)--------|
4k--------- L3 ---------4k
 |-----------|(b)--------|

A couple of things:
(1) Doing p2p VLANs is not a bad thing.
(2) EIGRP -- well, simply put, that was the easiest thing to do.. no thought needed there..
(3) This is fixable..

Thoughts:
- You should use channeling.
- HSRP from the client to the DB switches/routers.
- EIGRP might be OK still, but I prefer to have better policy control. You should use another routing protocol for segmentation if that is what you want to accomplish (and/or reduction of routing tables).
- Remember, routing protocols are just a means to an end.. so believe it or not, BGP is a good fit if you want to play nice with the routing table. Yes, BGP in the enterprise.. not really a bad solution (actually most of the features coming out are geared around doing this in the enterprise).

OK... so my 2 cents (with the assumption the above is correct):

  1. Run 2-gig channeled uplinks to the core from the DB to meet your 2-gig requirement.. plan for growth and go to 4 gig if you have the capacity.

- channel the 2 or 4 gig uplinks to the core, keeping the L3 p2p (see the sketch below)
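Something along these lines, assuming native IOS on the 6500s -- port numbers, addressing and the PAgP mode are just placeholders:

  ! DB1-A: bundle two gig fibers to CORE A into one routed 2-gig channel
  interface Port-channel1
   description 2G L3 channel to CORE-A
   no switchport
   ip address 10.0.0.2 255.255.255.252
  !
  interface range GigabitEthernet1/1 - 2
   no switchport
   no ip address
   channel-group 1 mode desirable
  !
  ! CORE A mirrors this with 10.0.0.1/30 on its own Port-channel1

The /30 p2p stays; you just end up with 2 gig behind one routed interface instead of two separate equal-cost paths in the table.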

  2. Cross-connect the DB switches (optional):

- example: DBa to CoreA and DBb to CoreB (multi-gig channel), etc.. Note: this is just my preference because I like to have multipathing from a single box. But remember, your gig uplinks to the core should be a 2-gig+ channel. FYI: BGP multipathing through diverse paths is supported... but that's a long topic.

  3. Run BGP (DB1(a/b) is AS x, the core is AS y, DB2(a/b) is AS z) -- rough config sketch after these bullets:

- Peering to loopbacks would require an IGP, but in your case it is not really needed.. so just do BGP peers to the physical/VLAN interfaces.

- keep EIGRP running between the two DB switches (IGP)

- advertise specific networks through BGP to the core

- if you want to really reduce the table, just ship a default route from the core as an option

- iBGP peer between the two core devices, leaving EIGRP there
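Rough outline of the BGP piece, with assumed peer addresses -- an outline, not a paste-ready config:

  ! DB1-A (AS 65000) -- eBGP to CORE A across the existing p2p link
  router bgp 65000
   neighbor 10.0.0.1 remote-as 65001
   network 10.128.0.0 mask 255.255.0.0
  !
  ! CORE A (AS 65001) -- eBGP to the DB, iBGP to CORE B (peer address assumed),
  ! shipping a default down instead of the full table
  router bgp 65001
   neighbor 10.0.0.2 remote-as 65000
   neighbor 10.0.0.2 default-originate
   neighbor 10.1.1.2 remote-as 65001

CORE B mirrors CORE A, and DB2(a/b) does the same thing in its own AS.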

  4. OK.. this is getting long and all is predicated on BGP... if you want to ping me, do so at michael.s.wiley @gmail (I think Google filters emails on this list.. so I 'white spaced' the email).

Design (1)- my preference

DB1(a)(AS 65000) --- CORE(AS 65001) --- DB2(a)(AS 65002)
      |                    |                   |
DB1(b)(AS 65000) --- CORE(AS 65001) --- DB2(b)(AS 65002)

Design (2)

- cross-connect VLANs from the DBs to the core (as you stated), with a trunk on the core for all applicable VLANs. You can use HSRP as well at that point, since you would have the 2-gig+ channeled uplink..
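Again just a sketch, with assumed VLAN numbers and addressing:

  ! CORE A: L2 channel toward DB1, trunking the applicable VLANs
  ! (member gig ports join with 'channel-group 11 mode desirable')
  interface Port-channel11
   switchport
   switchport trunk encapsulation dot1q
   switchport trunk allowed vlan 10,20
   switchport mode trunk
  !
  ! HSRP on the core SVI for one of those VLANs; CORE B runs the same group
  ! with a lower priority
  interface Vlan20
   ip address 10.20.0.2 255.255.255.0
   standby 20 ip 10.20.0.1
   standby 20 preempt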

Rgrds,
Michael Wiley
CCIE 13271

Redundant diagrams can get really confusing. This MIGHT be clearer (LOL):

CORE-A----------DB(A)\
  |    \       /  |   \
  |     \     /   |    \
  |      \   /    |     ====4k
  |       \ /     |    /
  |        X      |   /
  |       / \     |  /
CORE-B----------DB(B)/

This is the model of a single Distribution Block being attached to the core. This same model would be replicated out for as many DistBlocks as you needed to add.

It is a textbook Cisco design; however, I question how well the finer points have been implemented. Each core has a fiber to each DB, the cores have a connection to each other, as do the DBs, and the 4k access layer is connected to each DB.

The DB block defines a VTP domain and should do the L3 routing for every VLAN defined within that domain. The core (at least in the network I am working on) is doing L3 routing between the different DBs attached to it.

If you are talking about the DB to access layer, that is already done, but I'm trying to focus solely on core design, since there are other points to discuss when looking at DB and ACCESS layer design.

Well, they don't have that 2 Gb requirement here. You see, I'm trying to reverse engineer this mess back to what a Cisco VAR and CCIE designed. Since they didn't create a single core VLAN, you CAN'T do HSRP on the DB-to-core interfaces -- no choice there. I tried to come up with a reason why they would make that design choice, and load balancing across the two links was all I could come up with. The highest traffic level I've seen across the gig links is about 125 Mbps, and it is my suspicion that 80 Mbps of that traffic was misrouted and never should have left the distribution block. This customer has no need for more than 1 Gb. In fact, the only time I saw a 450 Mbps load on a single fiber was during a spanning tree loop.

I can't channel across multiple boxes, can I? I don't see how that could work or be configured. If ...

CORE A --- gig fiber to --- DB A

CORE A --- gig fiber to --- DB B

CORE B --- gig fiber to --- DB A

CORE B --- gig fiber to --- DB B

I can't channel those. I could of course create multiple fiber runs to channel.

CORE A --- gig fiber to --- DB A
CORE A --- gig fiber to --- DB A

CORE A --- gig fiber to --- DB B
CORE A --- gig fiber to --- DB B

CORE B --- gig fiber to --- DB A
CORE B --- gig fiber to --- DB A

CORE B --- gig fiber to --- DB B
CORE B --- gig fiber to --- DB B

Cross connects are there already.

Do remember this is a model of just a single distribution block. They have 3 or 4 of these and will eventually break a super-sized block down into more manageable blocks later. This is a 12,000-node network with 5 locations.

How can you use HSRP if all the fiber interfaces are configured as point-to-points? There is no common subnet to create an HSRP VIP on. That's the critical decision someone made that I'm trying to decipher. Why create two point-to-point subnets per DB, per core, when it negates the use of HSRP? The only reason I can see is to get the benefit of load balancing. But the complexity of the routing tables, and the confusion for lesser engineers trying to troubleshoot it, isn't worth it in my opinion.

DiGiTAL_ViNYL (no email)

First let me give you some Cisco links that I found very informative on this subject matter. You may have read some of these already.

formatting link
formatting link
formatting link
formatting link
formatting link

I don't fully understand your network configuration; I would need to see a network diagram and some device configurations. That said, I have a few comments.

The primary benefit of equal-cost routes is the redundancy and quick convergence after a failure: no recalculation is required when one of the routes goes down. Load balancing is also a benefit, but it is only useful if you need the bandwidth (many networks do not).
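For what it's worth, most of that behavior is just defaults; here is a sketch of the relevant knobs (the EIGRP AS and interface name are assumptions on my part):

  ip cef
  !
  router eigrp 100
   ! install up to two equal-cost routes per destination
   maximum-paths 2
  !
  ! per-destination (flow-based) load sharing is the CEF default and avoids
  ! out-of-order packets; it can be set explicitly on the core-facing SVIs
  interface Vlan101
   ip load-sharing per-destination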

The terms routing and switching no longer refer to the logical process being performed; instead, they refer to the physical hardware doing the work. Routers use a CPU and switches use ASICs (application-specific integrated circuits). Of course, switching is many times faster than routing.

If you need a solution like HSRP but want load balancing, look at GLBP. Note that there are hardware requirements and limitations; only certain supervisors support this feature. (I do not recommend HSRP in the core. Most likely it will be used on the access side of the distribution switches.)
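A minimal GLBP sketch, assuming a shared VLAN between the pair, made-up addressing, and a supervisor/IOS that supports it:

  ! First router -- GLBP presents one virtual gateway IP, but both peers forward
  interface Vlan10
   ip address 10.0.10.2 255.255.255.0
   glbp 10 ip 10.0.10.1
   glbp 10 priority 110
   glbp 10 preempt
   glbp 10 load-balancing round-robin
  !
  ! Second router -- same group, default priority
  interface Vlan10
   ip address 10.0.10.3 255.255.255.0
   glbp 10 ip 10.0.10.1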

Finally, how big of a problem is a large routing table? I am sure the 6509s can handle it. If you are worried about convergence time, the equal-cost routes allow the network to converge quickly without a recalculation.

Good luck,

Tristan Rhodes
