I wanted to try to prompt some discussion about the 3-tier design often implemented by Cisco VARs. I've come across a 2+ year old implementation that has devolved into a bit of a mess, and I'm trying to work out some of the finer points. Some discussion might help me think through what should and shouldn't be there.
There are multiple areas I'd like to flesh out, but in this thread I'd like to tackle just the core.
The basic concept is a core (for this customer, two 6509s with MSFCs) and several distribution blocks (each typically a pair of 6509/MSFCs as well). An access unit is typically a 4006 with no routing capability, and the end-user switches hang off the 4006.
For this customer, the core is two 6509's with routing modules. The design threw me at first, but I think I understand a reason for doing it that way. Each core has its own set of VLANs, with a separate VLAN for each distribution block.
Each distribution block has one gig fiber run from each of its two 6509 DB units: one lands in VLAN A, the other in VLAN B. Because each DB run is in a different VLAN, HSRP does not run on the core-facing interfaces. So each core carries independent routes, doubling the size of the routing tables. There are two routes for every location, one through VLAN A and one through VLAN B, which makes the EIGRP table very large and not terribly readable:
network 10.128.0.0 via 10.0.0.1
network 10.128.0.0 via 10.0.1.1
network 10.129.0.0 via 10.0.0.1
network 10.129.0.0 via 10.0.1.1
... and so on for six printed pages ...
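For reference, I believe what produces those duplicate entries is just EIGRP installing both equal-cost paths. A minimal sketch of a DB-side config that would behave this way (the VLAN numbers, EIGRP AS, and interface layout are my guesses, not taken from their boxes):

  ! DB1 MSFC (sketch) - one uplink SVI toward each core, each on its own /30
  interface Vlan901
   description Uplink to CORE A ("VLAN A")
   ip address 10.0.0.2 255.255.255.252
  !
  interface Vlan902
   description Uplink to CORE B ("VLAN B")
   ip address 10.0.1.2 255.255.255.252
  !
  router eigrp 100
   network 10.0.0.0
   ! maximum-paths defaults to 4, so both equal-cost routes get installed
   ! and every remote prefix shows up twice - hence the six-page table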
I thought this design was incorrect at first, but I think it was the easiest/only way to provide 2 Gb of load-balanced capacity across the cores. When I do traceroutes, the probes round-robin between the two core paths.
I've always read that the core design was for fast switching. Cisco materials make "switching" a confusing word, because it can mean layer 2 switching or fast layer 3 routing; when I read "switching" I always think within a VLAN. So my assumption had always been that the cores should carry a single VLAN, with all DBs having interfaces on that single VLAN, which would allow HSRP and collapse the multiple physical paths behind a simplified routing table. That is part of the design construct from the distribution layer to the access layer, so I thought it would occur at this layer as well. My thinking was that each DB would have IPs on this core VLAN but sit in a separate VTP domain, and the core wouldn't participate in any VTP at all. So all the DBs would share one VLAN for the cores to switch across. The links wouldn't be trunks, so spanning tree wouldn't be an issue. Maybe that is bad design--I'm not sure. For the network here, they instead configure each fiber interface as a VLAN with a point-to-point network, like you would a WAN link:
Fiber run (DB1 A to CORE A): CORE A port 1/1 = 10.0.0.1/30, DB1 A port 1/1 = 10.0.0.2/30
Fiber run (DB1 B to CORE B): CORE B port 2/1 = 10.0.0.5/30, DB1 B port 2/1 = 10.0.0.6/30
Fiber run (DB2 A to CORE A): CORE A port 1/2 = 10.0.0.9/30, DB2 A port 1/2 = 10.0.0.10/30
Fiber run (DB2 B to CORE B): CORE B port 2/2 = 10.0.0.13/30, DB2 B port 2/2 = 10.0.0.14/30
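Assuming they're running hybrid (CatOS on the Sup, IOS on the MSFC), which I haven't confirmed, each "fiber interface as a VLAN" is probably built roughly like this (VLAN number invented):

  On the CORE A Supervisor (CatOS):
   set vtp mode transparent
   set vlan 901 name COREA-to-DB1A
   set vlan 901 1/1

  On the CORE A MSFC (IOS):
   interface Vlan901
    description Point-to-point to DB1 A
    ip address 10.0.0.1 255.255.255.252

Native IOS would collapse the CatOS half into a routed port or SVI, but the /30-per-link idea is the same.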
So you get dual routes everywhere, bigger routing tables, and diverse round-robin pathing. But in this case the load is entirely on one of the two cores--despite the round-robin paths. I believe this is just one of many issues here.
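One thing I'd check on the "all the load on one core" symptom: traceroutes can be misleading, because router-originated packets are process-switched and alternate per packet across equal-cost paths, while transit traffic is (assuming CEF here, which I haven't verified) load-shared per destination by default, so a handful of busy hosts can all hash onto the same core. Something like this is worth a look (addresses are just examples from the prefixes above):

  ! verify which path a specific flow actually takes
  show ip cef exact-route 10.129.0.25 10.128.0.80
  !
  ! per-destination is the default; per-packet spreads load evenly
  ! but reorders packets, so it's usually a last resort
  interface Vlan901
   ip load-sharing per-destination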
I'm wondering: is this the only/best design--point-to-point links with the core doing layer 3 routing between them? I've read several materials on this design, but none go into the specifics of the exact IP scheme or the interface, routing, and switching configurations. I need to understand how this is supposed to work so we can architect a path back to a functional network.
If a customer had a requirement for 2 Gb of capacity from DB to core, is there no other way to load balance across two diverse links? Plain HSRP couldn't be used, since one IP/6509/fiber must be standby, so it seems like a doubly large routing table is unavoidable. Granted, they could still do a single VLAN across both cores, but the advantage of being able to use HSRP is lost if you require 2 Gb of capacity. (BTW, this customer is so far from needing that much that it isn't funny!) I also don't think you can EtherChannel (bind ports) across two boxes. Maybe you can?
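One trick that might answer the load-sharing question without doubling the routing table (not what's deployed here, just a sketch) is multigroup HSRP on a single shared VLAN: run two standby groups, make each core active for one group, and split the DBs or subnets between the two virtual IPs. Everything below -- VLAN, group numbers, addresses -- is invented:

  ! CORE A (sketch)
  interface Vlan900
   ip address 10.0.2.2 255.255.255.0
   standby 1 ip 10.0.2.1
   standby 1 priority 110
   standby 1 preempt
   standby 2 ip 10.0.2.254
   standby 2 priority 90
   standby 2 preempt
  ! CORE B mirrors this with the priorities swapped, so it is active for group 2

You still can't bond the two fibers into one 2 Gb pipe (EtherChannel across two separate chassis isn't something these boxes can do, as far as I know), but both links carry traffic and you keep a single VLAN for redundant devices to sit on.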
A single VLAN also allows direct connection of redundant IP devices (HSRP, load-balancing VIPs, multipathing, PIX active/standby failover), although we could probably debate whether such a device should be directly connected to the core. The server farm here is behind a firewall, and the firewall connects directly to the cores. However, because the redundant PIXes didn't share a VLAN from core to core, the failover PIX was simply disconnected. No one understood that you can't connect an IP-redundancy scheme across different VLANs. They then tried to do the same thing with BigIPs.
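The PIX case illustrates it well: with PIX failover, the standby unit's addresses are defined in the same subnets as the primary's so it can take over the primary's IPs, which is exactly why both units have to sit in the same VLANs. Roughly (PIX 6.x syntax, addresses invented):

  ! primary PIX (sketch)
  ip address outside 10.0.3.1 255.255.255.0
  ip address inside 10.0.4.1 255.255.255.0
  failover
  ! standby addresses - same subnets, therefore same VLANs on both units
  failover ip address outside 10.0.3.2
  failover ip address inside 10.0.4.2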
If a redundant 1 Gb path is enough, would a single VLAN across both cores be worse? I would think HSRP would have a quick recovery time and would eliminate the need for 100% of the routes to fail over when a core fails, or for 100% of the routes to a DB to fail over when a DB dies. Considering the six-page routing table (another issue here), I would think routing convergence would take longer than HSRP failover, but I can't know for sure.
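For what it's worth, HSRP recovery time is basically the hold timer, and the defaults (3-second hellos, 10-second hold) can be tightened if that matters; a sketch of the knobs, with invented values:

  interface Vlan900
   ! hello 1 sec, hold 3 sec - the standby core takes over within ~3 seconds
   standby 1 timers 1 3
   standby 1 preempt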
Any discussion points regarding design and configuration of the Core are welcome.
DiGiTAL_ViNYL (no email)