C3845, Dual Hub Dual DMVPN Hub-To-Spoke, Limitations?

Hello everyone,

We are currently designing an IPsec-secured dual-hub, dual-DMVPN (DH-DD) 'hub-to-spoke only' DMVPN based on two C3845 routers, using EIGRP as the dynamic routing protocol, in particular to provide the ISDN backup. The bandwidth per hub is about 30 Mbit/s.
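For reference, this is a minimal sketch of the hub-side tunnel we have in mind (addresses, AS/network-id numbers and the IPsec profile name are only placeholders, not our real configuration):

  crypto ipsec profile DMVPN-PROF
   set transform-set TS                 ! transform set "TS" defined elsewhere
  !
  interface Tunnel0
   ip address 10.0.0.1 255.255.240.0
   ip nhrp map multicast dynamic        ! hub adds spokes to the NHRP multicast list as they register
   ip nhrp network-id 100
   tunnel source GigabitEthernet0/0
   tunnel mode gre multipoint
   tunnel key 100
   tunnel protection ipsec profile DMVPN-PROF
  !
  router eigrp 100
   network 10.0.0.0 0.0.15.255
   no auto-summary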

Does anyone have deeper experience with this setup?

In several documents, available under

formatting link

one can see that Cisco recommends only 350 spokes per mGRE tunnel on the hub and two mGRE tunnels per hub. It is also said that this is a limitation of the routing protocol.

For our design, which must be able to handle nearly 1500 C836 spokes with 1024/128 kbit/s ADSL lines, this is far too low.

I wonder why this limit is so low and which part of the concept is responsible for it.

The spokes will be configured as "EIGRP stubs" and will only receive one summary route from the hub, so routing traffic and routing load should be minimal.

Unfortunately I have read some documentation saying that route summarization would not be possible in this scenario with DMVPN. Is this the case, and if so, why? We only want hub-to-spoke routing, not spoke-to-spoke, so I do not understand this restriction.
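To make the routing part concrete, this is roughly what we are planning (AS number, prefixes and names are placeholders only):

  ! Hub: advertise only one summary route towards the spokes
  interface Tunnel0
   ip summary-address eigrp 100 172.16.0.0 255.255.0.0
  !
  ! Spoke: EIGRP stub, so it is never queried and only advertises
  ! its connected networks plus any summaries
  router eigrp 100
   network 10.0.0.0 0.0.15.255
   network 192.168.1.0
   eigrp stub connected summary
   no auto-summary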

Besides this, does anyone have more information about "DMVPN Phase 3" and when it will be released?

Thanks in advance for your comments and hints,

Dennis Breithaupt

Reply to
Dennis Breithaupt

Hi Dennis,

The largest currently deployed customer base for DMVPN is 3000 spokes as of June 21, 2005.

Cisco is working on customer deployments for networks with more than 10,000 spokes.

Cisco's DMVPN Phase 3 is a set of features that will improve scalability, stability, and manageability of DMVPN networks.

These features are:

  1. Increased DMVPN cloud sizes with routing and scalability enhancements.

  2. Enhanced QoS for traffic shaping per spoke.

  3. Better support for multicast.

  4. Better management support.

To learn the exact availability dates of these features, please email Cisco directly at: dmvpn-core at cisco.com

Sincerely,

Brad Reese BradReese.Com Cisco Repair Service Experts

formatting link
Hendersonville Road, Suite 17 Asheville, North Carolina USA 28803 USA & Canada: 877-549-2680 International: 828-277-7272

Reply to
www.BradReese.Com

[...]

Hi Brad,

thank you for your reply.

What you are writing is a quote from Cisco's FAQ about DMVPN at

formatting link
But do you have any experience with this setup yourself?

One does not know if those 3000 spokes were on one single hub or, for example, distributed over a daisy chain or a cluster with a load balancer as described here:

formatting link

I think it is unlikely that they mean 3000 spokes on one hub in this context.

I especially want to understand the reason for the 350-spoke limit on one mGRE interface that is also mentioned in the FAQ, and whether it has something to do with EIGRP and route summarization.

Thanks, Dennis Breithaupt

Reply to
Dennis Breithaupt

Hi Dennis,

Cisco featured DMVPN in their ASK THE EXPERT series in late July and early August 2005:

Welcome to the Cisco Networking Professionals Ask the Expert conversation.

This is an opportunity to learn with Cisco expert Haseeb Niazi how to deploy dynamic multipoint VPN solutions using multipoint GRE (mGRE) and next hop resolution protocol (NHRP) with IPSec to enable zero-touch deployment of large scale VPN networks.

Haseeb Niazi is a solutions engineer at Cisco Systems Inc. with over five years of experience in network-based security services. He holds a master's degree in electrical engineering. He has presented to both internal and external audiences at various conferences and has represented Cisco at a number of customer events.

His current focus is on testing the scaling and performance of large-scale network-based security services.

formatting link
?page=netprof&CommCmd=MB%3Fcmd%3Ddisplay_location%26location%3D.1dd8eec9

-------------------------------------------------

You may find this exchange of interest.

Replied by: gkleffner - Aug 5, 2005, 6:38am PST

What are the current recommendations for the number of spokes that a hub router can support? Specifically for EIGRP. Will scaling be addressed in future versions of IOS? Also, how can priority queuing be applied to a tunnel interface? It doesn't appear to be supported.

Replied by: jimmy-dotson - CCIE - Aug 5, 2005, 7:13am PST

The number of spokes (the number of supported tunnels, anyway) is platform dependent, I'm pretty sure. We use 7200s with the original VAM, which according to our local Cisco guys will scale to about 300-400 tunnels. Not sure about the VAM2. There is also the 6500 IPSec VPN Services Module, which supports 8000 tunnels.

formatting link

Also, we run our remotes as EIGRP stubs when possible - cuts down on route queries significantly. Unsure if there is a published number of acceptable neighbors - we could not find this info last year when converting to EIGRP.

Finally, QoS is different but the same for GRE tunnels :) It is different in that you apply "qos pre-classify" to your crypto maps and tunnel interfaces, and "service-policy out" to the physical interface (where the crypto map is applied). It is the same in regard to the class-map, policy-map and ACL (or whatever matching you're doing: DSCP, etc.) config.
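If it helps, the way the pieces fit together is roughly this (class/policy names, interfaces and the crypto map are made up for the example; peer and transform-set config omitted):

  class-map match-all VOICE
   match ip dscp ef
  !
  policy-map WAN-OUT
   class VOICE
    priority 256
   class class-default
    fair-queue
  !
  interface Tunnel0
   qos pre-classify                     ! classify on the inner header, before GRE/IPSec
  !
  crypto map MYMAP 10 ipsec-isakmp
   qos pre-classify
  !
  interface Serial0/0
   crypto map MYMAP
   service-policy output WAN-OUT        ! queueing happens on the physical interface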

formatting link

hth!

Replied by: hniazi - CCIE - Aug 5, 2005, 12:16pm PST

Good response with a good explanation. A couple of clarifications:

1) VAM, VAM2 or VAM2+ does not matter in the case of EIGRP spoke scalability.

2) The 6500 can terminate 8000 tunnels, but if running DMVPN natively on the 6500, EIGRP is handled by the MSFC and the EIGRP spoke limitation is 400-450.

3) For increased scale, you can use the 6500 to terminate IPSec and hand over the mGRE/EIGRP processing to an MWAM card or a number of 7200s connected to the 6500, as in a server farm environment.

formatting link

4) If you only have a single EIGRP hub (no daisy chaining), stub will definitely be very helpful. If there is a neighbor on the hub mGRE tunnel that is not a stub (say, another daisy-chained hub), the hub will start querying all the spokes, whether they are stubs or not.

Replied by: jimmy-dotson - CCIE - Aug 7, 2005, 9:50am PST

That's excellent info - 400 to 450 neighbors is news, but very helpful news, to me...

Do you have a URL or book reference on number 4, for further study? We have 5 EIGRP neighbors on a 7200 (internal LAN) and 130+ neighbors (as stubs, we hope :)) via the IPSec GRE tunnels - we would like to verify that they will in fact remain stubby (without debugging on the production network - additional reading would be a great place to start).

thx!

Replied by: hniazi - CCIE - Aug 7, 2005, 8:35pm PST

In your case, non-stub neighbors on the LAN should not cause issues with stub neighbors on the mGRE tunnel. Essentially, there should be no non-stub neighbor on the mGRE tunnel interface because, as I understand it, the limitation is on a per-interface basis.

I got the info directly from the developers and do not have URLs or books with this info. I will try to find a reference if I can.
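In the meantime, a couple of show commands should let you check the stub behaviour without debugs (from memory, so the exact output wording may differ between IOS releases):

  ! On the hub: the neighbor detail should mark the stub peers and what they advertise
  show ip eigrp neighbors detail Tunnel0
  !
  ! On a spoke: confirm the stub configuration and that only the summary route is received
  show ip protocols
  show ip route eigrp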

Replied by: jimmy-dotson - CCIE - Aug 8, 2005, 5:42am PST

Perfect - makes more sense described this way!

Sometimes you get caught up trying to make more of things than is actually there :)

'preciate all of the information.

Replied by: hniazi - CCIE - Aug 5, 2005, 10:03am PST

We have tested the 7200 as a headend. The recommended number of EIGRP spokes is 350-375. We are currently working on making EIGRP more robust and trying to find the bottlenecks in the DMVPN scenario. Nothing concrete yet; the DEs are looking into it.

One of the items we are also looking at for future releases is tagging a packet with a QoS group ID and then applying QoS to it. On the tunnel you can configure the "qos pre-classify" command and apply the service policy on the physical interface; this classifies the traffic based on the original header, before encapsulation.

-------------------------------------------------

Hope this helps.

Brad Reese Worldwide Cisco Repair

formatting link

Reply to
www.BradReese.Com

[...]

[...]

Hello,

thank you. That clarifies things a bit.

As a conclusion, what I take from the discussion is that 350 to maybe 400 spokes is the maximum by design for EIGRP, even if those spokes are stubs, but no one knows why ;)

So two things remain unclear to me:

1) If the design limitation of EIGRP is that only ~350 neighbors are possible, why should 700 neighbors be possible when distributed over two mGRE interfaces? Where is the logic in that? For the EIGRP engine it should make no difference.

2) The predecessor of the network to be designed is a current design with approx. 400 C836s as EIGRP stub neighbors on one C3845, terminating approx. 400 point-to-point GRE tunnels. EIGRP is only exchanging some hellos; no queries, no updates, due to the use of only one summary route and EIGRP stub. The CPU load of the system is less than 10%. So where is the limit of 350 "spokes" when, in this scenario, the CPU is idle most of the time (besides doing some routing and encryption)?

I've checked out different design papers on the Cisco page about EIGRP

formatting link

and the document about the advances in EIGRP development:

formatting link

There is no reference to such a limit anywhere, so what is the reason for it?

Can someone provide further clarification on this?

In parallel we have opened a request with Cisco TAC, hoping to receive some information from them in the next few days.

Thanks in advance,

Dennis Breithaupt

Reply to
Dennis Breithaupt

There is a big impact on processor load from running a dynamic routing protocol - and the biggest hit is traditionally maintaining adjacencies with other routers: the more you have, the bigger the load.

Also, you are running IP over IP, which is going to be more complex for the router than simple forwarding.

It is worth asking Cisco if a different box can offload some of this processing to an accelerator - a 7301 maybe?

I have seen similar deratings for central routers before, with other routing protocols and other manufacturers. I would expect EIGRP/OSPF/BGP to only support 10% or less of the number of peers that static routes would.

The main limit is not processing when everything is working, but when there is a big change in the network. Stubs help a bit with that, but the main load still lands on the central router.

I don't think there is a limit due to the protocol - I think it is due to resources in the router.

I am guessing, but if you split the load over two or more mGRE interfaces and assume that only one undergoes a major reconvergence at a time, then there are only updates to process and forward from the sources on that mGRE interface. Or maybe you get some load balancing, with only 50% of the tunnels "active" at any one point.

One thing that may be a better design is to split tunnel termination from the main WAN link, so that you can have several central routers sharing the load. After all, even if Cisco suddenly says "OK, it works with 1000 tunnels", there is always going to be a scale point at which you hit the central box's limits...

Maybe you are attacking this the wrong way. The cost of a 3845 is about $13k, so say you pay that for a discounted box with extra options. Central router cost per supported site = $13k / 350, or approx $40 - probably around one month's rent of the broadband link at the other end, or 5 to 10% of the cost of the far-end router. So is the cost of several extra 3845s a big deal in your overall project? You probably want a design with TAC support, and maybe underwritten by Cisco - one of the things they tend to do is make sure there is plenty of headroom, since that makes the system more stable and reliable, and thus less hassle to support.

That way you can rack up multiple 3845s as processing engines and build a big enough farm to do the job. If all the boxes are the same, you can keep a "hot spare" in case of a failure - and a failure hits fewer remote sites.

formatting link
especially the document about the advances in EIGRP development

formatting link

Reply to
stephen

1500 * $300 + $26,000 = $0.5M

You mention TAC support; however, my understanding is that for such a deployment Cisco will give you pre-sales design support.

Get yourself an account manager and a systems engineer. It may take a few phone calls, but I would guess that Cisco or a partner will take on pretty much all of the design work and will effectively underwrite the design.

Reply to
anybody43

Cisco sales offices in Germany:

formatting link
Cisco sales offices worldwide:

formatting link
Sincerely,

Brad Reese Cisco TAC Contacts Worldwide

formatting link

Reply to
www.BradReese.Com
[...]

Of course you're right about that. The design support is already on the way...

However, a partner has been involved since the beginning of the project, but this very specific EIGRP design limit was unknown to that partner, and even to our Cisco systems engineer at first.

In fact, it seems hard even for Cisco itself to say why and where this limit of 300 spokes per mGRE interface comes from.

But to give a bit of feedback to the group:

We will probably change the design to use two 72xx routers as SLB (server load balancing) frontends and a farm of 38xx/26xx/28xx routers as VPN terminators, as pointed out in Cisco's "Large Scale DMVPN Deployment" presentation.

That way we can use the same configuration on all hubs and (nearly) the same configuration on all spokes, and we can scale dynamically by increasing the pool of hub routers.
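For anyone finding this thread later, the rough shape of the SLB frontend on the 72xx would be something like the following (names and addresses are placeholders, and the exact virtual-server handling of ESP/ISAKMP needs to be verified against the IOS SLB documentation for the release in use):

  ip slb serverfarm DMVPN-HUBS
   real 10.1.1.11
    inservice
   real 10.1.1.12
    inservice
  !
  ip slb vserver DMVPN-IKE
   virtual 192.0.2.1 udp 500            ! all spokes point at this single virtual hub address
   serverfarm DMVPN-HUBS
   sticky 3600                          ! keep a given spoke pinned to the same real hub
   inservice
  !
  ! a matching virtual server for the ESP traffic is also required so that the
  ! whole IPSec session lands on the same real hub; its syntax is release
  ! dependent and therefore left out here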

Thank you all for your help,

Dennis Breithaupt

Reply to
Dennis Breithaupt
