MLPPP vs. Cisco CEF

For load balancing several T1s to the same provider to create, say, a 4xT1 6 Mbps pipe, is it better to go with MLPPP or Cisco CEF? I hear MLPPP is a CPU hog, but Cisco CEF results in out-of-order packets. Are out-of-order packets really much of an issue?

These T1s feed a small ISP with a few class Cs, so there are a large number of connections going across them. That's why I worry about MLPPP being a CPU hog.

Also, with a Cisco 36xx series router, what is the maximum number of T1s you can balance with CEF?

Matt


I started reading through the MLPPP RFC, but it is not easy reading. In the time I devoted to it, I did not manage to figure out how load distribution is handled.

Cisco CEF has several distribution methods available: flows can choose a pipe based upon the source IP, the destination IP, or a couple of possible logical combinations of them (such as xor'ing them); or flows can be distributed "per packet" with the packet always being distributed to the next interface in sequence. If the majority of traffic is destined for the same single IP address (e.g., the main web site) then you should model further to see which distribution mechanism would be the best idea.
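If it helps, here is a minimal sketch of the relevant IOS knobs, assuming four equal-cost static routes and made-up interface numbering (nothing here is from the original post). Per-destination sharing is the default once CEF is enabled; per-packet is set on each outgoing interface:

ip cef
!
! four equal-cost routes give CEF four paths to share across
ip route 0.0.0.0 0.0.0.0 Serial0/0
ip route 0.0.0.0 0.0.0.0 Serial0/1
ip route 0.0.0.0 0.0.0.0 Serial0/2
ip route 0.0.0.0 0.0.0.0 Serial0/3
!
interface Serial0/0
 ! the default: hash on src/dst so each flow sticks to one T1
 ip load-sharing per-destination
!
! alternatively, on each member interface, trade flow affinity
! for an even spread (and possible reordering):
! ip load-sharing per-packet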

It depends partly on how you choose your flow distribution for CEF. If you choose per-packet, then yes, you can definitely get out-of-order packets, as one of the T1s might be marginally faster or less busy than another. If, though, you've arranged the distribution so that the entire conversation for one flow normally goes through the same pipe (provided that the pipe stays up the entire time), then you will not get out-of-order packets: you are submitting the traffic to a single strictly-serialized T1, and T1s are private point-to-point links that always deliver bits in order. If you were feeding six fibres instead, with routed links in between, then you would have the possibility of two packets in the same pipe travelling different routes -- which is a normal possibility for the internet.
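Under per-destination sharing you can check which pipe a given conversation hashes onto; IOS has an exact-route lookup for this (the addresses below are just examples):

Router# show ip cef exact-route 10.1.1.1 192.0.2.10

It prints the output interface CEF would select for that source/destination pair, which is handy for spotting a busy flow that is pinned to a single T1.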

If I understand correctly, MLPPP buffers packets so as to attempt to deliver in-order, but I might have misunderstood that point.

That might perhaps be more of a question for the ethernet newsgroup ;-) I don't know the answer myself. Keep in mind, though, that you cannot control the entire internet: the most you can do is the best you can with what reaches you, and what reaches you might be out of order already. (Didn't Solaris transmit fragments in reverse order??)

Not so many; the 36xx series is noticeably faster than the 26xx series, but it still isn't great. You might want to look at the 38xx literature. One thing about the 38xx literature is that although the line rates given appear marvelously high, the number of recommended T1 connections for each model seems fairly low. I did not see at first why there was such a big discrepancy, but a few months ago I was looking at some of the key performance figures for the 38xx and it became clear. Unfortunately I've now forgotten again :( but I seem to recall it was something along the lines of comparing the packets-per-second performance if you assumed the T1s were loaded down with continuous streams of minimum-length packets. Anyhow, if you cross-compare those performance figures to the 36xx models, that might provide more realism about what the 36xx -really- support (sorry, I don't have any experience with that area myself).
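For a rough feel (my back-of-the-envelope, not a figure from any datasheet): a T1 carries 1.536 Mbps of payload, and a minimum-length IP packet over PPP is on the order of 48 bytes on the wire, so one fully loaded T1 works out to about 1,536,000 / (48 * 8) = 4,000 packets per second, or roughly 16,000 pps for a 4xT1 bundle. Compare that to a router's rated pps and you get a more honest idea of how many T1s it can really drive.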

If I recall correctly, the top-end 36xx model has noticeably higher performance, and supports capabilities and interface types not supported in the other 36xx models; it is almost a different model line. I've forgotten now, though, how the top end of the 38xx compares to the top end of the 36xx.

Unfortunately, once you get beyond the 36xx line, you officially need to take a big jump, up to the 7600 series: everything in between is classed as a "switch" rather than as a "router". It can be pretty hard, though, to figure out the difference between a "switch" that is only sold these days with a minimum of a layer 3 routing supervisor, and a "router".

(I do recall finding some subtle fundamental capability differences between the Cat 4000/4500 series, and the Cat 6500 series. It wasn't an important difference to us at the time, but it could be important to some. {QoS facilities, maybe??} I could dig it out of my notes if need be.)

Walter Roberson

MLPPP is preferred in every way if your provider supports it and your hardware can keep up, especially with IPsec or VoIP, as out-of-order packets greatly affect performance.

I've tried both, for years. MLPPP is the way to go (up to four T1s). Any more and you pass the point of diminishing returns.

The analogy I use is RAID 0. Would you use RAID 0 on 5 or 6 hard drives? The chance of one drive taking down the entire array is much higher than with 2 or 3.

I have two 4xT1 MLPPP bundles in my company and they work fine. I have several more 2xT1 bundles.

-Rob


Is a Cisco 3640 going to keep up?

Do you have any sample configs of using MLPPP on a 36xx router?

Matt


Here are the basics of it.

interface Multilink1
 description 4xT1 bundle
 ! the bundle interface carries the IP address; the member serials carry none
 ip address 1.2.3.4 255.255.255.252
 ip route-cache flow
 ppp multilink
 ppp multilink fragment disable
 ppp multilink group 1
!
interface Serial0/0/0
 no ip address
 encapsulation ppp
 no fair-queue
 service-module t1 timeslots 1-24
 ppp multilink
 ! the same group number joins this link to Multilink1
 ppp multilink group 1
!
interface Serial0/1/0
 no ip address
 encapsulation ppp
 no fair-queue
 service-module t1 timeslots 1-24
 ppp multilink
 ppp multilink group 1
!
interface Serial0/2/0
 no ip address
 encapsulation ppp
 no fair-queue
 service-module t1 timeslots 1-24
 ppp multilink
 ppp multilink group 1
!
interface Serial0/3/0
 no ip address
 encapsulation ppp
 no fair-queue
 service-module t1 timeslots 1-24
 ppp multilink
 ppp multilink group 1
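Once the bundle negotiates, a couple of standard show commands are worth a look:

Router# show ppp multilink
Router# show interfaces Multilink1

The first lists the member links and their state; the second should report a bandwidth equal to the sum of the member T1s (about 6176 kbit for 4xT1).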

Bob
