Multiplexing and packet loss

Hi

I have a 1 Gbit (1000BASE-ZX) link between two sites that, due to a large number of small packets, has a lot of buffer-related packet loss in one direction. A solution I'm planning to implement is CWDM on this link and teaming at least four interfaces into an etherchannel. I'm however unsure if this will actually solve any of the packet loss problems. In theory four interface buffers should be better than one, but I wonder if anyone has any real-world data on this. Will a CWDM etherchannel only move the buffer problem from the interface to the etherchannel buffer?

Regards, Fredrik

Reply to
Hoffa

One thing to consider is the path-selection algorithm used by etherchannel. The options vary across Cisco platforms and range from destination MAC (whereby, say, all traffic to a particular router uses the same path) through source/destination MAC and also IP addresses. I have the idea that some platforms can use TCP/UDP ports, but I forget.

Depending on the nature of the traffic you may get no load distribution at all in the worst case. There is no option to balance traffic depending on the individual link loads.
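On many Catalyst platforms you can check and change the hash method with something like the following - this is from memory and the exact options vary by platform and IOS version, so treat it as a sketch rather than a recipe for your box:

  Switch# show etherchannel load-balance
  Switch(config)# port-channel load-balance src-dst-ip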

If you can change to 4 routed links there may be a per-packet load balancing regime available that is supported at full speed. Beware though that some per-packet load balancing schemes enormously reduce throughput (by a factor of more than 10). Behaviour depends very much on the platform.
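Where the platform supports it, CEF per-packet load sharing is configured per interface, roughly like this (interface numbers are just placeholders). Note that per-packet schemes also reorder packets, which can hurt TCP and voice:

  Router(config)# interface GigabitEthernet1/1
  Router(config-if)# ip load-sharing per-packet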

Another alternative may be some sort of packet prioritisation (QoS).

If you provide details of the platform in use, the software version, and output that shows the problem, then perhaps someone will come up with some suggestions. Oh, and the nature of the traffic - e.g. voice will mostly have small packets.

Reply to
bod43

What kind of hardware are you using on this link? And if it's a 6500 platform - what kind of line card?

Regards, Andrey.

Reply to
Andrey Tarasov

How did you come to the conclusion that the packet loss is due to a large number of small packets? Even if that were the case, I very seriously doubt that using CWDM to multiplex this onto 4 interfaces would solve the problem, and it would create other issues. You probably just need to do some simple buffer tuning.

Post the output of "show version", "show interface" and "show buffers"

Reply to
Thrill5

You need to understand the loss mechanism.

If you are running out of bandwidth, then CWDM may help.

If the device driving the link cannot cope with the number of packets, then giving it more bandwidth to drive is likely to make it worse.

So equipment, traffic profile and details?

Reply to
Stephen

Thank you for the input. I'll give as much technical info as possible. I've done some packet sniffing on the link and it's easy to see the number of 64-byte packets coming in floods on the interface. The source of the packets is a server application cluster located at both ends of the link; they are sending updates back and forth and then out to the Internet.

Switch: 6513 with Sup720-3B
Line card: WS-X6516A-GBIC

Thrill5: What kind of buffer tuning might provide a solution? I was under the impression that one should leave outgoing buffers and queues alone.

Regards, Fredrik

Reply to
Hoffa

The default buffer allocations are fine 95% of the time, but on some links you need to adjust them for your specific traffic, especially on high-bandwidth, highly utilized links. A "show interface" and "show buffers" will make it very obvious if buffer tuning is needed. The fact that you have lots of small packets makes it very likely that you need to increase the default buffer allocations. If you post the outputs requested above, I can give you recommendations as to how to increase the buffer sizes.
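To give an idea of what I mean, IOS system buffer tuning looks roughly like this (the "small" pool is the one 64-byte packets land in). The numbers here are purely illustrative - the right values depend entirely on the misses and failures your "show buffers" output shows:

  Router(config)# buffers small permanent 300
  Router(config)# buffers small min-free 100
  Router(config)# buffers small max-free 600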

Reply to
Thrill5

According to
formatting link
this line card has only 1 MB of buffer per port. The recommended replacement - WS-X6748-SFP or WS-X6724-SFP - has 1.77 MB on egress, so it's six of one, half a dozen of the other.

The question is - are you experiencing tail drops in the egress queue or fabric drops? Can you post "show interface"? The number of output drops and its ratio to the total traffic is the most interesting thing here.
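Something along these lines should show it (interface numbers are placeholders, and command availability varies a bit by IOS version):

  Switch# show interfaces GigabitEthernet3/1 | include drops
  Switch# show queueing interface GigabitEthernet3/1
  Switch# show fabric drop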

Nothing can be done here with buffer tuning - it's hardware-based queuing. Thrill5 most likely assumed a 7200 or similar platform.

Regards, Andrey.

Reply to
Andrey Tarasov

Also check the inbound queues.

You can also get issues on input, since buffering is needed between the blade in the switch and the forwarding engine.

I agree the 6724 is a better blade to use, but mainly because it uses the fabric rather than the shared bus, so it won't contend with other traffic on the bus.

The fabric tap gives a 20 Gbps channel shared by the 24 GigE ports, which doesn't sound like much oversubscription.

However, the Cisco hardware wraps every packet in extra control info as it crosses the fabric link, so especially with minimum-size packets the usable bandwidth is only maybe 70 to 75% of that.
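Back-of-the-envelope, using the figures above:

  24 ports x 1 Gbps             = 24 Gbps worst-case offered load
  fabric channel                = 20 Gbps (about 1.2:1 oversubscription)
  usable at 70-75% (64B frames) = roughly 14-15 Gbps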

Reply to
Stephen

formatting link

Yup, you're correct! A 3750 Metro switch would be a good platform to use for this application.

Reply to
Thrill5

I suggested a 3750 Metro switch because this switch has two gig L3 interfaces (like a router interface). Gigabit switching interfaces on a 6500 don't have the same queuing and QoS capabilities that a true L3 routed interface does. Since you don't need these capabilities, I'm still puzzled as to why you are seeing traffic drops. Are you seeing output drops or input drops?


Reply to
Thrill5
