ATM interface OutPktDrops

Dear all, could I get views on the issue of OutPktDrops being reported when I do:

show int atm0/0

as described below:

InPktDrops: 0, OutPktDrops: 11488/0/11488 (holdq/outputq/total)

From my (relatively uninformed) view, this does not seem ideal.

Following the Cisco reference (Troubleshooting ATM Packet Drops), I note from:

show queueing int atm0/0

Queueing strategy: fifo
Output queue 0/40, 11488 drops per VC

that there is something configurable in this respect for the PVC in question:

vc-hold-queue 4-1024 (default = 40)

My view from this is that the router is dropping packets, presumably to the 'bit bucket', prior to transmission on the interface, on account of what seems to be a relatively low default value for the hold-queue length.
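For illustration, the change I had in mind would be something like the following under the PVC (the VPI/VCI of 0/100 and the value 256 are just placeholders, not our actual values):

interface ATM0/0
 pvc 0/100
  vc-hold-queue 256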

Thanks

Reply to
GT

Wherever a fast link (e.g. 100 Mbps Ethernet) meets a slow link (e.g. ADSL), TCP will inevitably drop packets. This is how it is designed to work.

There is a possibility that the drops are due to some malicious activity; however, that cannot be determined from the information provided.

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1112
3875750 packets output, 363797767 bytes, 0 underruns

So I have roughly 1 output drop per 3,500 packets (1,112 drops out of 3,875,750 packets).

This is WAY, way less than I expect.

1/300 is perhaps OK.

Note that these drops are due to the *designed* behaviour of TCP.

Reply to
bod43

Thanks for the note back. I was trying to understand the impact of implementing this configuration change; presumably an increase in the memory used to store the additional 'buffer'.

Point noted re malicious activity, but it could also be legitimate traffic. My thought was that, for a relatively low-impact change, we could reduce the amount of TCP retransmission; while retransmission is built in to the protocol, surely it is better avoided if we can?

Reply to
Graham Turner

First, you have to understand that adding extra buffering to a slow outgoing interface isn't necessarily a good thing.

It may reduce the number of packet drops, but it will increase the latency under load. When some flow tries to saturate the output (e.g. a long transfer), the increased number of buffers will actually decrease the usability of the connection for the other users.

When you know the speed, the average packet size, and the latency you are willing to accept, you can calculate a number of buffers and maybe decide to increase it.
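As a rough illustration (the numbers are assumed, not taken from your setup): on a 512 kbit/s ADSL upstream carrying packets averaging 500 bytes, each queued packet adds about 500 * 8 / 512000 = ~8 ms of delay. The default 40-packet hold queue therefore already represents roughly 300 ms of queueing when full; raising it to 1024 would allow about 8 seconds of buffering, which would make interactive traffic unusable whenever the link is loaded.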

But understand that it is generally unwise to just add buffers because it feels good to have fewer packet drops. As others have mentioned, packet drops are part of TCP's flow-control mechanism.

Reply to
Rob

I would put it more strongly. It is probably a bad thing:-)

TCP hosts gather and exchange information on the state of the link (whether it appears congested) on a continuous basis. If you stick a big packet buffer in the path, you disrupt this communication by introducing delays to the 'messages'.

On a congested link, dropped packets are *the means* by which TCP detects the congestion. TCP cannot modify its behaviour accordingly if the congestion is hidden from it by unnaturally large buffers.

I can see where you (original poster) are coming from but in my view you are probably mistaken.

If you want to read more, search for terms like [TCP congestion window]. The congestion window is distinct and separate from the "advertised receive window".
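Roughly speaking, a sender never has more unacknowledged data in flight than

min(congestion window, advertised receive window)

The receive window is what the receiver says it can absorb; the congestion window is the sender's own estimate of what the network path can carry, and it is the part that shrinks when drops are detected.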

TCP is one of the smartest things I have ever seen. There is a *lot* going on under the hood; don't mess with it unless you know a lot more about it than I do, would be my advice. BTW - I don't know much about it, just enough to know when to leave well alone :-)

Reply to
bod43

It's been a while since I've configured ATM PVCs, but I do remember a situation where the number of output drops was really high. I think it had something to do with OAM being enabled on one side, but not on the other. I can't remember the command to enable/disable this feature on the PVC.
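If it helps, the per-PVC commands were something along these lines (the VPI/VCI and timer values are only examples, and the exact syntax should be checked against your IOS version):

interface ATM0/0
 pvc 0/100
  oam-pvc manage 10
  oam retry 3 5 1

If only one end is managing OAM, that mismatch is worth ruling out before chasing queueing settings.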

Reply to
Thrill5

You are dealing with tail drop: the FIFO queue gets full and any packets arriving at that moment get dropped.

Keywords to type into the search bar on cisco.com:

per-VC queueing, CBWFQ

Just use

class-default fair-queue

for a start.

Some tweaking of the transmit buffer (tx-ring-limit) might be required; it depends on the combination of platform, IOS version, and interface type. You want enough buffer for two full-MTU packets.
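As a sketch (the interface and VPI/VCI are placeholders, and tx-ring-limit support and units vary by platform, so treat that value as illustrative):

policy-map PER-VC-FQ
 class class-default
  fair-queue
!
interface ATM0/0
 pvc 0/100
  tx-ring-limit 3
  service-policy output PER-VC-FQ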

You will still have output drops, but performance should be much better (if it's not limited by something else).

Regards, Andrey.

Reply to
Andrey Tarasov
