Giant packets on a 10Gig interface

Hi,

I have two Cisco 6509 chassis running IOS version 12.2(17a)SX1 each with a Sup 720 and a 4 port 10-Gigabit Ethernet module:

6500-1#show mod
Mod Ports Card Type                              Model
--- ----- -------------------------------------- ------------------
  1    16 SFM-capable 16 port 1000mb GBIC        WS-X6516-GBIC
  2     4 CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE
  3    48 SFM-capable 48-port 10/100 Mbps RJ45   WS-X6548-RJ-45
  5     2 Supervisor Engine 720 (Active)         WS-SUP720-BASE
  6    16 SFM-capable 16 port 10/100/1000mb RJ45 WS-X6516-GE-TX

Mod Hw     Fw           Sw
--- ------ ------------ ------------
  1 2.0    6.1(3)       8.2(0.56)TET
  2 1.3    12.2(14r)S5  12.2(17a)SX1
  3 1.1    6.3(1)       8.2(0.56)TET
  5 3.1    7.7(1)       12.2(17a)SX1
  6 2.6    6.3(1)       8.2(0.56)TET

Mod Sub-Module                  Model              Serial       Hw
--- --------------------------- ------------------ ------------ -------
  2 Centralized Forwarding Card WS-F6700-CFC       SAD074701VB  1.2
  5 Policy Feature Card 3       WS-F6K-PFC3A       SAD08030CFA  2.1
  5 MSFC3 Daughterboard         WS-SUP720          SAD08030C29  2.1

6500-1#show run int t2/1
Building configuration...

Current configuration : 226 bytes
!
interface TenGigabitEthernet2/1
 description Link to other office
 no ip address
 udld port aggressive
 mls qos trust dscp
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
end

They are connected to each other using a TenGigabit Ethernet port over single mode dark fibre. The fibre run is a couple of miles long and supplied by a local telco. Only one of the 10Gb interfaces is being used on each switch.

The 10Gb interfaces at each end of this link are reporting a large number of giants in show int (traffic was around 150Mb/sec when I looked the other day; right now, late on a Saturday night, it's around 4Mb/sec). They are the only interfaces reporting giants.

If I look at the interfaces with SNMP I see no errors or discards.
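
For reference, this is roughly the check I ran; a minimal sketch using pysnmp's synchronous high-level API, where the hostname, community string, and ifIndex are placeholders rather than my real values:

# Minimal sketch: poll the standard IF-MIB error/discard counters.
# "6500-1.example.com", "public", and IF_INDEX are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

IF_INDEX = 2  # assumed ifIndex of TenGigabitEthernet2/1; verify via IF-MIB::ifDescr

for counter in ('ifInErrors', 'ifInDiscards', 'ifOutErrors', 'ifOutDiscards'):
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData('public'),                          # read-only community
        UdpTransportTarget(('6500-1.example.com', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('IF-MIB', counter, IF_INDEX))))
    if error_indication or error_status:
        print(error_indication or error_status.prettyPrint())
    else:
        for name, value in var_binds:
            print(name.prettyPrint(), '=', value.prettyPrint())

All four counters sit at zero even while the giants counter climbs, which suggests the frames are being counted as giants but still forwarded rather than discarded.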

6500-1#show int t2/1
TenGigabitEthernet2/1 is up, line protocol is up (connected)
  Hardware is C6k 10000Mb 802.3, address is xxxx.xxxx.xxxx (bia xxxx.xxxx.xxxx)
  Description: Link to other office
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Full-duplex, 10Gb/s
  input flow-control is off, output flow-control is on
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:41, output 00:00:57, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 154000 bits/sec, 122 packets/sec
  5 minute output rate 770000 bits/sec, 492 packets/sec
     16609486843 packets input, 7935718529298 bytes, 0 no buffer
     Received 175730057 broadcasts, 0 runts, 3545951873 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 input packets with dribble condition detected
     22436196006 packets output, 17205205668690 bytes, 0 underruns
     0 output errors, 0 collisions, 2 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier
     0 output buffer failures, 0 output buffers swapped out

So, are these giants anything to be concerned about, or are they just a red herring? And any idea why I only see them on the ten-Gigabit interfaces?

Reply to
DC

Giants are frames that are larger than the MTU configured on the interface. In this case, your MTU is 1500 and you are doing dot1q VLAN tagging, which adds a 4-byte tag to each frame and makes it larger than the configured MTU. Your switches are forwarding these frames and just reporting them as giants (not dropping them), so I wouldn't worry too much. You can change the MTU on the interfaces to a larger size if you wish.
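
To make the arithmetic concrete, here's a rough Python sketch; the sizes are just the standard Ethernet constants, nothing read from your switch:

# Why a full-size frame on a dot1q trunk trips a 1500-byte MTU check.
ETH_HEADER = 14   # destination MAC + source MAC + EtherType
ETH_FCS = 4       # frame check sequence (CRC)
DOT1Q_TAG = 4     # 802.1Q tag inserted after the source MAC on a trunk
MTU = 1500        # configured interface MTU (maximum payload size)

payload = 1500                              # full-size IP packet
untagged = ETH_HEADER + payload + ETH_FCS   # 1518 bytes: classic maximum frame
tagged = untagged + DOT1Q_TAG               # 1522 bytes once tagged on the trunk

print(f"untagged {untagged} bytes, tagged {tagged} bytes")
print("tagged payload exceeds MTU:", payload + DOT1Q_TAG > MTU)  # True

A 1522-byte tagged frame is over the classic 1518-byte maximum, which is exactly the oversized-but-valid case the giants counter can pick up.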

Doan

Reply to
Doan

Doan said the following on 16/03/2008 12:14 AM:

Thanks for responding. What you're saying does make sense.

One of my 6509 switches has access ports (for servers) and trunk ports. So, if 1500-byte frames are coming from a server, the 4-byte dot1q tag added on the trunk would push them over the MTU, and that would explain the giants on one of the 6509 switches.

However, all the ports on the other switch (with the exception of one port going to a voice gateway) are trunks. Yet the only ports on either switch reporting giants are the two 10Gb ports connecting the two switches.

If the giants are a result of the dot1q tag, shouldn't I be seeing them on the other trunk ports as well?

BTW,

formatting link
suggests that giants might be caused by a bad NIC:

"Frames received that exceed the maximum IEEE 802.3 frame size (1518 bytes for non-jumbo Ethernet) and have a bad Frame Check Sequence (FCS)."

Anyway, I tried setting the MTU on the two interfaces to 9216, but I'm still getting giants :-(

Reply to
DC

I think you have found the problem. Searching cisco.com, I've found the same thing:

"Jumbo frames are not defined as part of the IEEE Ethernet standard and are vendor-dependent. They can be defined as any frame bigger than the standard ethernet frame of 1518 bytes (which includes the L2 header and Cyclic Redundancy Check (CRC)). Jumbos have larger frame sizes, typically

Giant frames are defined as any frame over the maximum size of an ethernet frame (larger than 1518 bytes) that has a bad FCS.

Baby Giant frames are just slightly larger than the maximum size of an ethernet frame. Typically this means frames up to 1600 bytes in size."
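
Putting those definitions into a quick Python sketch (the 1518 and 1600 thresholds come straight from the quoted text; the fcs_ok flag is my assumption about how you'd tell a giant from a baby giant):

# Rough classifier for the quoted Cisco frame-size definitions.
MAX_STANDARD_FRAME = 1518   # L2 header + payload + CRC, untagged
BABY_GIANT_LIMIT = 1600     # "typically this means frames up to 1600 bytes"

def classify(frame_len: int, fcs_ok: bool) -> str:
    if frame_len <= MAX_STANDARD_FRAME:
        return "standard"
    if not fcs_ok:
        return "giant"           # oversized *and* bad FCS
    if frame_len <= BABY_GIANT_LIMIT:
        return "baby giant"      # e.g. a 1522-byte dot1q-tagged frame
    return "jumbo"               # vendor-dependent larger frames

print(classify(1522, fcs_ok=True))   # -> baby giant (normal trunk traffic)
print(classify(1600, fcs_ok=False))  # -> giant (what a bad NIC would produce)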

I think you should put in a sniffer and find the bad NIC.

Doan

Reply to
Doan

DC said the following on 15/03/2008 11:08 PM:

Regarding the above problem with giants being reported on a WS-X6704-10GE module in a Cat6500 running 12.2(17a)SX1: I found the following Cisco bug report:

CSCef87392 Bug Details

Giants incorrectly counted on trunk with 67xx modules

A Catalyst 6500 may increment giants on 67xx cards for ports that are trunking. This does not affect the performance of the switch, and is purely cosmetic.

Workaround:

None

12.2(17a)SX1 isn't listed as one of the known affected versions; however, a number of 12.2(18) releases are. So I'm assuming that the above bug applies to my switch as well and that it's nothing to worry about.

Reply to
DC
