Thank you, everybody, for your support. How about I show some numbers? You'll soon see that no duplex issue is being reported by the switches.
Both client and server in this test are running Win2k3 SP1... I don't have a single Linux box in the building :(
Client = Nvidia nForce2 on-board 10/100
Server = Nvidia nForce4 on-board 10/100/1000
I'll run tests against the server twice:
1) both NIC and switchport manually set at 100/full
2) both NIC and switchport set at "auto"
Although I've gotten the same speed results on many different switches, I'll run this today
on the production stack of 3750's.
I'll use iperf, and run a set of tests in TCP, and another in UDP.
-------------------------
Stack Info
-------------------------
3750Stack# sh ver
Cisco Internetwork Operating System Software
IOS (tm) C3750 Software (C3750-I5-M), Version 12.1(19)EA1d, RELEASE SOFTWARE (fc1)
Copyright (c) 1986-2004 by cisco Systems, Inc.
Compiled Mon 05-Apr-04 22:06 by antonino
Image text-base: 0x00003000, data-base: 0x009206D8
ROM: Bootstrap program is C3750 boot loader
BOOTLDR: C3750 Boot Loader (C3750-HBOOT-M) Version 12.1(14r)EA1a, RELEASE SOFTWARE (fc1)
(...)
Switch Ports Model SW Version SW Image
------ ----- ----- ---------- ----------
     1    28 WS-C3750G-24TS     12.1(19)EA1d  C3750-I5-M
     2    52 WS-C3750-48P       12.1(19)EA1d  C3750-I9-M
     3    52 WS-C3750-48P       12.1(19)EA1d  C3750-I9-M
     4    52 WS-C3750-48P       12.1(19)EA1d  C3750-I9-M
     5    52 WS-C3750-48P       12.1(19)EA1d  C3750-I9-M
     6    52 WS-C3750-48P       12.1(19)EA1d  C3750-I5-M
(...)
3750Stack# sh run
(...)
!
ip subnet-zero
ip routing
!
ip host vg 192.168.8.3
ip host rld-1760 192.168.8.2
ip host rld-3725 192.168.8.1
ip name-server 192.168.1.5
mls qos map cos-dscp 0 8 16 26 34 46 48 56
mls qos map ip-prec-dscp 0 8 16 26 34 46 48 56
mls qos
!
!
spanning-tree mode pvst
spanning-tree loopguard default
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
!
(...)
!
interface GigabitEthernet1/0/24
 description ------------ SERVER PORT -----
 switchport access vlan 105
 switchport mode access
 no ip address
 no mdix auto
 storm-control broadcast level 2.00
 storm-control multicast level 2.00
!
(...)
!
interface FastEthernet5/0/24
 description ------------ CLIENT PORT -----
 switchport access vlan 100
 switchport mode access
 switchport voice vlan 200
 no ip address
 no mdix auto
 storm-control broadcast level 2.00
 storm-control multicast level 2.00
 spanning-tree portfast
 spanning-tree bpduguard enable
!
(...)
!
interface Vlan1
 no ip address
 shutdown
!
interface Vlan100
 ip address 192.168.0.1 255.255.248.0
 ip helper-address 192.168.1.5
!
interface Vlan105
 ip address 10.30.5.1 255.255.255.0
 ip helper-address 192.168.1.5
!
interface Vlan200
 ip address 192.168.8.4 255.255.248.0
 ip helper-address 192.168.1.5
!
(...)
-----------------
Stack Notes
-----------------
There are VoIP phones in use. The stack does inter-VLAN routing:
3750Stack# sh ip route
(...)
C    10.30.5.0 is directly connected, Vlan105
S*   0.0.0.0/0 [1/0] via 192.168.0.2
C    192.168.8.0/21 is directly connected, Vlan200
C    192.168.0.0/21 is directly connected, Vlan100
(...)
Client IP address = 192.168.1.6
Server IP address = 10.30.5.5
------------------
iperf, 100/full 100/full
------------------
The results of this test will surprise no one:
C:\>iperf -f m -B 192.168.1.6 -c 10.30.5.5 -r
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 192.168.1.6
TCP window size: 0.01 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.30.5.5, TCP port 5001
Binding to local address 192.168.1.6
TCP window size: 0.01 MByte (default)
------------------------------------------------------------
[1192] local 192.168.1.6 port 3523 connected with 10.30.5.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[1192]  0.0-10.0 sec  96.8 MBytes  81.1 Mbits/sec
[1948] local 192.168.1.6 port 5001 connected with 10.30.5.5 port 6025
[ ID] Interval       Transfer     Bandwidth
[1948]  0.0-10.0 sec  98.2 MBytes  82.4 Mbits/sec
See, TCP is about 80 Mbit/sec, and symmetrical. Looks good. Let's try UDP:
C:\>iperf -f m -B 192.168.1.6 -c 10.30.5.5 -r -u
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 192.168.1.6
Receiving 1470 byte datagrams
UDP buffer size: 0.01 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.30.5.5, UDP port 5001
Binding to local address 192.168.1.6
Sending 1470 byte datagrams
UDP buffer size: 0.01 MByte (default)
------------------------------------------------------------
[1192] local 192.168.1.6 port 3601 connected with 10.30.5.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[1192]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[1192] Server Report:
[1192]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec  0.000 ms    0/  893 (0%)
[1192] Sent 893 datagrams
[1244] local 192.168.1.6 port 5001 connected with 10.30.5.5 port 6258
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[1244]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec  0.977 ms    0/  894 (0%)
Whoa! Only 1.05 Mbit/sec for UDP? (This is still 100/full 100/full.) Maybe I just don't know enough about UDP and/or iperf? (Today was the first time I've used iperf; up till now I had been using robocopy.)
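For what it's worth, this looks like iperf behaving as designed: in UDP mode iperf sends at a fixed target rate, which defaults to 1 Mbit/sec unless you pass -b (e.g. iperf -u -b 90M ...), and the 0% loss column says the network wasn't the limit. The reported rate even checks out against iperf's own datagram count:

```python
# Sanity-check iperf's reported UDP rate against its datagram count:
# 893 datagrams of 1470 bytes sent over the 10-second test.
datagrams = 893
datagram_bytes = 1470
seconds = 10.0

rate_mbit = datagrams * datagram_bytes * 8 / seconds / 1e6
print(f"{rate_mbit:.2f} Mbit/sec")  # prints "1.05 Mbit/sec"
```

In other words, iperf was pacing itself at its default UDP rate; rerunning with something like -b 90M would actually load the link.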
-----------------------------
Let's now do the (server) GigabitEthernet (client) FastEthernet tests.
-----------------------------
But first, clear the switch counters.
3750Stack# clear counters gi1/0/24
3750Stack# clear counters fa5/0/24
...uh-oh. Running iperf as client from the "client" box results in a connection timeout (not connection refused, as happens when I forget to start iperf as server). A timeout rather than a refusal means the SYN is being silently dropped somewhere; my guess is a host firewall on the server.
...ok, I'll do iperf -s on the client instead... we are only going to see the "slow" path.
C:\>iperf -f m -c 192.168.1.6
------------------------------------------------------------
Client connecting to 192.168.1.6, TCP port 5001
TCP window size: 0.01 MByte (default)
------------------------------------------------------------
[1908] local 10.30.5.5 port 4608 connected with 192.168.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[1908]  0.0-10.1 sec  1.74 MBytes  1.45 Mbits/sec
C:\>
Yup, slow as Ethernet over carrier pigeon. Now, let's see some interface counters:
3750Stack# sh interfaces gi1/0/24
GigabitEthernet1/0/24 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is 0011.bb99.2598 (bia 0011.bb99.2598)
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is RJ45
  output flow-control is off, input flow-control is off
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:01, output hang never
  Last clearing of "show interface" counters 00:05:31
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     2126 packets input, 2261304 bytes, 0 no buffer
     Received 11 broadcasts (0 multicast)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     1690 packets output, 300868 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out
3750Stack#
3750Stack# sh interfaces fa5/0/24
FastEthernet5/0/24 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 0011.20a7.6b1a (bia 0011.20a7.6b1a)
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s
  input flow-control is off, output flow-control is off
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:21, output 00:00:57, output hang never
  Last clearing of "show interface" counters 00:17:29
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 1000 bits/sec, 1 packets/sec
  5 minute output rate 48000 bits/sec, 14 packets/sec
     3232 packets input, 437440 bytes, 0 no buffer
     Received 31 broadcasts (0 multicast)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 18 multicast, 0 pause input
     0 input packets with dribble condition detected
     13545 packets output, 7019546 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out
I'm no CCNA... but the above tells me there is no duplex issue, and no dropped packets either?!? In fact, I'm starting to think the collisions I had been seeing were generated by some other device. (I found out yesterday there are some old printers around that operate at 10/half.)
I don't understand WHY I can't get iperf to run in this, the test I wanted you all to see.
I'll skip the UDP test this time; you can already see how bad the situation is, with the server transmitting TCP to the client at a rate of 1.45 Mbits/sec.
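One back-of-the-envelope observation (my own, assuming the "0.01 MByte (default)" iperf reports is the old ~8 KB Windows default window): with a fixed window, TCP throughput is capped at roughly window / RTT, so with that little data in flight every stall hurts badly:

```python
# TCP throughput ceiling ~= window / RTT (bandwidth-delay-product limit).
# Window assumed ~8 KB from iperf's "0.01 MByte (default)" report.
window_bytes = 8 * 1024

for rtt_ms in (0.5, 1.0, 5.0, 45.0):
    ceiling_mbit = window_bytes * 8 / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:5.1f} ms -> ceiling {ceiling_mbit:6.2f} Mbit/sec")
```

The ~80 Mbit/sec symmetric result is consistent with a sub-millisecond LAN RTT, while 1.45 Mbit/sec is what that window would deliver if the *effective* RTT were pushed up toward ~45 ms, e.g. by retransmission stalls after drops at the GigE-to-FastE speed step. That is speculation on my part, not something the counters above prove.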
I want to demo the asymmetry... shame iperf could not connect both ways... Let me flip back to robocopy:
-------------------------------------------------------------------------------
   ROBOCOPY     ::     Robust File Copy for Windows     ::     Version XP010
-------------------------------------------------------------------------------
Started : Tue May 09 13:11:59 2006
 Source : \\Foo
   Dest : \\Foo
Files : *.*
Options : *.* /S /E /COPY:DAT /MOVE /R:1000000 /W:30
------------------------------------------------------------------------------
New Dir 1 \\Foo
100% New File 32.6 m BigFile.mp3
------------------------------------------------------------------------------
               Total    Copied   Skipped  Mismatch    FAILED    Extras
    Dirs :         1         1         0         0         0         0
   Files :         1         1         0         0         0         0
   Bytes :   32.68 m   32.68 m         0         0         0         0
   Times :   0:00:04   0:00:04   0:00:00   0:00:00
   Speed :             6920528 Bytes/sec.
   Speed :             395.995 MegaBytes/min.
Ended : Tue May 09 13:12:04 2006
-------------------------------------------------------------------------------
   ROBOCOPY     ::     Robust File Copy for Windows     ::     Version XP010
-------------------------------------------------------------------------------
Started : Tue May 09 13:12:04 2006
 Source : \\Foo
   Dest : \\Foo
Files : *.*
Options : *.* /S /E /COPY:DAT /MOVE /R:1000000 /W:30
------------------------------------------------------------------------------
New Dir 1 \\Foo
100% New File 32.6 m BigFile.mp3
------------------------------------------------------------------------------
               Total    Copied   Skipped  Mismatch    FAILED    Extras
    Dirs :         1         1         0         0         0         0
   Files :         1         1         0         0         0         0
   Bytes :   32.68 m   32.68 m         0         0         0         0
   Times :   0:01:19   0:01:19   0:00:00   0:00:00
   Speed :              433561 Bytes/sec.
   Speed :              24.808 MegaBytes/min.
Ended : Tue May 09 13:13:23 2006
So while iperf could not transfer both ways, Windows file copying could.
Client --> Server = 395.995 MegaBytes/min.
Server --> Client = 24.808 MegaBytes/min.
Note that when I do robocopy between 100/full and 100/full... robocopy gets 591 MBytes/min in both directions. So even though FastE --> GigE doesn't suck as much as GigE --> FastE... FastE --> FastE is giving me the best performance.
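To put the robocopy numbers on the same scale as the iperf results (robocopy's "MegaBytes" appear to be 1,048,576 bytes: that's the only divisor under which its 6,920,528 Bytes/sec and 395.995 MegaBytes/min figures agree):

```python
# Convert robocopy's MegaBytes/min figures to Mbit/sec for comparison
# with iperf. Assumes robocopy's "MegaByte" = 1,048,576 bytes, which is
# consistent with its own Bytes/sec vs MegaBytes/min output above.
MIB = 1048576

def mb_per_min_to_mbit(mb_min):
    return mb_min * MIB * 8 / 60 / 1e6

for label, rate in (("Client --> Server (into GigE)", 395.995),
                    ("Server --> Client (out of GigE)", 24.808),
                    ("100/full <--> 100/full", 591.0)):
    print(f"{label}: {mb_per_min_to_mbit(rate):5.1f} Mbit/sec")
```

That works out to roughly 55 Mbit/sec into the GigE port, about 3.5 Mbit/sec out of it, and about 83 Mbit/sec for 100/full to 100/full, which matches the ~81 Mbit/sec iperf measured on the all-FastE path.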
Comments?
Merv, if you are a "couple hops away" then you are further away than I am. Remember, in the above there is NO separate router between client and server; the 3750 stack itself does the inter-VLAN routing, and there is no "packet buffering" as was pointed out to me above. I'll still be placing an L3 router between the two on Sunday to see what happens. My goal here is to avoid a separate L2 fabric dedicated to gigabit... that seems to be what you already have, no?