determining speed issues

Hi All,

I suspect I have speed issues in my network, and I have started the chore of checking whether my suspicions are correct. I am using iperf between my desktop and several servers. The hub of my network is a 4507R with all gigabit interface cards. Connected to it via a 2 Gb EtherChannel are two stacked GigE 3750 switches, onto which all of my servers connect. What I have seen so far is that when I run tests originating from my desktop to different servers, I am seeing roughly 83% of the GigE link used. Coming back, though, I am consistently seeing much lower speeds, probably 40% of the link. Below are some tests I have done:

from my desktop to our exchange server:

E:\temp>iperf.exe -c
------------------------------------------------------------
Client connecting to , TCP port 5001
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[108] local port 51286 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[108]  0.0-10.0 sec   969 MBytes   813 Mbits/sec

from a linux server to exchange server:

nagios# iperf -c
------------------------------------------------------------
Client connecting to 192.168.20.34, TCP port 5001
TCP window size: 126.0 KByte (default)
------------------------------------------------------------
[ 3] local port 56182 connected with port 5001
[ 3]  0.0-10.0 sec  1020 MBytes   892 Mbits/sec

from the exchange server to my desktop:

C:\Temp>iperf.exe -c -w 63k
------------------------------------------------------------
Client connecting to , TCP port 5001
TCP window size: 63.0 KByte
------------------------------------------------------------
[1908] local port 19240 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[1908]  0.0-10.0 sec   584 MBytes   489 Mbits/sec
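
One variable I still want to rule out is the 63 KByte TCP window on the Windows side. Roughly something like the following, assuming iperf 2.x on both ends (the address is a placeholder I would fill in, and 256k / 4 streams are just ballpark values):

on the desktop, run the server side with a larger window:
C:\Temp>iperf.exe -s -w 256k

on the exchange server, push toward the desktop with the same window and a few parallel streams:
C:\Temp>iperf.exe -c <desktop-ip> -w 256k -P 4 -t 30

If the reverse direction still sits around 40% of the link with a bigger window and multiple streams, the window size probably isn't the limit.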

Am I interpreting this info correctly? If so, where should I start in trying to figure out what may be going on?

TIA,

R
Reply to
rhltechie

It could be that the PC chokes on input, i.e. the desktop can't keep up on the receive side.
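
A quick way to check for that (assuming the desktop is Windows, as the E:\ and C:\ prompts suggest) is to compare the NIC's receive error and discard counters before and after a test run:

C:\>netstat -e
rem look at the Discards and Errors values in the "Received" column
rem run it again after an iperf test; a climbing count points at the PC/NIC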

But since most networks have dominant traffic flow from servers to clients, maybe you have contending server flows sharing the trunk from the server side?
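
If you can get on the switches, a rough way to see whether the trunk is the choke point (the channel group number and member port below are guesses, adjust to your setup):

3750-stack# show etherchannel summary
3750-stack# show etherchannel load-balance
3750-stack# show interfaces port-channel 1 | include rate|drops
3750-stack# show interfaces gi1/0/25 | include rate|drops

The per-member counters matter because any single flow only ever hashes onto one member link, so one leg of the channel can be saturated while the other sits idle.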

From other networks I would expect the EtherChannel to act as a bottleneck all on its own if you have a reasonable number of servers, and the effective limit could be worse than 2 Gbps depending on where it is plugged into the 4507.

Some GigE blades in a 4500 are heavily oversubscribed internally, since each blade only gets 6 Gbps to the Sup (and even then doesn't share it all that evenly).
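
To see what you are actually working with on the 4507, something like this (the module/port numbers are placeholders for the linecard holding the channel and the port toward the desktop):

4507R# show module
4507R# show interfaces gi3/1 | include rate|drops

show module will tell you which GigE linecards you have, and steadily climbing output drops on the ports toward the desktop or the channel would support the oversubscription theory.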

Reply to
stephen
