I was wondering if anyone has used Iperf or Netperf for testing network performance over GigE? The reason for my question is that I've been doing some testing, initially with Iperf, and recently with Netperf, of GigE LAN links, and I've been finding results in the 300Mbit/sec range. The server vendor is implying that these results are not valid, and is suggesting that I do a file copy of a 36GB file instead and time it, subtracting the time for a local file copy. I don't mind doing the test they're suggesting, but I'm just wondering if there's a possibility that the numbers that I'm getting from both Iperf and Netperf are really 'off'?
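If it helps, the vendor's copy-and-time test can be scripted so it's at least repeatable. This is only a sketch with placeholder paths (the source file, network share, and local destination are all made up), converting the subtracted elapsed times to Mbit/s:

```shell
# Sketch of the vendor's test (placeholder paths -- substitute real ones).
# Copy across the network, copy locally, subtract, convert to Mbit/s.
SRC=/path/to/36GB.file                      # placeholder
BYTES=$(stat -c %s "$SRC")                  # file size in bytes
T_NET=$(  { time -p cp "$SRC" /net/share/copy; } 2>&1 | awk '/^real/ {print $2}' )
T_LOCAL=$({ time -p cp "$SRC" /tmp/copy;       } 2>&1 | awk '/^real/ {print $2}' )
echo "$BYTES $T_NET $T_LOCAL" | awk '{printf "%.0f Mbit/s\n", $1*8/($2-$3)/1e6}'
```

Keep in mind a single file copy is also bounded by disk speed on both ends, which is part of why purpose-built tools like iperf/netperf exist.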
Ya. Two Dell PowerEdge 2650 servers connected to a Nortel Baystack 5510 will run 996Mb/s all day long, jumbo frames enabled. Servers were running RedHat Enterprise. Needless to say, we use Iperf for performance tuning and testing all the time. The multicast and UDP support is great for QoS testing.
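For reference, the multicast/UDP testing mentioned above goes something like this with iperf (the group address, rate, and TTL here are just example values):

```shell
# Receiver: bind to a multicast group (example address) and report every second
iperf -s -u -B 239.1.1.1 -i 1

# Sender: offer 100 Mbit/s of UDP toward the group for 10 s, TTL 32
iperf -c 239.1.1.1 -u -b 100M -t 10 -T 32
```

The server-side report gives achieved rate, jitter, and loss, which is what you actually care about for QoS.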
As always, benchmarks measure the speed of the particular benchmark.
As regards network performance, netperf _is_ a very good tool, and it makes it easy to compare different systems. If your system only gets to 300Mbit/sec, then your system is limited to that speed. A vendor that is unable to tune it up might react with bullshit and try to shift your focus.
If you need more speed, you might need radical changes; it could be another OS, or other networking gear. But remember, changing measurement tools won't get you more performance from your application!
With the netperf testing so far, I just used the default settings. I was assuming that this should give us at least "an idea" of what the actual throughput was?
I've been using iperf more extensively, because I couldn't find netperf until a couple of days ago.
Needless to say, I was surprised with the results I got from iperf, and then when I finally got a working netperf, those numbers came in about the same.
System under test consisted of two IBM blade servers (HS40) with 4 x Xeon 3.0 GHz CPUs, 16GB of memory, and 4 x Intel/1000 NICs onboard. Connection between the blades (for these tests with netperf) was simply a fiber cross-over cable.
Thanks for the info. Actually, that gives me an idea. We have some Dell PowerEdges with GigE NICs sitting around somewhere. I'll see if I can try out Iperf and/or Netperf on them and see what I get.
I think/hope that you're aware of what I've been attempting to do, based on my earlier thread, and personally, I agree that at this point, the vendor is reacting with "b....".
Nevertheless, it looks like I'm going to have to do their "manual copy" test to satisfy them that there's a problem in the first place, even though I think that tools like Iperf and Netperf do a better job because they're specifically designed for what they do. Otherwise, so far, it doesn't look like they're going to even look into this problem.
I guess that we've all "been there, and done that" with our vendors :(...
I can't find a match on the Serverworks site for the chipset that is supposed to be on that board, but one possibility would be the "HE" chipset, which has onboard 32/33 PCI and an IMB link that allows connection of a 64/66 or PCI-X southbridge. If the Ethernet is on the 32/33 PCI, that would explain the poor performance you're seeing. Just for hohos, try each Ethernet port in turn, using the same port on both blades--it may be that one or two are on 32/33 and the others are on the fast bus. I realize it's a long shot, but it's simple and obvious.
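To put a rough number on that suspicion: plain 32-bit/33 MHz PCI tops out at about 1 Gbit/s in theory, and a shared bus rarely sustains anywhere near its peak, so a GigE NIC hanging off it would be starved well below line rate:

```shell
# Theoretical peak of 32-bit/33 MHz PCI: 4 bytes/cycle * 33e6 cycles/s * 8 bits
echo "$(( 4 * 33 * 8 )) Mbit/s peak"   # prints: 1056 Mbit/s peak
```

Real-world sustained throughput on a shared 32/33 bus is typically a fraction of that peak once arbitration and other devices are factored in.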
Also, are you _sure_ you've got a good cable?
And is there any possibility that there's a duplex mismatch? Did you connect the cable with both blades powered down? If not, it may be that the NICs did not handshake properly--they're _supposed_ to, I know, but what's supposed to happen and what does happen aren't always the same.
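If you ever get a Linux box in the loop (the blades here are on Windows, so this is only a sketch, and eth0 is a placeholder interface name), ethtool makes the negotiated state easy to check:

```shell
# Show negotiated speed, duplex, and autonegotiation state on Linux.
ethtool eth0 | egrep 'Speed|Duplex|Auto-negotiation'
# A healthy GigE link should report Speed: 1000Mb/s, Duplex: Full.
# Half duplex or 100Mb/s means the handshake went wrong.
```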
I asked IBM specifically about the interfaces, and they said they were PCI-X. Of course, they could be wrong. Also, I've already tried combinations among 4 servers.
Re. cables, I've tried several fiber cables.
Here's the page listing the driver:
formatting link
I used the one:
"Intel-based Gigabit and 10/100 Ethernet adapter drivers for Microsoft Windows 2000 and Microsoft Windows Server 2003"
See above.
That's a good hint. For the tests via a GigE switch, the servers were connected to the switch prior to power-on (no choice). For the tests via fiber cross-over cable, I plugged the fiber together after power on.
I'll try some tests powering the servers off, connecting the cables, then powering the servers on, if I can.
No, I haven't tried UDP yet. Will do that when I have time.
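When you get to the UDP testing, a pair of runs like this is a reasonable starting point with iperf (the host address and offered rate are placeholders):

```shell
# Server side
iperf -s -u

# Client side: offer 900 Mbit/s of UDP for 10 s toward the server (example IP).
# The server report shows achieved rate, jitter, and datagram loss.
iperf -c 192.168.1.2 -u -b 900M -t 10
```

UDP takes the TCP window question out of the picture entirely, which makes it a useful cross-check against the 300 Mbit/s TCP numbers.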
I started this testing with TTCP (actually a version called PCATTCP), but I was getting very inconsistent test-to-test results, so I looked for another tool. Couldn't find netperf (for Win32), so I found Iperf, and did most of the testing with that.
Then, I found an older binary for netperf, and tried that to validate the results I got from Iperf.
"Typically" (as if there really is such a thing) one wants 64KB or larger TCP windows for local gigabit. Default netperf settings simply take the system's defaults which may not be large enough for maximizing GbE throughput.
Is that 4X Intel/1000 on each blade, or are they on the chassis? Windows or Linux? I'd check CPU util if possible - although don't put _tooo_ much faith in top.
-s and -S are "test-specific" options. Help for test-specific options is displayed when you specify a test type followed by -- -h:
$ ./netperf -t TCP_STREAM -- -h
Usage: netperf [global options] -- [test options]
TCP/UDP BSD Sockets Test Options:
  -C                 Set TCP_CORK when available
  -D [L][,R]         Set TCP_NODELAY locally and/or remotely (TCP_*)
  -h                 Display this text
  -I local[,remote]  Set the local/remote IP addresses for the data socket
  -m bytes           Set the send size (TCP_STREAM, UDP_STREAM)
  -M bytes           Set the recv size (TCP_STREAM, UDP_STREAM)
  -p min[,max]       Set the min/max port numbers for TCP_CRR, TCP_TRR
  -P local[,remote]  Set the local/remote port for the data socket
  -r req,[rsp]       Set request/response sizes (TCP_RR, UDP_RR)
  -s send[,recv]     Set local socket send/recv buffer sizes
  -S send[,recv]     Set remote socket send/recv buffer sizes
For those options taking two parms, at least one must be specified; specifying one value without a comma will set both parms to that value, specifying a value with a leading comma will set just the second parm, a value with a trailing comma will set just the first. To set each parm to unique values, specify both and separate them with a comma.
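Concretely, the comma parsing for those two-parameter options works like this (host address is a placeholder):

```shell
# -s 32K       -> local send AND recv buffers both 32 KB
# -s 32K,64K   -> local send 32 KB, local recv 64 KB
# -s ,64K      -> only the local recv buffer, 64 KB
# -s 32K,      -> only the local send buffer, 32 KB
netperf -H 192.168.1.2 -- -s 32K,64K -S 64K
```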
formatting link
should be up -- I'll double-check it. While the netperf sources are up to 2.3pl1 now, which includes some non-trivial Windows re-integration, there aren't binaries for it from netperf.org/ftp.cup.
Yes. These days I call myself the "Contributing Editor" :)
Being affiliated with a vendor :) at least for the moment. I will say that the path through the stack may indeed be different for FTP than for netperf TCP_STREAM. For example, many FTPs can make use of the platform's "sendfile" call, which will send data directly from the buffer cache down the stack without copies. There will be a data copy in a netperf TCP_STREAM test. If the system is easily CPU/memory bus limited, that could make a significant difference. Of course, that is why there is a TCP_SENDFILE test in contemporary versions of netperf :) (I cannot remember whether it is coded to use TransmitFile on Windows or not - I think that change may not be there yet)
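Comparing the two paths is straightforward: run the copy path and the sendfile path back to back (the host address and fill file below are placeholders):

```shell
# Copy path: data is copied from user space down the stack.
netperf -H 192.168.1.2 -t TCP_STREAM -l 30

# sendfile path: data goes from the buffer cache down the stack without a copy.
# -F names the file whose contents are sent.
netperf -H 192.168.1.2 -t TCP_SENDFILE -F /tmp/bigfile -l 30
```

On a CPU- or memory-bus-limited box, a large gap between the two is itself diagnostic.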
Of course, it still could just be smoke or simply someone going step by step through a checklist.
I didn't realize they were running fiber. There have been cases with short cables where the receiver was being overdriven--don't know if that would produce the symptoms you're seeing though.
Cabling-Design.com Forums website is not affiliated with any of the manufacturers or service providers discussed here.