Anyone used Iperf or Netperf w/GigE?

Hi,

I was wondering if anyone has used Iperf or Netperf for testing network performance over GigE? The reason for my question is that I've been doing some testing, initially with Iperf, and recently with Netperf, of GigE LAN links, and I've been finding results in the 300Mbit/sec range. The server vendor is implying that these results are not valid, and is suggesting that I do a file copy of a 36GB file instead and time it, subtracting the time for a local file copy. I don't mind doing the test they're suggesting, but I'm just wondering if there's a possibility that the numbers that I'm getting from both Iperf and Netperf are really 'off'?
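For anyone wanting to reproduce this kind of test, a basic iperf TCP throughput run between two hosts is just something like the following (the address here is only a placeholder):

iperf -s                            (receiver, on one server)
iperf -c 192.168.0.10 -t 60 -i 10   (sender, on the other; 60-second run with reports every 10 seconds)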

Thanks, Jim

Reply to
ohaya

Ya. Two Dell PowerEdge 2650 servers connected to a Nortel Baystack 5510 will run 996Mb/s all day long, jumbo frames enabled. The servers were running Red Hat Enterprise Linux. Needless to say, we use Iperf for performance tuning and testing all the time. The multicast and UDP support is great for QoS testing.
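For reference, the UDP and multicast runs with iperf look roughly like this (the group address, rate, and TTL below are just examples):

iperf -s -u -B 224.0.67.67 -i 1               (receiver joins the multicast group)
iperf -c 224.0.67.67 -u -b 100M -T 4 -t 30    (sender, 100 Mbit/s offered load, TTL 4, 30 seconds)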

-mike

Reply to
Michael Roberts

Yes :)

I suspect they are not off, but they may be using TCP settings that are not optimal for GigE. For example, what are you using for -s and -S as test-specific parameters in the netperf TCP_STREAM test?

Also, what sort of system are you using: the GigE card, the bus speeds and feeds, all that stuff.

rick jones

Reply to
Rick Jones

As always, benchmarks measure the speed of the particular benchmark.

As regards network performance, netperf _is_ a very good tool, allowing you to compare different systems. If your system only comes to 300Mbit/sec, then your system is limited to that speed. A vendor that is unable to tune it up might react with bullshit and try to shift your focus.

If you need more speed, you might need radical changes; it could be another OS, or other networking gear. But remember, changing measurement tools won't give you more performance from your application!

Reply to
phn

Hi Rick,

Comments below...

With the netperf testing so far, I just used the default settings. I was assuming that this should give us at least "an idea" of what the actual throughput was?

I've been using iperf more extensively, because I couldn't find netperf until a couple of days ago.

Needless to say, I was surprised with the results I got from iperf, and then when I finally got a working netperf, those numbers came in about the same.

System under test consisted of two IBM blade servers (HS40) with 4 x Xeon 3.0 GHz CPUs, 16GB of memory, and 4 x Intel/1000 NICs onboard. Connection between the blades (for these tests with netperf) was simply a fiber cross-over cable.

Jim

Reply to
ohaya

Hi,

Sorry, forgot to mention that both systems are running Windows 2003 Server.

Jim

Reply to
ohaya

Mike,

Thanks for the info. Actually, that gives me an idea. We have some Dell PowerEdges with GigE NICs sitting around somewhere. I'll see if I can try out Iperf and/or Netperf on them and see what I get.

Jim

Reply to
ohaya

Peter,

Thanks for the advice.

I think/hope that you're aware of what I've been attempting to do, based on my earlier thread, and personally, I agree that at this point, the vendor is reacting with "b....".

Nevertheless, it looks like I'm going to have to do their "manual copy" test to satisfy them that there's a problem in the first place, even though I think that tools like Iperf and Netperf do a better job because they're specifically designed for what they do. Otherwise, so far, it doesn't look like they're going to even look into this problem.

I guess that we've all "been there, and done that" with our vendors :(...

Jim

Reply to
ohaya

Hi Rick,

I can't see any -s or -S parameters? What I'm going by is a man page at:

formatting link
Also tried a "-h" and didn't see any -s or -S there?

FYI, the binaries that I have are 2.1pl1. The formatting link site doesn't seem to be working anymore, so these were the only binaries I could find for Win32, on a site in Japan, I think.

Jim

P.S. Are you "the" Rick Jones, the originator of Netperf?

Reply to
ohaya

I can't find a match on the Serverworks site for the chipset that is supposed to be on that board, but one possibility would be the "HE" chipset, which has onboard 32/33 PCI and an IMB link that allows connection of a 64/66 or PCI-X southbridge. If the Ethernet is on the 32/33 PCI, that would explain the poor performance you're seeing: a 32-bit/33MHz PCI bus tops out at roughly 133 MB/s (about 1 Gbit/s) in theory, and shared-bus overhead usually leaves considerably less than that for a single NIC. Just for hohos, try each Ethernet port in turn, using the same port on both blades--it may be that one or two are on 32/33 and the others are on the fast bus. I realize it's a long shot, but it's simple and obvious.

Also, are you _sure_ you've got a good cable?

And is there any possibility that there's a duplex mismatch? Did you connect the cable with both blades powered down? If not, it may be that the NICs did not handshake properly--they're _supposed_ to, I know, but what's supposed to happen and what does happen aren't always the same.

Reply to
J. Clarke

John,

Comments below...

Jim

I asked IBM specifically about the interface, and they said they were PCI-X. Of course, they could be wrong. Also, I've already tried various combinations among 4 servers.

Re. cables, I've tried several fiber cables.

Here's the page listing the driver:

formatting link
I used the one:

"Intel-based Gigabit and 10/100 Ethernet adapter drivers for Microsoft Windows 2000 and Microsoft Windows Server 2003"

See above.

That's a good hint. For the tests via a GigE switch, the servers were connected to the switch prior to power-on (no choice). For the tests via fiber cross-over cable, I plugged the fiber together after power on.

I'll try some tests powering the servers off, connecting the cables, then powering the servers on, if I can.

Reply to
ohaya

OK, I'm a little late to this thread, but have you tried UDP? MS-Windows used to have horrible problems setting adequate TCP-Rcv-Window sizes.
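With iperf, which you already have, a quick UDP run and a larger-TCP-window run would look something like this (the numbers are only examples):

iperf -s -u                           (receiver)
iperf -c <server> -u -b 900M -t 30    (sender, ~900 Mbit/s offered UDP load)

iperf -s -w 256K                      (receiver with a 256KB TCP window)
iperf -c <server> -w 256K -t 30       (sender with a 256KB TCP window)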

Personally, I use `ttcp` for bandwidth testing.

-- Robert

Reply to
Robert Redelmeier

Rick,

I may have been unclear about what I meant by a "manual copy" test. What they are suggesting that I do is create a 36GB file on one server, then:

- manually time a file copy from that server to the other server, and

- manually time a file copy from that server to itself, and

- subtract the times, then divide 36GB by that difference to get an effective throughput (a rough example with made-up numbers is below).
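Just to sanity-check the arithmetic with made-up numbers: if the server-to-server copy took 20 minutes and the local copy took 8 minutes, the difference is 12 minutes (720 seconds), and 36GB / 720s is 50 MB/sec, or roughly 400 Mbit/sec - the kind of number that could then be compared against the Iperf/Netperf results.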

Jim

Reply to
ohaya

Hi,

There are 4 x Intel/1000 NICs on each blade, not on the chassis. OS is Windows 2003.

Jim

Reply to
ohaya

Robert,

No, I haven't tried UDP yet. Will do that when I have time.

I started this testing with TTCP (actually a version called PCATTCP), but I was getting very inconsistent test-to-test results, so I looked for another tool. Couldn't find netperf (for Win32), so I found Iperf, and did most of the testing with that.

Then, I found an older binary for netperf, and tried that to validate the results I got from Iperf.

Jim

Reply to
ohaya

"Typically" (as if there really is such a thing) one wants 64KB or larger TCP windows for local gigabit. Default netperf settings simply take the system's defaults which may not be large enough for maximizing GbE throughput.

Is that 4X Intel/1000 on each blade, or are they on the chassis? Windows or Linux? I'd check CPU util if possible - although don't put _tooo_ much faith in top.

rick jones

Reply to
Rick Jones

-s and -S are "test-specific" options. Help for test-specific options is displayed when you specify a test type followed by "-- -h":

$ ./netperf -t TCP_STREAM -- -h

Usage: netperf [global options] -- [test options]

TCP/UDP BSD Sockets Test Options:
    -C                 Set TCP_CORK when available
    -D [L][,R]         Set TCP_NODELAY locally and/or remotely (TCP_*)
    -h                 Display this text
    -I local[,remote]  Set the local/remote IP addresses for the data socket
    -m bytes           Set the send size (TCP_STREAM, UDP_STREAM)
    -M bytes           Set the recv size (TCP_STREAM, UDP_STREAM)
    -p min[,max]       Set the min/max port numbers for TCP_CRR, TCP_TRR
    -P local[,remote]  Set the local/remote port for the data socket
    -r req,[rsp]       Set request/response sizes (TCP_RR, UDP_RR)
    -s send[,recv]     Set local socket send/recv buffer sizes
    -S send[,recv]     Set remote socket send/recv buffer sizes

For those options taking two parms, at least one must be specified; specifying one value without a comma will set both parms to that value, specifying a value with a leading comma will set just the second parm, a value with a trailing comma will set just the first. To set each parm to unique values, specify both and separate them with a comma.
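So, to illustrate (the values here are arbitrary):

netperf -H remotehost -t TCP_STREAM -- -s 131072         (local send and recv buffers both 128KB)
netperf -H remotehost -t TCP_STREAM -- -S ,131072        (only the remote recv buffer set to 128KB)
netperf -H remotehost -t TCP_STREAM -- -s 65536,131072   (local send 64KB, local recv 128KB)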

formatting link should be up - I'll double check it. While netperf sources are up to 2.3pl1 now, which includes some non-trivial Windows re-integration, there aren't binaries for it from netperf.org/ftp.cup.

Yes. These days I call myself the "Contributing Editor" :)

rick jones

Reply to
Rick Jones

Being affiliated with a vendor :) at least for the moment, I will say that the path through the stack may indeed be different for FTP than for netperf TCP_STREAM. For example, many FTPs can make use of the platform's "sendfile" call, which will send data directly from the buffer cache down the stack without copies. There will be a data copy in a netperf TCP_STREAM test. If the system is easily CPU/memory-bus limited, that could make a significant difference. Of course, that is why there is a TCP_SENDFILE test in contemporary versions of netperf :) (I cannot remember if it is coded to use TransmitFile on Windows or not - I think that change may not be there yet.)
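A back-to-back comparison would look roughly like this (hostname and buffer sizes are just placeholders; TCP_SENDFILE may also want a file specified with -F in some versions):

netperf -H remotehost -t TCP_STREAM   -l 60 -- -s 65536 -S 65536
netperf -H remotehost -t TCP_SENDFILE -l 60 -- -s 65536 -S 65536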

Of course, it still could just be smoke or simply someone going step by step through a checklist.

rick jones

Reply to
Rick Jones

I didn't realize they were running fiber. There have been cases with short cables where the receiver was being overdriven--don't know if that would produce the symptoms you're seeing though.

Reply to
J. Clarke

I think there's a version called `ttcpw`

Of course! The standard number of packets goes too quickly on Gig.

You should validate a tool against the localhost loopback interface. On my slow 500 MHz Linux box:

$ ttcp -sr & ttcp -stu -n99999 localhost
[2] 5030
ttcp-r: buflen=8192, nbuf=2048, align=16384/0, port=5001  tcp
ttcp-r: socket
ttcp-t: buflen=8192, nbuf=99999, align=16384/0, port=5001  udp  -> localhost
ttcp-t: socket
ttcp-t: 819191808 bytes in 4.05 real seconds = 197396.61 KB/sec +++
ttcp-t: 100005 I/O calls, msec/call = 0.04, calls/sec = 24676.06
ttcp-t: 0.1user 3.9sys 0:04real 100% 0i+0d 0maxrss 0+2pf 0+0csw

This is barely faster than Gig. My Athlon XP 2000+ will report 1+ GByte/sec
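The equivalent sanity check with iperf would be something along the lines of:

iperf -s                     (in one window)
iperf -c 127.0.0.1 -t 10     (in another)

If loopback itself can't do much more than 1 Gbit/s, the host CPU or the tool is likely the limit, not the GigE link.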

-- Robert

Reply to
Robert Redelmeier
