What LAN speeds can I expect?

We just upgraded the office network, pulling new Cat 5e cable over the suspended ceiling and into the walls. All connections use quality hardware. The gigabit switch is a LinkSys SR2024.

The computers are 1-year-old Dell desktops running a mix of Win XP and 2K. These have Intel brand gigabit NICs installed.

There is nothing between computers on the LAN except the SR2024.

When copying a test file between computers, we're seeing 10-12 megabytes/second. This isn't anywhere near what I was expecting. I realize that full gigabit speeds are theoretical and reserved for optimized systems with the fastest buses and hard drives, but I think my numbers are a tad low.

What speeds can we expect? What is slowing down this new network?

Thanks.

Reply to
Stephanie

First, make sure you are using a single large file, rather than a folder full of small files, to minimize the effect of per-file overhead in the filesystem. Also, make sure the file is contiguous, although a newly-created file on a HD with lots of empty space almost surely is. And make sure the file is large enough that you can measure the transfer time with a watch (and then hand-calculate the STR, or Sustained Transfer Rate); some apps are kinda sloppy about reporting STR.
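The hand calculation above is just file size divided by elapsed time. A minimal sketch, with made-up example numbers chosen to land near the speeds reported in the question:

```python
# Sketch: hand-calculating STR (Sustained Transfer Rate) from a stopwatch
# measurement, as suggested above. The file size and elapsed time here are
# made-up illustrative numbers, not measurements from this thread.

def str_mb_per_s(file_bytes: int, seconds: float) -> float:
    """Sustained transfer rate in megabytes/second (1 MB = 10**6 bytes)."""
    return file_bytes / seconds / 1e6

# e.g. a 700 MB file copied in a stopwatch-timed 60 seconds:
rate = str_mb_per_s(700_000_000, 60.0)
print(f"{rate:.1f} MB/s")  # 11.7 MB/s, in the range reported above
```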

Next, do your testing with only one OS, either XP or W2K; I've seen great differences between mixed and pure OS results. {If achieving maximum STR is really important, you should plan on replacing all W2K boxes with XP boxes, and adding RAM. Did I mention Vista? Not.}

Next, note that there are four cases, and potentially four different STRs: A pushing a file to B, A pulling a file from B, B pushing to A, and B pulling a file from A.

Next, repeat the measurements several times, and use the highest STR as the basis for comparison when you try something different.

Next, eliminate the switch by connecting a pair of your fastest XP PCs with a crossover cable in place of the switch. Repeat the STR measurements to see the "insertion loss" caused by the switch. I have no clue about that particular switch, but it has to buffer at least a part of every incoming packet to decide how to handle that packet, and that delays packet delivery, even without congestion effects.

Finally, experiment with tuning the network drivers. Most importantly, pick a size for jumbo frames and use the same size for your entire LAN. Increasing the number of Xmt and Rcv descriptors might help, but they chew up RAM on the PCs. I suspect that you want both Xmt and Rcv flow control turned on, but you might experiment with that.

Good luck. And please report back what you find and what you achieve.

Reply to
Bob Willard

In comp.dcom.lans.ethernet Stephanie wrote in part:

In PCI or PCI-e slots? PCI (no e) bursts at 133 MB/s but has short bursts and high setup overhead. 30-40 MB/s is common.
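The 133 MB/s figure falls straight out of the bus geometry, a quick sanity check:

```python
# Back-of-the-envelope check of the PCI figure above: conventional PCI is
# 32 bits wide clocked at 33 MHz (nominally 33.33 MHz, rounded down here).
bus_width_bytes = 32 // 8      # 4 bytes transferred per clock in a burst
clock_hz = 33_000_000          # 33 MHz; 33.33 MHz gives the quoted 133 MB/s
burst_mb_s = bus_width_bytes * clock_hz / 1e6
print(burst_mb_s)  # 132.0 MB/s theoretical burst rate
```

And since the disk controller shares that same bus with the NIC, a file copy pays the bus toll twice.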

These sound like 100baseTX speeds. Are you sure your cabling is good (i.e. not homemade)? Do not use files for transfer; the disk controller also sits on the PCI bus! Try `ttcp` or another memory-based source/sink.

-- Robert

Reply to
Robert Redelmeier

Stephanie hath wroth:

Your question has nothing to do with wireless. Why post to a wireless newsgroup?

Are you using Vista? The current version is known to have a file copy performance problem. One of many solutions:

while waiting for Vista SP1.

For proper benchmarking, go thee unto:

and docs at:

Download IPerf 1.7.0. Designate one machine as the "server" and run: iperf -s on it. Note its IP address. You'll need it.

The other machines will be designated "clients". On them, run: iperf -c ip_address_of_server
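If you end up running iperf on several client machines, it helps to pull the bandwidth figure out of each summary line rather than eyeballing it. A small sketch (the sample line mimics iperf 1.7's plain-text output; the parsing approach is my own, not part of iperf):

```python
import re

def parse_bandwidth(line: str) -> float:
    """Return the bandwidth from an iperf summary line, in Mbits/sec."""
    m = re.search(r"([\d.]+)\s*([KMG])bits/sec", line)
    if not m:
        raise ValueError("no bandwidth figure found in line")
    value, unit = float(m.group(1)), m.group(2)
    return value * {"K": 1e-3, "M": 1.0, "G": 1e3}[unit]

sample = "[1908]  0.0-10.0 sec  1.09 GBytes  934 Mbits/sec"
print(parse_bandwidth(sample))  # 934.0
```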

I'm not at my palatial office at this time and do not have a gigabit LAN handy to give you my numbers. As I recall, I was getting about 40 Mbits/sec between Linux and XP boxes, with no optimization or tweaking, through a Linksys SD2005 switch. To do better, I had to tweak IP parameters per:

You're not the first person to blunder into the optimization problem. See this review of a gigabit ethernet switch, where the author didn't optimize the test machines and got similar results to yours:

I found some results for just the gigabit ethernet cards and Iperf at:

Note the differences in performance with jumbo packets and a large MTU.

The first test you should make is with just the two machines, using a crossover ethernet cable. Zero switching hardware and CAT5 spaghetti in the loop. Just the two machines. If one of your machines is slothish, buggish, defectish, or pre-occupied with doing updates, this will show the problem.

Once you get reasonable numbers for just the test machines, add in the Linksys SR2024 switch and watch them drop slightly.

Reply to
Jeff Liebermann

Apologies abound. Oversight when posting to my "Networks" subscription which includes the Wireless NG. Now removed...

No, thank God. We haven't Gone There (yet...)

Jeff, thank you very much for your -- usual -- overattention to details and abundant pointers to resources heretofore unknown. These will be helpful in our verification & tweaking of the net.

S.

Reply to
Stephanie

Do you mean patch cables (between network jack & PC or switch)? No, store-bought "Gigabit certified" says the label :-)

Thank you. S.

Reply to
Stephanie

Stephanie wrote in part:

Most likely good. It really isn't that hard to do, but there are some really simple and apparently logical errors to make with DIY wiring.

But if the jacks or in-wall wiring are old (not at least Cat5), then gig will silently fall back to 100. Autonegotiation incompatibilities are also possible.

I'd really like to see numbers above 12.5 MB/s (say 15 MB/s) before saying you've actually got gig running on that link. If you must do a file copy, try one 50-500 MB file between defragged machines and report the _second_ run (when the outgoing file should be in OS buffers).
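The 12.5 MB/s threshold isn't arbitrary; it is exactly the Fast Ethernet line rate expressed in bytes, so any sustained rate meaningfully above it cannot be a 100 Mbit link:

```python
# Why 12.5 MB/s is the giveaway figure: it is 100 Mbit/s converted to
# megabytes per second. A gigabit link tops out ten times higher.
fast_ethernet_bits_per_s = 100_000_000
ceiling_mb_s = fast_ethernet_bits_per_s / 8 / 1e6
print(ceiling_mb_s)  # 12.5 -- the 10-12 MB/s reported sits just under this
```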

-- Robert

Reply to
Robert Redelmeier

PCI "1X" (32-bit, 33 MHz), but PCI can be rather faster. And then there is PCI-X.

My flow chart of things to consider:

1) make sure the links are indeed coming up at gigabit speeds
2) check for lost packets and retransmissions (TCP and link-level stats)
3) run something that does not include filesystem overhead - e.g. netperf TCP_STREAM
4) if netperf TCP_STREAM is no faster:

Keep in mind that in broad handwaving terms, it takes just as many CPU cycles to send a given quantity of data over a gigabit ethernet network as it does over a 100BT network. If, on your 100BT setup you only had 20% idle CPU available, you aren't going to go all that much faster when you switch to Gigabit.

5) check the CPU utilization of _each_ CPU in the system(s) involved in the transfer - if any one of them is > oh 90% you probably have a CPU bottleneck.

Others may have already mentioned that there can be specific features of a gigabit ethernet _card_ that might allow it to transfer data with fewer CPU cycles - JumboFrames is one such feature, as is ChecKsum Offload (CKO). JumboFrames have to be enabled on both ends of the connection and on _everything_ in between (eg the switches) or it won't do any good (or even work perhaps).
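The per-frame savings from jumbo frames can be estimated from the fixed header sizes. A rough sketch (it ignores the Ethernet preamble and inter-frame gap, and assumes plain TCP/IPv4 with 20-byte headers each):

```python
# Rough payload efficiency of a TCP/IPv4 transfer at a given MTU.
# Assumes 18 bytes of Ethernet header + FCS and 40 bytes of IP + TCP
# headers per frame; preamble and inter-frame gap are ignored.

def payload_efficiency(mtu: int) -> float:
    eth_overhead = 18        # Ethernet header (14) + FCS (4)
    payload = mtu - 40       # MTU minus IP (20) and TCP (20) headers
    return payload / (mtu + eth_overhead)

print(round(payload_efficiency(1500), 4))  # 0.9618 for standard frames
print(round(payload_efficiency(9000), 4))  # 0.9936 for 9000-byte jumbos
```

The header savings are modest; the bigger practical win is that 9000-byte frames mean roughly one sixth as many frames, hence far fewer per-packet interrupts and CPU cycles.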

rick jones

Reply to
Rick Jones

Couldn't Stephanie just open Windows Task Manager, select the networking tab and see what speed she's connected at?

Reply to
Marris

Marris wrote in part:

Perhaps. I don't use MS-Windows when I can avoid it.

IIRC TaskMan.exe in MS-WindowsXP and later has a networking tab, but it is an adaptive display and I have no idea if it shows link protocol (MII info). But if it shows any peaks above 12.5 MB/s, she probably has gigE.

-- Robert

Reply to
Robert Redelmeier

On Tue, 4 Mar 2008 23:15:40 -0600, Stephanie wrote:

It also depends upon the hard drive(s) involved on each end; there are not a lot of desktop hard disk setups capable of saturating a gig-E connection one way, much less in both directions.

If the local hard drive can't be read from or written to at 125MB/s, then you can't expect it to pump data over the network that quickly either. Most desktop drives fall far short of that. Next stack on variables like file system cluster size, overhead of virus scanning software, firewalls, etc.
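A copy runs at the speed of the slowest stage in the chain. A sketch with illustrative rates (the disk figures are assumed, plausible numbers for a desktop drive of that era, not measurements from this thread):

```python
# The file copy is capped by the slowest stage, as described above.
# All rates in MB/s; the disk figures are assumed examples.
gig_e_ceiling = 125.0   # raw gigabit Ethernet payload ceiling
disk_read = 65.0        # assumed: typical desktop drive sustained read
disk_write = 55.0       # assumed: typical desktop drive sustained write

bottleneck = min(gig_e_ceiling, disk_read, disk_write)
print(bottleneck)  # 55.0 -- the receiving drive, not the network
```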

File copying might have been useful for testing network throughput back when networks were slower than local storage I/O, but it's not with GigE in general. You would typically need a pretty good RAID setup with a fair number of spindles to get there.

Your numbers didn't measure the network, but rather the slowest link in the chain, which was likely the write capability of the drive receiving the copy of the file.

There may be nothing slowing down the network. Do a network throughput test using a piece of software designed for that job, one that sends/receives data memory-to-memory with no disk involved directly. Ideally, you should be able to get 125MB/s in either direction, and quite close to double that when testing bidirectionally.

Reply to
Randy Howard

someone else mentioned this sounds like 100 Mbps saturation.

1000 Base-T requires that all 4 pairs be wired correctly

if you buy patch leads with only 2 pairs, or only punch down 2 pairs on the fixed wiring, then the interfaces will negotiate down to 100 Base-Tx.

if all 4 pairs are there, they need to be consistently wired so the same pair is connected to the correct pins.

a good cable tester is needed to really check that wiring conforms to spec and is wired up correctly - and in practice "quality hardware" doesn't mean as much as "quality installation".

finally - all the official certification schemes for wiring require that it is tested properly after install......

run some test software that just transfers packet data - that way you find out if a disk somewhere is the limiting factor...

Reply to
stephen

Jeff Liebermann hath wroth:

(...)

I ended up doing a weekend service call (grrrr...) and took a few minutes to benchmark a gigabit ethernet system. The client and server are both Dell Optiplex 755 machines (2.6GHz Core2Duo, 1333 MHz FSB, 2GB DDR2, SATA-II HD, etc). Both running XP SP2 with the latest band-aids. Very nice fast machines (I want one). The switch is a bottom of the line DLink DGS-2205 5 port switch.

On the server, I ran:
| iperf -s -M 100000 -w 64K -l 24K
for:
| Packet size = 1514 bytes
| Window size = 64 KBytes
| Buffer size = 24 KBytes

On the client, I ran:
| C:\>iperf -c 192.168.1.100 -M 100000 -w 64K -l 24K
| ------------------------------------------------------------
| Client connecting to 192.168.1.100, TCP port 5001
| TCP window size: 64.0 KByte
| ------------------------------------------------------------
| [1908] local 192.168.1.101 port 2362 connected with 192.168.1.100 port 5001
| [ ID]  Interval       Transfer      Bandwidth
| [1908] 0.0-10.0 sec   1.09 GBytes   934 Mbits/sec
| [1908] 0.0-10.0 sec   1.06 GBytes   905 Mbits/sec
| [1908] 0.0-10.0 sec   1.08 GBytes   929 Mbits/sec

This is almost wire speed and about as good as TCP gets. UDP would be slightly faster.

I then ran it again using the default iperf parameters:
| C:\>iperf -c 192.168.1.100
| ------------------------------------------------------------
| Client connecting to 192.168.1.100, TCP port 5001
| TCP window size: 8.00 KByte (default)
| ------------------------------------------------------------
| [1908] local 192.168.1.101 port 2378 connected with 192.168.1.100 port 5001
| [ ID]  Interval       Transfer      Bandwidth
| [1908] 0.0-10.0 sec   639 MBytes    535 Mbits/sec

Yech. That's about half the rated speed and rather insipid. Big buffers are helpful. I didn't have time to try jumbo packets.
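The window-size effect has a simple model behind it: TCP throughput is capped at roughly window_size / round_trip_time. A sketch, where the LAN round-trip time is an assumed figure, not one measured in this test:

```python
# TCP throughput ceiling imposed by the receive window: at most one
# window's worth of data can be in flight per round trip.

def max_throughput_mbit(window_bytes: int, rtt_s: float) -> float:
    """Window-limited TCP throughput ceiling in Mbits/sec."""
    return window_bytes * 8 / rtt_s / 1e6

rtt = 0.0001  # assumed 100-microsecond RTT across a switched LAN
print(round(max_throughput_mbit(8 * 1024, rtt)))   # 655 -- 8 KB default caps below wire speed
print(round(max_throughput_mbit(64 * 1024, rtt)))  # 5243 -- 64 KB no longer the bottleneck
```

With the 8 KB default the window itself sits near the observed 535 Mbits/sec; at 64 KB the window ceiling is far above gigabit wire speed, which is why the big-buffer run got 934 Mbits/sec.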

Reply to
Jeff Liebermann
