We have a Cisco 6513 router and some HP ProLiant servers running Windows 2003 with gigabit adapters. Ports are set to 1000 on the router and the servers are set to 1000/Full with flow control on auto. But when we do a Windows copy from server to server we are only getting 230 MBps at the highest throughput rate. Should we be getting more throughput than this?
It all depends on the server, backplane, protocol, etc. Are the source and destination on different switches? Are they in the same network? How much bandwidth do you have across your backbone if traffic is being routed or pushed across switches, and how much utilization is already on those links? If all of that checks out, how fast are your drives, and what is the utilization of the server itself during the transfer? Additionally, Windows copy is terrible in my experience. Have you tried FTP? How does that work for you?
What do you get with those two servers on a different gigabit switch?
It's a pretty difficult question to answer, given how many variables there are. If you're just interested in the raw throughput of the switch from the servers, disregarding disk speed etc, try testing it with iperf:
formatting link
I suggest you avoid the Java version, as it tends to max out your CPU before your network. Getting 93Mbps on a switched 100Mbit network here, btw.
An SNMP monitoring tool will work. I'm not sure of any free ones right offhand, but I'm sure others can recommend some. However, 200+ Mbps is not bad in my opinion for a Windows copy, so I would try FTP first and see how you fare. I have seen EtherChannel gig links to servers run at 2-3 gigabits per second, but for most Windows-based single-gig boxes, several hundred megabits is probably par for the course. Not to say a well-tuned box can't get to 800-900 Mbps, but that takes a powerful, well-tuned box. And to answer your question, a proper server should push over 90% utilization of its link, but when you get to gig or multi-gig, there are a lot more thresholds you start to hit, many of which are hardware.
Are you sure it's big "B"? That's megaBYTES per second. Switches are rated in bits (little b) per second. If it truly is bytes per second, you've exceeded gigabit speed (bytes x 8 = bits): 230 MBps works out to roughly 1.8 Gbps. Where are you getting your numbers from?
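The bytes-versus-bits arithmetic is easy to sanity-check. A quick sketch (the 230 MBps figure is the one from the original post):

```python
# Convert a reported transfer rate in megabytes per second (MBps)
# to megabits per second (Mbps): 1 byte = 8 bits.
def mbytes_to_mbits(rate_mbytes_per_sec):
    return rate_mbytes_per_sec * 8

rate_mbits = mbytes_to_mbits(230)  # the 230 MBps reported above
print(rate_mbits)         # 1840 Mbps, i.e. roughly 1.8 Gbps
print(rate_mbits > 1000)  # True: that would exceed a single gigabit link
```

If the number really were bytes, the transfer would be nearly double what one gigabit link can carry, which is why the units matter here.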
Unless there is some architectural component I am not aware of, this must be a source/destination server issue. I have 6500-series switches that consistently push between 2 and 8 gigabits per second without batting an eye. As for single-server bandwidth, I have seen 2-3 gigabits per second for Tivoli backup boxes (4-gig EtherChannel to the server itself), and for single-gig connections, I have seen 800-900 Mbps fairly regularly. Granted, these are almost all non-Wintel boxes, usually very large IBM nodes/system complexes or another flavor of Unix. As for a Windows server that is tuned and has good hardware, I have seen 300-400 Mbps, but have never really watched them too closely. All in all, I am guessing you have hit a threshold on your server or with whatever copy program you are using, but your performance seems within my expectations.
You are correct, it is MBytes. My initial numbers were from the network utility from HP but the later numbers are from the Iperf stats that I collected.
For maximum backup throughput you may need to adjust:
- max TCP receive window
- backup software: network block size
- backup software: tape block size
- backup software: buffer / number of blocks
Depending on your exact server model and backup hardware, 1000 Mbps may or may not be approachable.
The switch itself is not going to be a limitation.
A KEY, let me repeat, KEY, issue is the block/window size versus the round-trip time between the machines. The throughput is limited to
block size / RTT.
At 1 ms RTT with a Windows copy (absolute max block size of 64k), the throughput will be constrained to roughly
64,000,000 bytes per second.
This is called the Bandwidth Delay product.
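The bandwidth-delay arithmetic can be sketched numerically (the 64 KB window and 1 ms RTT are the figures from the example above):

```python
# Bandwidth-delay product: with a fixed send window, throughput is
# capped at window_size / round_trip_time, because the sender must
# wait a full RTT for acknowledgements before sending the next window.
def max_throughput_bytes_per_sec(window_bytes, rtt_seconds):
    return window_bytes / rtt_seconds

# 64 KB window (classic Windows TCP maximum without window scaling),
# 1 ms round-trip time.
limit = max_throughput_bytes_per_sec(65536, 0.001)
print(limit)  # 65,536,000 bytes/sec -- roughly the 64 MB/s quoted above
```

Note the limit scales inversely with RTT: the same 64 KB window over a 10 ms WAN link would cap out around 6.5 MB/s.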
If you change the block size in iperf you will be able to drive the network pretty hard.
iperf -c x.x.x.x -l 100000 (client)
iperf -s -l 100000 (server)
The default is 8k, which I would guess would not usually allow a 1G network to be saturated unless maybe you had a super machine (two:).
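Using the same window/RTT arithmetic from above, an 8 KB block at 1 ms RTT caps a single stream far below gigabit speed (a sketch; the 8k default is as stated above):

```python
# With an 8 KB block and a 1 ms round trip, a single stop-and-wait-style
# stream is limited to block_size / RTT bytes per second.
block = 8 * 1024   # 8 KB default block size, per the post above
rtt = 0.001        # 1 ms round-trip time
bytes_per_sec = block / rtt
mbits_per_sec = bytes_per_sec * 8 / 1_000_000
print(mbits_per_sec)  # ~65.5 Mbps -- nowhere near saturating 1 Gbps
```

That is why bumping the block size to 100000 bytes (as in the iperf commands above) makes such a difference on a fast LAN.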
The easiest way to find the limit of the network hardware is to add iperf sessions until the aggregate throughput stops increasing. You have then eliminated individual iperf settings or behaviours as an issue. Something will truly, let me repeat TRULY, be full up.
Even ping can be used to generate high bandwidths if you have enough of them.
c:\> for /l %i in (1, 1, 100) do start cmd /c fping x.x.x.x -s 1400 -t 0
(roughly: this launches 100 command windows, each continuously pinging with 1400-byte packets)
fping.exe is from
formatting link
The software is better than the name, which I can never remember.
Cabling-Design.com Forums website is not affiliated with any of the manufacturers or service providers discussed here.