How to get 1 + 1 = 2Gbps?

Hi,

I have a Linux SAMBA server and a Windows XP client. Both have dual gigE NICs which I want to bundle.

What type of gigabit ethernet switch do I need to get a 2Gbps transfer rate between the server and the client?

Is it enough that the switch simply supports the 802.1ad LACP protocol?

I tried googling but ended up confused between the various terms: port teaming, bonding, link aggregation, 802.1ad, etherchannel, PAgP and so on.

I do not need link redundancy or fault tolerance, I need pure raw speed between the server and client.

The server runs Fedora Core 4 and the client has Intel PRO software for bundling NIC ports. Assume also that both server and client have enough horsepower to stream 2Gbps.

My budget is around USD1000 for the switch. (10G unfortunately costs upwards of USD7000 at this time; SMC has a nice 8-port 10G switch on their website.)

Some promising gigE switches appear to be:

D-Link DGS-1216T, 16-port smart L2
SMC SMCGS16-smart, 16-port smart L2 (silent: has no fan)
SMC SMC8612T, 12-port managed L2
Netgear GS716T, 16-port smart L2
Netgear GSM7212, 12-port managed L2
Dell PowerConnect 2716, 16-port smart L2 (silent: has no fan? not sure)
Dell PowerConnect 5324, 24-port managed L2

All of the above switches support IEEE 802.1ad LACP according to their respective manufacturers' websites.

Also, is a "smart" switch (i.e. a web-browser management interface with some simple functions) adequate for what I want to do, or do I really need a true "managed" switch?

Any comments regarding the above (or comparable) switches are also welcome; I haven't found any decent reviews so far.

TIA Jeff

Reply to
jeff000069

A small correction: in my original post I referred to 802.1ad; I meant, of course, 802.3ad, the Link Aggregation Control Protocol.

Sorry about that.

Jeff

Reply to
jeff000069

EtherChannel and PAgP are Cisco-proprietary technology. If all you need to do is establish trunks and you don't care about detecting misconfigured trunks and so on, you don't even need LACP. All you need to be able to do on the switch is create a static trunk, with the ports in the aggregation assigned to it through manual configuration. Any switch that does LACP should be able to support the creation of manual trunks, but you might want to double-check that.
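On the Linux server in this thread, either flavour of trunk maps onto the bonding driver. Here is a minimal sketch of what that could look like on a Fedora Core 4 era system, assuming two slave NICs named eth0/eth1, an illustrative address of 192.168.1.10, and the stock initscripts layout; treat the exact file paths and option names as assumptions to check against your kernel's bonding documentation:

    # /etc/modprobe.conf -- load the bonding driver for bond0.
    # mode=802.3ad needs an LACP-capable switch with LACP enabled on the
    # two ports; for a manually configured static trunk on the switch,
    # mode=balance-xor is the usual counterpart.
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    # (ifcfg-eth1 is identical apart from DEVICE=eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Note that miimon=100 only polls link state every 100 ms; it has nothing to do with how traffic is balanced across the links.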

This is where things get a bit sticky. In order to maintain packet ordering within a "flow" so that you don't compromise TCP performance, switches are required to put all packets of the same "flow" on the same link within the trunk. To identify flows, they may use any of the following, including combinations of them - MAC SA, MAC DA, IP SA, IP DA, TCP/UDP Src Port, TCP/UDP Dst Port. So, for example, if your switch does this based only on the MAC SA and/or MAC DA, or even the IP SA and/or IP DA, you wouldn't get any load balancing on the hop from the switch to the client, or the switch to the server, even though the client/server may be intelligent enough to do load balancing some other way. In general, most switches will get far less than (n x linkspeed) on a trunk with n links.

Basically, you need to look carefully at how the switches you're considering do load balancing and what kind of traffic is being sent between the client and server. If you had a single 2G TCP flow, I don't know of any switch that will be able to do any load balancing on that flow.
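The Linux bonding driver has the same per-flow restriction in the transmit direction, and on 2.6 kernels the hash it uses is selectable. A hedged sketch, assuming your bonding driver is recent enough to offer the xmit_hash_policy option:

    # /etc/modprobe.conf -- hash on IP addresses plus TCP/UDP ports
    # (layer3+4) rather than the default MAC-based layer2 policy, so
    # that connections from different clients can land on different
    # slave links. A single TCP connection still maps to exactly one
    # slave, so one flow never exceeds one link's speed.
    options bond0 mode=802.3ad miimon=100 xmit_hash_policy=layer3+4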

Anoop

Reply to
anoop

Maybe I am missing something, but if you only have two machines why not wire them directly? I don't actually know that the bundling protocols support that but it would seem a good guess.
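For what it's worth, the Linux bonding driver's round-robin mode does work without any switch at all, and it is the one mode that will stripe even a single TCP flow across both links (at the cost of some packet reordering on the receiver). A rough sketch of the Linux end for a back-to-back, two-crossover-cable setup; the file path and option names are assumptions to verify against the bonding documentation:

    # /etc/modprobe.conf -- balance-rr needs no switch cooperation, so
    # it suits two hosts wired directly together. Expect out-of-order
    # delivery; whether the net result beats a single link depends on
    # how well the receiving stack copes with reordering.
    options bond0 mode=balance-rr miimon=100

The Windows end would need Intel's teaming software set to a comparable static mode, which is worth verifying before relying on this.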

-- glen

Reply to
glen herrmannsfeldt

That seems to be a fairly large assumption. As best I recall, PC-type systems (as implied by 'XP') that could handle 1 Gbps have only been available for less than a year; you'd need something pretty well tuned to handle 2 Gbps.

I don't know anything about the capabilities of most of those models, but from my experience and the reports I have seen for one of those models I don't think you'd be able to actually get 2 Gbps out of it, even though it -claims- to be wire speed. The general reports I have heard about two of the other brands would leave me definitely suspicious about whether they would suffice. One of the brands is a plausible match.

Do not, in other words, just trust the manufacturers' literature: look for independent test results for those particular models -- and if you cannot find any independent tests, then it is safest to assume that it will -not- do what you want.

IMHO, you want "managed". If you are planning on going bleeding edge, then you are going to want to be looking hard at error rates and performance bottlenecks and so on, so you are going to want SNMP.
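As a concrete example of what SNMP buys you here: with a managed switch you can poll the standard IF-MIB counters while you run your transfer tests and watch for errors or drops on the trunk ports. A sketch using the net-snmp command-line tools, assuming the switch's SNMP agent is enabled, with an illustrative read community of "public" and switch address of 192.168.1.2:

    # per-port error and discard counters from the standard IF-MIB
    snmpwalk -v2c -c public 192.168.1.2 IF-MIB::ifInErrors
    snmpwalk -v2c -c public 192.168.1.2 IF-MIB::ifOutDiscards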

Reply to
Walter Roberson

Thanks to all for replying.

I have more than a single client PC, in fact I have 4 clients in total, so direct cross-over connections are not a viable solution.

I did extensive testing of my digital content server (dual Xeon 3.6GHz, 8GB memory, U320 SCSI RAID-1 10,000RPM disks) and it can surely stream out 120-150MBytes/sec. If I cache the content in memory it can do more.

My existing client P4 3.4GHz Northwood is maxed out, but I will be assembling a new client PC soon (2 x dual-cores) so yes I believe the horsepower is there both on server & client side.

I am surprised at the lack of information available; am I the first person in the world to need this?

Anyone else?

Jeff

glen herrmannsfeldt wrote:

Reply to
jeff000069

No, I don't think you are. :) However, market forces and the corporate world have drained some of the performance excellence we used to enjoy and traded it in for basically a cost reduction and the ability to chunk work into repetitive blocks that purportedly can be done more cheaply by less skilled labor in places "foreign" to your locale. If you can't tell I'm fed up with this, I am. ;) I've called it for years "an obsession with mediocrity".

I completely understand what you want to do - high-speed transfer between two hosts for the express purpose of moving data quickly. There are a few obstacles, most of which have been put forth well by other posters to this thread - here's my take:

Network switch backplane quality varies greatly - in order to pass > 2Gbps you need a backplane that can move more than that between two ports, and one that is called "non-blocking", which generally means every port can burst at the maximum rate and all traffic will still get through. Cisco gets this right on a lot of their platforms. For a cheaper alternative, look at some of 3Com's newest stuff - it might be what you're looking for.

Second issue - host architecture. PCs sometimes stink at high-throughput networking because they and the OS they run are not efficient handlers of hardware interrupt events. The Windows TCP/IP stack plus a good NIC generally never equals success. Solaris on Sun hardware holds up better under high loads. A great PC/OS combination for network performance I've seen is Linux 2.6.x with its interrupt mitigation turned on: past a certain offered load it switches from interrupt mode to polled mode (checking the NIC for data at a regular interval), since at higher traffic rates data is guaranteed to be waiting on the NIC for service. Linux 2.6 also lets you control the OS scheduler, so you can give solidly managed CPU time to the applications that need it and tune as you wish. FreeBSD also just did an excellent job of auditing their TCP/IP stack for performance issues and supposedly got 10x the performance out of it, which I can believe!
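On Linux the behaviour being described is the kernel's NAPI interrupt/poll hybrid (built into the 2.6 gigabit drivers) plus the NIC's own hardware interrupt coalescing, which you can usually inspect and adjust with ethtool. A sketch, assuming your NIC driver exposes these settings; the values are illustrative, not recommendations:

    # show the current interrupt coalescing settings for eth0
    ethtool -c eth0

    # delay RX interrupts by up to ~100 microseconds so that one
    # interrupt services a batch of received frames
    ethtool -C eth0 rx-usecs 100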

Lastly - TCP itself. TCP is not a high-performance protocol. Yep, you read it here. There are dozens of different TCP/IP stack implementations, each using different mathematics for its queueing and congestion algorithms. When you mate two differing stacks, you don't get high-performance results unless you tune. A recurring example is the poor Windows user whose FTP transfers to some other host crawl - often the result of things broken between the two TCP stacks, which is a conversation I could go on about for hours. ;)
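Concretely, "tune" on the Linux 2.6 side mostly means making sure the socket buffer and window limits are never the bottleneck. A minimal sketch; the numbers are illustrative ceilings, not magic values:

    # /etc/sysctl.conf -- raise the socket buffer ceilings so the TCP
    # window is not the limiting factor on a multi-gigabit path
    net.core.rmem_max = 8388608
    net.core.wmem_max = 8388608
    net.ipv4.tcp_rmem = 4096 87380 8388608
    net.ipv4.tcp_wmem = 4096 65536 8388608

    # apply without rebooting
    sysctl -p

The XP client has rough registry equivalents (TcpWindowSize and Tcp1323Opts under the Tcpip\Parameters key) if you decide to go down this road.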

So how the heck do you get to success? Perhaps:

- Buy a network switch with a good backplane architecture that's documented. It will cost a bit more, but worth it.

- Move the two or more hosts involved to the same OS platform / version, try to avoid Windows if you're looking for network performance - this isn't MS-bashing, the TCP/IP stack just never has performed in the lab for me.

- Consider doing your transfers via UDP to take the TCP performance issues out of the way; on a pretty loss-less network (like two hosts on the same switch) that is a reasonable bet. Old-fashioned UDP-based NetBIOS might be the trick.

- Consider testing Ethernet jumbo frames to get more data on the wire per frame. Standards are loose here - be forewarned. :)
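A quick sketch of how a jumbo-frame experiment might look on the Linux end, assuming the bonded interface is bond0, an illustrative client address of 192.168.1.20, and that the switch and both NICs accept a 9000-byte MTU (not all do, and vendors disagree on the exact maximum):

    # raise the MTU on the bonded interface (do the same on the client)
    ifconfig bond0 mtu 9000

    # verify the path really carries jumbo frames: 8972 bytes of ICMP
    # payload + 8 bytes ICMP header + 20 bytes IP header = 9000, sent
    # with fragmentation prohibited
    ping -M do -s 8972 192.168.1.20

If that ping fails while a normal-sized ping works, something in the path (often the switch) is not passing jumbo frames.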

I just made a similar posting elsewhere about high-performance systems. If you want high-performance, all parts of the system must be examined in detail, not just the network component.

Hope this helps a little.

-DMFH

Reply to
DMFH

Samba and high perf?-) Being so very request/response oriented, it may present some challenges.

Have you gotten to 1Gbps transfer with your server and client(s) yet?
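That is worth measuring before spending the switch budget: take Samba and the disks out of the picture and check what a single gigabit link delivers memory-to-memory with something like netperf or iperf. A sketch, assuming iperf is installed on both ends (Windows builds exist) and the server's illustrative address is 192.168.1.10:

    # on the Linux server
    iperf -s

    # on the XP client: a single 30-second TCP stream
    iperf -c 192.168.1.10 -t 30

    # or four parallel streams, closer to a multi-client workload
    iperf -c 192.168.1.10 -t 30 -P 4

If a single link won't sustain somewhere near 900Mbit/s this way, trunking two of them won't get you anywhere near 2Gbps.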

LACP is the de jure standard (IEEE).

In broad handwaving terms, teaming, bonding, and aggregation are all the same thing.

rick jones

Reply to
Rick Jones
