120 Hosts Running GigE at Wire Speed Minimum Cost

What is the minimum cost solution for running 120 hosts at wire speed on GigE? I am thinking that something like two used Foundry or Extreme switches would do this at lowest cost.

Reply to
Will

What are you going to _do_ with 50 terabytes per hour?

Reply to
William P.N. Smith

Your first cost may be buying new hosts. A good desktop PC can't fill a GbE pipe. Or so I'm told.

Reply to
Al Dykes

I can get 913 megabits per second between two P4-2.4GHz machines with integrated network controllers on their 800MHz FSB, but that's with ttcpw, which isn't very interesting...

Still, the original question doesn't make any sense, as doing something useful with 50 terabytes per hour is going to suck up a lot more horsepower and disk speed than any "desktop" machine can deal with in this decade.
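
For anyone who wants to reproduce that sort of memory-to-memory measurement without hunting down ttcp, a minimal Python sketch like the following will do (the port number, buffer size, and 10-second run are arbitrary choices, and on older hardware the interpreter itself may become the limit before the wire does):

    # Minimal memory-to-memory TCP throughput sketch (not ttcp itself).
    # Receiver:  python tput.py recv
    # Sender:    python tput.py send <receiver-ip>
    import socket, sys, time

    PORT = 5001
    BUF = 64 * 1024                 # 64 KiB per send/recv call

    def recv():
        srv = socket.socket()
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(BUF)
            if not data:
                break
            total += len(data)
        secs = time.time() - start
        print("%.1f Mbit/s" % (total * 8 / secs / 1e6))

    def send(host, seconds=10):
        sock = socket.create_connection((host, PORT))
        chunk = b"\0" * BUF         # data comes straight from RAM, no disk I/O
        end = time.time() + seconds
        while time.time() < end:
            sock.sendall(chunk)
        sock.close()

    if __name__ == "__main__":
        recv() if sys.argv[1] == "recv" else send(sys.argv[2])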

Reply to
William P.N. Smith

To be fair, the OP didn't say "desktop"; he didn't say anything. Maybe he's got a Linux cluster, which can use lots of bandwidth talking to itself. From what I hear, network latency is a really big issue for workload performance, and the brand (and cost) of the switch matters a lot.

Reply to
Al Dykes

True, we're getting off the original subject. I doubt there's a machine in existence that'll do "wire speed" and do anything useful with it, though, so now we're left wondering how far off "wire speed" we can be and still meet the OP's requirements. My 913 megabits was regular desktop machines talking through two D-Link DGS-1005D switches, but the OP wants 120 machines. If there are no other criteria and this is a homework assignment, then 60 of those at $60 each will satisfy the criteria. Of course, in that case you don't even need the switches, so just cabling the machines together will work... 8*}

Reply to
William P.N. Smith

Many can, but not with conventional "off the shelf" applications. Disk I/O is usually a major factor, unless you're just beaming data to/from RAM for fun.

Reply to
Randy Howard

Attempt to keep up with the Windows viruses ;-)

Reply to
James Knott

The end user is building a major animation film. Each of 120 workstations brings a 100GB file to its local file system, processes it for whatever reason, and then uploads it back to a common server.

Rather than methodically isolating every bottleneck in the application, I would like to focus this conversation on just one of the many bottlenecks: the network itself. Personally I think the biggest bottleneck is disk I/O on the server, but that's a different thread. I just want to make sure that the network itself doesn't become a bottleneck.
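
Some back-of-the-envelope arithmetic on that workload, assuming nothing beyond the numbers already given (one 100 GB file down and one back up per workstation, protocol overhead ignored):

    # Rough numbers for the stated workload.
    FILE_BYTES = 100e9
    HOSTS = 120
    LINK_BPS = 1e9                              # one GigE link

    secs_per_file = FILE_BYTES * 8 / LINK_BPS   # ~800 s
    print("One 100 GB transfer at wire speed: %.0f s (~%.1f min)"
          % (secs_per_file, secs_per_file / 60))
    print("Aggregate offered load if all hosts pull at once: %d Gbps"
          % (HOSTS * LINK_BPS / 1e9))
    total_tb = HOSTS * FILE_BYTES * 2 / 1e12    # down plus up
    print("Data moved per full down-and-up pass: %.0f TB" % total_tb)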

Reply to
Will

FastIron II doesn't support 120 optical ports, so the backplane speed isn't all that interesting. Sure, you could have a tree of switches, but in this case the 120 hosts happen to all be in racks in the same room, and that's why I thought an Extreme BlackDiamond or Foundry BigIron might give plenty of horsepower at negligible cost (assuming you buy used).

Reply to
Will

RAM-to-RAM is a big application for compute clusters.

AFAIK, most desktops cannot get GbE wirespeed, unless their controller is on something faster than a PCI bus. The usual limit there is around 300 Mbit/s, mostly caused by limited PCI burst length and long setup.
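
The rough arithmetic behind that 300 Mbit/s figure, with the bus efficiency as an assumed round number rather than a measurement:

    # Why a NIC on conventional 32-bit / 33 MHz PCI tops out below GigE.
    peak_bps = 32 * 33e6            # 1056 Mbit/s theoretical bus peak
    efficiency = 0.3                # rough assumption, not a measurement
    print("PCI peak:      %.0f Mbit/s" % (peak_bps / 1e6))
    print("Usable (est.): %.0f Mbit/s" % (peak_bps * efficiency / 1e6))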

-- Robert

Reply to
Robert Redelmeier

In article , Will wrote:
:What is the minimum cost solution for running 120 hosts at wire speed on
:GigE? I am thinking that something like two used Foundry or Extreme
:switches would do this at lowest cost.

Amazing coincidence that the Foundry FastIron II just -happens- to be rated for exactly 120 wire speed gigabit ports.

Somehow, in my network, we never happen to have nice round multiples of 12 -- we end up with (e.g.) 79 hosts in a wiring closet, plus a couple of uplinks.

Odd too that one would have 120 gigabit wirespeed hosts in one place and not be interested in adding a WAN connection, and not be interested in redundancy...

======
One must be careful with modular architectures, in that often the switching speed available between modules is not the same as the switching speed within the same module.

Reply to
Walter Roberson

Clearly the disk and network I/O bottlenecks at the file servers are big. But that's another thread. The only thing I'm concerned about in the current thread is how to cheaply guarantee that the network itself is not a bottleneck for the servers processing information that they bring down from the file servers.

Reply to
Will

You are assuming that there is one file server. That would be the worst possible design, right?

Regarding USENET, are you assuming that this is the only input to the design process? Are you assuming that no one on USENET could possibly have even one microscopically significant idea that might improve some aspect of the design? That's a pretty pessimistic assessment of the medium in which you are participating.... Considering that the cost is next to zero, if you get nothing you have lost nothing. And if you get even one good idea, you got it at an excellent cost-benefit ratio. The fact that others can benefit from the exchange, now and in the future, creates value for the larger audience with access to USENET.

Your point that the workstations have local disks that are slower than the network is well taken. But the disks are capable of better than 10/100BaseT speeds, so gigE just happens to be the next step up that bypasses that particular bottleneck. And these days gigE is cheap.

Reply to
Will

So you need a server with a 120-gigabit NIC, and a server port on your switch of the same speed?

Again, if 90% is good enough, then SOHO unmanaged switches are good enough. If the network is faster than your disks, why spend any brain cycles on how many nines you can get out of your network?

You are talking about millions of dollars' worth of hardware, so why ask this kind of question on Usenet? [FWIW, the upload-process-download thing sounds really inefficient...]

Reply to
William P.N. Smith

On 27.02.2005 01:04 Will wrote

The most interesting question: what is the network interface of the common server? It has to cope with 120 Gbps, doesn't it?

Arnold

Reply to
Arnold Nipper

In that case, Force10 and Extreme would be worth a look. But if you're comfortable with Cisco hardware, you may want to look there as well. From a *pure* performance standpoint, Cisco may come in 2nd or 3rd, but they have a large support infrastructure. Of course, they won't be cheap.

Reply to
Hansang Bae

In article , Will wrote:
:FastIron II doesn't support 120 optical ports,

Your posting asked for the 'minimum cost solution'. Optical is not going to be the minimum cost solution if the hosts are within 100m of the server.

If you have constraints such as "optical" then you should state them upfront -- and even then you should be specific about whether, e.g., you are looking for 100 FX connectors or GBIC or SFP.

:Sure you could have a tree of switches, but in this
:case the 120 hosts happen to all be in racks in the same room,

So you don't need 120 ports, you need 120 ports plus 1 per server plus enough for interconnects plus some number more for connections to the Internet (or to some other equipment used to create copies of the data to deliver it to customers); possibly plus more for backup hosts.

Reply to
Walter Roberson

In article , Will wrote:
:Clearly the disk and network I/O bottlenecks at the file servers are big.
:But that's another thread.

Excuse me, but that *isn't* "another thread". The process you describe involves negligible communication between the hosts. This makes a big difference in the choice of equipment.

If your setup is such that there could be N simultaneous connections to M servers, and N > M, and you are asking us for a design in which "the network itself is not a bottleneck", then you have an implicit requirement that the server port must be able to operate at somewhere between (ceiling(N/M) * 1 Gbps) and (N * 1 Gbps), depending on the traffic patterns. We have to know what that peak rate is in order to advise you on the correct switch. Current off-the-shelf technologies get you 1 Gbps interfaces on a wide range of devices, and 10 Gbps XENPAK interfaces on a much smaller range of devices; 2 Gbps interfaces are also available in some models -- but if that's your spec then we need to know, so that we can rule out devices that can't handle that load.
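
As a concrete instance of that formula, with a made-up split of the 120 clients across four servers (Will hasn't said how many servers there are):

    # Required per-server port speed for N clients over M servers,
    # per the ceiling(N/M) * 1 Gbps lower bound above.  M = 4 is made up.
    import math
    N, M = 120, 4
    low  = math.ceil(N / M) * 1     # Gbps if clients split evenly
    high = N * 1                    # Gbps if every client hits one server
    print("Each server port must handle between %d and %d Gbps" % (low, high))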

But perhaps you are planning to get past 1 Gbps by using IEEE 802.3ad aggregation of multiple gigabit ports on the server. If that's the case, then we need to know it, so that we know to constrain the choices to 802.3ad-compliant devices. For example, for several years Cisco has had its EtherChannel / Gigabit EtherChannel technology, which allowed multiple channels to be bonded together, but that technology predates the 802.3ad standard. Cisco supports 802.3ad in modern IOS versions, but the cost of upgrading IOS on used devices with the oomph you need is very high -- high enough that it can end up being less expensive to buy -new- switches than to "relicense" and upgrade software on used ones. Whereas if you don't need 802.3ad, then used Cisco equipment could potentially be "relicensed" without a software upgrade.
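
One caveat worth keeping in mind with 802.3ad: member links are chosen per conversation, typically by hashing addresses, so a single TCP stream still tops out at one link's speed. A toy sketch of that kind of hash-based selection, not any vendor's actual algorithm:

    # Toy illustration of 802.3ad-style frame distribution: each
    # conversation hashes to one member link, so one flow is still
    # capped at a single link's speed.  Not any real hash algorithm.
    def pick_member(src_mac, dst_mac, n_links):
        return hash((src_mac, dst_mac)) % n_links

    links = 4                                         # 4 x 1 Gbps bundle
    flow = ("00:11:22:33:44:55", "66:77:88:99:aa:bb")
    print("This flow always rides member link", pick_member(*flow, links))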

:The only thing I'm concerned about in the
:current thread is how to cheaply guarantee that the network itself is not a
:bottleneck for the servers processing information that they bring down from
:the file servers.

If your server interfaces are going to run at only 1 Gbps, then in order to "guarantee" that the network is not the bottleneck in the circumstance that the devices really will run at "wire speed" you are going to need 120 servers -- an increase which is going to seriously skew the switch requirements.

The alternative to all of this, if you are content with your users sharing 1 Gbps to each server, is to recognize that in such a case you do not need to run all the ports at 1 Gbps wire speed *simultaneously*. That makes a substantial difference in your choices!!

Your initial stated requirement of 120 hosts at gigabit wire speed implied to us that the switches had to have an (M * 2 Gbps) switching fabric per module, where M is the number of ports per switching module, *and* that the backplane fabric speed had to be at least 240 Gbps (in order to handle the worst-case scenario in which every port is communicating at wire rate, full duplex, with a port on a different module). That's a tough requirement to meet for the backplane -- a requirement that is very much incompatible with "minimum cost".

If, though, your requirement is really just that one device at a time must be able to run gigabit wire rate unidirectional with one of the servers -- that the link must have full gigabit available on demand, but the demands will be infrequent and non-overlapping -- then your backplane only has to be (S * 1 Gbps), where S is the maximum number of simultaneously active servers you need. If S is, say, 5, then the equipment you need to fill the requirement is considerably down-scale from a 240 Gbps backplane.
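
Putting numbers on those two readings of the requirement, reusing the illustrative S = 5 from above:

    # Backplane sizing under the two interpretations discussed above.
    PORTS = 120
    worst_case_gbps = PORTS * 2     # every port full duplex across modules
    S = 5                           # illustrative concurrent transfers
    relaxed_gbps = S * 1
    print("Fully non-blocking fabric needed: %d Gbps" % worst_case_gbps)
    print("With only %d concurrent transfers: %d Gbps" % (S, relaxed_gbps))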

If the real requirement is indeed that wire-speed point-to-point must be available but that few such transfers will need to happen simultaneously, then you could potentially be working with something as low-end as a single Cisco 6509 [9-slot chassis] with Supervisor Engine 1A [lowest available speed] and 8 x WS-X6416-GBIC [each offering 16 GBIC ports]. The module backplane interconnect for the 1A is 8 Gbps, and the maximum forwarding rate of the modules is 32 Gbps [i.e., connections on the same module] when using the 1A, with a shared 32 Gbps bus as the backplane in this configuration. [Note: if such a configuration was satisfactory and you needed at most 6 Gbps, you could probably do much the same configuration in a single Cisco 4506 switch.]

But if you were not quite as concerned with minimum cost, then you could use a Cisco 6506 [6-slot chassis] with Supervisor 720 [fastest available for the 6500 series] and 3 x WS-X6748-SFP [each offering 48 SFP ports]. The 6748-SFP has a dual 20 Gbps module interconnect; in conjunction with the Supervisor 720, you can get up to 720 Gbps in some configurations. If I read the literature correctly, the base configuration would get you up to about 240 Gbps, and you would add a WS-F6700-DFC3 distributed switching card to go beyond that, up to 384 Gbps per slot. The 6748-SFP supports frames up to 9216 bytes long.

If you were able to go copper instead of fibre, then you could use a Cisco 6506 with one of the 48-port 10/100/1000 modules:

- WS-X6758-GE-TX for Supervisor 1A, 2, 32, or 720 (32 Gbps shared bus)

- WS-X6548-GE-TX for Supervisor 1A or 2 (1518 bytes/frame max) (8 Gbps backplane interconnect)

- WS-X6758-TX for Supervisor 720 (9216 bytes/frame max) [speeds as noted in above paragraph]

An important point to note about the 16-, 24-, or 48-port gigabit Cisco interface cards is that they are all oversubscribed relative to the backplane interconnect [details of exactly how they share the bandwidth vary with the card]. That makes these cards totally unsuitable for the situation where you require that all ports -simultaneously- be capable of running 1 Gbps to arbitrary other ports, but with some judicious placement of the server connections they can be just fine for the situation where you need gigabit wire rate on any one link but do not need very many such links active simultaneously. [And if so, then the Cisco 4506 with anything other than the entry-point Supervisor might be a contender as well; the entry-point Supervisor is, if I recall correctly, only usable in the 3-slot chassis, the 4503.]
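
A quick calculation of the oversubscription Walter describes, using the WS-X6416-GBIC figures already quoted (16 gigabit ports against an 8 Gbps module interconnect with the Supervisor 1A):

    # Oversubscription of a 16-port gigabit module on an 8 Gbps
    # module-to-backplane interconnect (figures quoted above).
    ports = 16
    port_speed_gbps = 1
    interconnect_gbps = 8
    ratio = ports * port_speed_gbps / interconnect_gbps
    print("Oversubscription: %.1f : 1" % ratio)       # 2.0 : 1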

Reply to
Walter Roberson

"Will" top-posted:

Well, it's not inconsistent with the design details you've given us.

8*)

Not at all; there are some really clever people here, including some who helped design Ethernet. I'm more thinking along the lines of "Ask not of Usenet, for it will tell you Yes, and No, and everything in between."

True, if your time is worth nothing. 8*)

Sure, but my point is that _any_ GigE hardware will meet your criteria, and every time I hear someone ask for "wire speed" I know at least that they don't understand their problem. Present company excluded, of course.

Reply to
William P.N. Smith
