FIBER vs. COPPER in the Data Center

In most data center designs there is a mixture of both copper and fiber infrastructure. This paper does not suggest that one should replace the other, but rather that each should be considered carefully with respect to the applications expected to be supported over the life of the data center. With the varied capabilities of networking equipment and cabling options, a thorough analysis should be performed to plan the most cost-effective data center infrastructure and maximize your return on investment.

Power and Cooling Efficiencies

There are several factors driving data center specifiers and decision makers to revise, remediate, relocate or consolidate current data centers. Power and cooling are two of the more significant factors. In many legacy data centers, older-model air-handling units operate at roughly 80% efficiency at best, measured in terms of electrical use per ton of cooling (kW/ton). Newer units operate at 95-98% efficiency, depending on the manufacturer and model. In some instances, it is more cost effective for companies to write off the unrealized depreciation on older units in order to gain the efficiency benefits of the newer ones.
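
As a back-of-the-envelope illustration of this point, the sketch below compares annual cooling energy cost at the efficiency figures cited above. The nominal kW/ton value, cooling load, and electricity rate are illustrative assumptions only, not figures from this paper.

```python
# Rough comparison of a legacy (~80% efficient) vs. newer (~96% efficient)
# air-handling unit, using the kW/ton framing above. All inputs below are
# illustrative assumptions.

NOMINAL_KW_PER_TON = 1.0    # assumed kW/ton at 100% efficiency
LOAD_TONS = 200             # assumed data center cooling load
HOURS_PER_YEAR = 8760
RATE_USD_PER_KWH = 0.10     # assumed electricity rate

def annual_cooling_cost(efficiency: float) -> float:
    """Annual electricity cost to run the cooling plant at the given efficiency."""
    kw_per_ton = NOMINAL_KW_PER_TON / efficiency
    return kw_per_ton * LOAD_TONS * HOURS_PER_YEAR * RATE_USD_PER_KWH

legacy = annual_cooling_cost(0.80)
modern = annual_cooling_cost(0.96)
print(f"Legacy unit: ${legacy:,.0f}/yr")
print(f"Modern unit: ${modern:,.0f}/yr")
print(f"Savings:     ${legacy - modern:,.0f}/yr ({1 - modern / legacy:.0%})")
```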

But with any cooling equipment, conditions apart from the cooling unit itself can have a significant impact on efficiency. Simple steps - removing abandoned cable from pathways to reduce air dams and maximize airflow, installing brush guards or air pillows to maintain static pressure under the floor, and redressing cabling within cabinets to lessen impedance of front-to-back airflow - are all beneficial, and companies are increasingly looking at these and other relatively simple upgrades to improve power and cooling efficiency. With green/ecological and power-reduction initiatives swaying today's decisions, the circular relationship between power consumption and cooling is bringing facilities back into the discussion when selecting network equipment (e.g., servers, switches, SANs).

Increasing Storage and Bandwidth Trends

In addition to requirements for faster processing and lower power consumption, recent changes in legislation and mandates for data retention (Sarbanes-Oxley, for example) are driving storage costs up. While these vary by industry, governance and company policy, there is no question that storage and data retrieval requirements are on the rise. According to IDC¹, "281 exabytes of information existed in 2007, or about 45 GB for every person on earth." As with any other equipment in the data center, the more data you store and transfer, the more bandwidth you will need. To support faster communications, there are a growing number of high-speed data transmission protocols and cabling infrastructures available, each with varying requirements for power and physical interfaces.
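
As a quick sanity check of the IDC figure, dividing 281 exabytes by an assumed 2007 world population of roughly 6.25 billion lands near the quoted 45 GB per person.

```python
# Sanity check of the IDC figure quoted above. The population value is an
# assumption, not a number from the paper.

TOTAL_BYTES = 281e18              # 281 exabytes (decimal)
WORLD_POPULATION_2007 = 6.25e9    # assumed

per_person_gb = TOTAL_BYTES / WORLD_POPULATION_2007 / 1e9
print(f"~{per_person_gb:.0f} GB per person")   # ~45 GB
```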

To meet these increasing demands for bandwidth in the data center, 10 Gb/s applications over balanced twisted-pair cabling, twinax cabling and optical fiber cabling are growing. The Dell'Oro Group, a market research firm, predicts that copper-based 10 GbE will expand to represent 42% of the projected 8.8M 10GbE units by 2010². A study by the Linley Group indicated that: ". . . by 2009, we expect 10GbE shipments to be well in excess of one million ports. The fast-growing blade-server market will drive the demand for 10GbE switches. At the physical layer, the 10GbE market will go through several transitions . . . including a shift to 10GBASE-T for copper wiring."³

10 Gb/s Infrastructure Options

There are several cabling alternatives over which 10 Gb/s can be accomplished. Infiniband is one option. The single biggest advantage of Infiniband is its far lower latency (around one microsecond) compared with TCP/IP- and Ethernet-based applications, as there is much less overhead in this transmission protocol. Infiniband is gaining popularity in cluster and grid computing environments not only for storage, but as a low-latency, high-performance LAN interconnect, with power consumption at approximately 5 Watts per port on average.

A single Infiniband lane runs at 2.5 Gb/s; 4 lanes result in 10 Gb/s operation in SDR (Single Data Rate) mode and 20 Gb/s in DDR (Double Data Rate) mode. Interfaces for Infiniband include twinax (CX4) connectors and optical fiber connectors; even balanced twisted-pair cabling is now supported through Annex A54. The most dominant Infiniband connector today, however, utilizes twinax in either a 4x (4-lane) or 12x (12-lane) configuration. These applications are limited to 3-15 m depending on manufacturer, which may be a limiting factor in some data centers. Optical fiber Infiniband consumes approximately 1 Watt per port, but at a port cost of nearly 2x that of balanced twisted-pair. Active cable assemblies are also available that convert copper CX4 cable to optical fiber cable and increase the distance from 3-15 m to 300 m, although this is an expensive option that creates an additional point of failure and introduces latency at each end of the cable. One drawback of the CX4 Infiniband cable is its diameter: 0.549 cm (0.216 in) for 30 AWG and 0.909 cm (0.358 in) for 24 AWG cables.
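
The lane arithmetic above can be summarized in a short sketch: the raw 2.5 Gb/s signaling rate times the lane count, doubled in DDR mode. Note that usable throughput is lower than the raw rate because of 8b/10b line encoding.

```python
# Infiniband link-rate arithmetic as described above.

LANE_RATE_GBPS = 2.5   # raw signaling rate per lane

def link_rate(lanes: int, ddr: bool = False) -> float:
    """Aggregate raw rate in Gb/s for an Infiniband link."""
    return LANE_RATE_GBPS * lanes * (2 if ddr else 1)

print(link_rate(4))             # 10.0 -> 4x SDR
print(link_rate(4, ddr=True))   # 20.0 -> 4x DDR
print(link_rate(12))            # 30.0 -> 12x SDR
```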

With the release of the IEEE 802.3an standard, 10 Gb/s over balanced twisted-pair cabling (10GBASE-T) is the fastest growing and is expected to be the most widely adopted 10GbE option. Because category 6A/class EA and category 7/class F or category 7A/class FA cabling offer much better attenuation and crosstalk performance than existing category 6 cabling, the standard specifies a Short Reach Mode for these types of cabling systems. Higher-performing cabling simplifies power reduction in the PHY devices for Short Reach Mode (under 30 m). Power back-off (low-power mode) is an option that reduces power consumption compared with category 6 or with longer lengths of class EA, class F or class FA channels. Data center links of 30 meters or less can take advantage of this power savings, expected to be roughly 50% depending on manufacturer.
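
A rough sketch of what Short Reach power back-off can mean at the switch level follows. The per-port power figure and the port-count split are assumptions; the ~50% saving is the estimate cited above.

```python
# Illustrative Short Reach power calculation. Inputs marked "assumed" are not
# from this paper.

FULL_POWER_W = 10.0        # assumed 10GBASE-T PHY power at full reach
SHORT_REACH_FACTOR = 0.5   # ~50% saving for links <= 30 m (per the text)

def phy_power(total_ports: int, short_reach_ports: int) -> float:
    """Total PHY power with short-reach ports running in low-power mode."""
    long_reach = total_ports - short_reach_ports
    return (long_reach * FULL_POWER_W
            + short_reach_ports * FULL_POWER_W * SHORT_REACH_FACTOR)

# e.g. a 48-port switch where 40 links stay within 30 m:
print(f"{phy_power(48, 40):.0f} W vs {phy_power(48, 0):.0f} W at full power")
```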

The IEEE 802.3 10GBASE-T project criteria state the goal that "the 10GBASE-T PHY device is projected to meet the 3x cost versus 10x performance guidelines applied to previous advanced Ethernet standards". This means that balanced twisted-pair compatible electronics, once they become commercially affordable and not simply commercially available, will provide multiple speeds at a very attractive price point relative to the cost of optical fiber compatible electronics. Because maintenance is priced on the original equipment purchase price, not only will day-one costs be lower, but day-two costs will be lower as well. Latency on first-generation balanced twisted-pair compatible chips is already better than the standard requires, at roughly 2.5 microseconds.
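
The cost-per-gigabit implication of the 3x/10x guideline can be illustrated with a simple calculation; the gigabit port price below is an arbitrary placeholder.

```python
# If a 10GBASE-T port eventually costs no more than 3x a gigabit port while
# carrying 10x the traffic, cost per Gb/s falls by roughly 70%. The gigabit
# port price is an assumption for illustration only.

GIGABIT_PORT_COST = 100.0                   # assumed, USD
TEN_GIG_PORT_COST = 3 * GIGABIT_PORT_COST   # the 3x guideline

cost_per_gbps_1g = GIGABIT_PORT_COST / 1
cost_per_gbps_10g = TEN_GIG_PORT_COST / 10
print(f"1 GbE:  ${cost_per_gbps_1g:.0f} per Gb/s")
print(f"10 GbE: ${cost_per_gbps_10g:.0f} per Gb/s "
      f"({1 - cost_per_gbps_10g / cost_per_gbps_1g:.0%} lower)")
```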

At 1 Gb/s speeds, balanced twisted-pair compatible electronics offer better latency performance than fiber; at 10 Gb/s, however, fiber components currently perform better than balanced twisted-pair compatible 10GBASE-T electronics, though not as well as 10 Gb/s Infiniband/CX4. This will likely change with future generations of 10GBASE-T chips for copper switches. It is important to remember that in optical transmission, equipment must perform an electrical-to-optical conversion, which contributes to latency.

Balanced twisted-pair remains the dominant media for the majority of data center cabling links. According to a recent BSRIA press release: ". . . survey results highlight a rush to higher speeds in data centers; a broad choice of copper cabling categories for 10G, especially shielded; and a copper/fiber split of 58:42 by volume.

"75% of respondents who plan to choose copper cabling for their 10G links plan for shielded cabling, relatively evenly split between categories 6, 6a and 7. OM3 has a relatively low uptake at the moment in U.S. data centers. The choice for fiber is still heavily cost related, but appears to be gaining some traction with those who want to future-proof for 100G and those not willing to wait for 10 Gb/s or 40 Gb/s copper connectivity and equipment."⁵

Optical fiber-based 10 Gb/s applications are the most mature 10GbE option, although they were originally designed for backbone applications and for aggregating gigabit links. Fiber's longer reach makes the additional cost of fiber electronics worthwhile when serving backbone links longer than 90 meters, but using optical fiber for shorter data center cabling links can be cost prohibitive.

Mixing balanced twisted-pair cabling and optical fiber cabling in the data center is common practice. The most common 10 GbE optical fiber transmission in use in the data center is 10GBASE-SR, which supports varied distances based on the type of optical fiber cabling installed. For OM1 optical fiber (e.g., FDDI-grade 62.5/125 µm multimode fiber), distance is limited to 28 meters. For laser-optimized OM3-grade 50/125 µm (500/2000) multimode fiber, the distance jumps to 300 m, with future-proof support for 40 and 100 Gb/s currently under development within IEEE. To increase distances on OM1-grade optical fiber, two other optical fiber standards have been published: 10GBASE-LX4 and 10GBASE-LRM increase allowable distances to 300 m and 220 m, respectively. However, LX4 and LRM electronics are more expensive than their SR counterparts, and in most cases it is less expensive to upgrade the cabling to laser-optimized (OM3) fiber, as a cabling upgrade does not carry the elevated maintenance costs associated with the more expensive electronics.
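
A small planning sketch of these reach trade-offs is shown below. The distance limits are the figures cited above; the helper function itself is hypothetical, intended only as an illustration of how the choice narrows with fiber grade and link length.

```python
# 10GbE fiber PHY reach per the distances cited in the text.

REACH_M = {
    ("OM1", "10GBASE-SR"):  28,
    ("OM1", "10GBASE-LX4"): 300,
    ("OM1", "10GBASE-LRM"): 220,
    ("OM3", "10GBASE-SR"):  300,
}

def viable_phys(fiber: str, link_length_m: float) -> list[str]:
    """Return the PHY options from the table above that cover the link."""
    return [phy for (grade, phy), reach in REACH_M.items()
            if grade == fiber and link_length_m <= reach]

print(viable_phys("OM1", 90))   # ['10GBASE-LX4', '10GBASE-LRM']
print(viable_phys("OM3", 90))   # ['10GBASE-SR']
```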

Progression from 1 Gb/s to 10 Gb/s

In many cases, for both optical fiber and balanced twisted-pair cabling, an upgrade from 1 Gb/s to 10 Gb/s will require a change of the Ethernet switch, as older switch fabrics will not support multiple 10 Gb/s ports. Prior to selecting balanced twisted-pair or optical fiber for an upgrade to 10 GbE, a study should be completed to ensure that power, cooling, and available space for cabling are adequate. This analysis should also include day-one and day-two operating and maintenance costs.

Power consumption for 10 Gb/s switches is currently a major factor in the cost analysis of balanced twisted-pair vs. optical fiber cabling in the data center. With first-generation 10GBASE-T chips operating at 10-17 Watts per port, lower power consumption is both a goal and a challenge for 10GBASE-T PHY manufacturers. This is certainly something to watch, as next-generation chips are expected to have much lower power demands, on par with Infiniband ports, or roughly one half of the first iterations. The same was seen in gigabit Ethernet, which from first-generation chips to current technologies saw a 94% decrease in power, from 6 Watts per port to the 0.4 Watts per port seen today. Supporting this is the recent release of a 5.5 W per port 10GBASE-T chip from Aquantia⁶.
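
To put these per-port figures in switch-level terms, the sketch below scales them to a 48-port line card. The 48-port count and the future-generation value are assumptions; the 17 W and 5.5 W figures come from the text above.

```python
# Scaling per-port PHY power to a hypothetical 48-port line card.

PORTS = 48   # assumed line-card density

generations = {
    "1st-gen 10GBASE-T (upper bound)": 17.0,  # W/port, from the text
    "5.5 W part cited in the text":     5.5,
    "Hypothetical future generation":   2.5,  # assumption only
}

for name, watts in generations.items():
    print(f"{name:33s} {watts * PORTS:6.0f} W per {PORTS}-port card")
```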

It is further noted that IEEE is working on Energy Efficient Ethernet (802.3az), a technology that would allow links to autonegotiate down to lower speeds during periods of inactivity - a capability that could reduce power by an estimated 85% when negotiating from 10 Gb/s to 1 Gb/s, and even further for lower speeds. Average power per 24-hour period will be far less when Energy Efficient Ethernet is built into future-generation 10GBASE-T chips. This potential power savings is not available for optical fiber, as there is no ability to autonegotiate over optical fiber.
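
A duty-cycle sketch shows how such a saving would translate into average power over 24 hours. The per-port power and the busy/idle split are assumptions; the 85% reduction is the estimate quoted above.

```python
# Duty-cycle weighted average power for a port that drops to a low-power speed
# during idle periods. Inputs marked "assumed" are not from this paper.

FULL_POWER_W = 10.0    # assumed 10GBASE-T port power at 10 Gb/s
IDLE_REDUCTION = 0.85  # ~85% saving when negotiated down to 1 Gb/s (per text)
BUSY_HOURS = 8         # assumed busy hours per day
IDLE_HOURS = 24 - BUSY_HOURS

avg_w = (BUSY_HOURS * FULL_POWER_W
         + IDLE_HOURS * FULL_POWER_W * (1 - IDLE_REDUCTION)) / 24
print(f"Average power: {avg_w:.1f} W per port (vs {FULL_POWER_W:.1f} W always-on)")
```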

Since optical fiber electronics cannot autonegotiate, a move from 1000BASE-xx to 10GBASE-xx requires a hardware change. In contrast, both 1 GbE and 10 GbE can be supported by 10GBASE-T balanced twisted-pair compatible equipment. Hardware changes cause downtime and shorten the lifecycle of the network hardware investment. There are several options for optical fiber communications at 10 GbE, each characterized by range, wavelength and type of optical fiber media. The following table shows an estimated end-to-end cost comparison between various balanced twisted-pair and optical fiber data center applications, including estimated 3-year maintenance contract costs.
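
The sketch below only illustrates how such an end-to-end comparison is typically assembled - day-one equipment and cabling cost plus a maintenance contract priced as a percentage of the equipment purchase price over three years. All figures are placeholders, not values from the referenced table.

```python
# Hypothetical structure of a 3-year cost comparison. Every number below is a
# placeholder assumption for illustration only.

MAINT_PCT_PER_YEAR = 0.12   # assumed maintenance rate on equipment list price
YEARS = 3

def three_year_cost(equipment: float, cabling: float) -> float:
    """Day-one equipment + cabling plus a 3-year maintenance contract."""
    return equipment + cabling + equipment * MAINT_PCT_PER_YEAR * YEARS

copper = three_year_cost(equipment=30_000, cabling=5_000)
fiber = three_year_cost(equipment=60_000, cabling=8_000)
print(f"Copper (10GBASE-T):  ${copper:,.0f}")
print(f"Fiber  (10GBASE-SR): ${fiber:,.0f}")
```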

