Data center cabling reaches beyond the LAN

Feb. 1, 2013
The term "convergence" gets used so often with respect to networking and cabling systems it may have become a mere buzzword whose meaning and importance are getting lost in the hype.

From the February, 2013 Issue of Cabling Installation & Maintenance Magazine

Storage area network and high-performance computing applications require data rates that call for top-performing cabling.

by PATRICK McLAUGHLIN

The term "convergence" gets used so often with respect to networking and cabling systems it may have become a mere buzzword whose meaning and importance are getting lost in the hype. If that is the case, then when convergence truly occurs in a network scheme, and a specific cabling type provides sufficient physical-layer support for converged applications, it is worth pointing out and emphasizing.

Such may be precisely the situation with a data center's "other" connection and cabling technologies--specifically, those that support storage area networks (SANs) and/or high-performance computing (HPC) applications as opposed to local area networks (LANs). SAN and HPC applications are different from one another but in many cases, they share the optical thread of relying on a fiber-based physical layer infrastructure.

SAN cabling

Where Ethernet is the dominant protocol serving LANs in data center as well as enterprise environments, other protocols serve data center SAN and HPC systems. In SANs, Fibre Channel is the traditional standard-bearer of connectivity. Fibre Channel connections frequently are deployed in point-to-point setups rather than the tiered, structured cabling approaches we have come to expect in LANs. And the evolution of Fibre Channel systems has not followed the 10x-transmission-speed-increase path that Ethernet has (or at least, that Ethernet did until the development of 40G). Instead, Fibre Channel transmission speeds have doubled with successive generations, from 1 to 2 to 4 to 8 to 16 Gbits/sec. Fibre Channel is like Ethernet in that its transmission scheme includes overhead, so the actual throughput in Mbits or Gbits/sec does not equal the Gbaud line rate.
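
To illustrate that distinction between line rate and throughput, the short sketch below applies each generation's nominal encoding efficiency to its signaling rate. The signaling rates and encodings are assumptions drawn from published Fibre Channel specifications, not figures from this article, and the numbers are rough estimates rather than vendor data.

    # Rough Fibre Channel throughput estimate: line rate x encoding efficiency.
    # Signaling rates and encodings are assumptions based on published FC specs,
    # not figures taken from this article.
    FC_GENERATIONS = {
        "1GFC":  (1.0625, 8 / 10),   # GBaud line rate, 8b/10b encoding
        "2GFC":  (2.125,  8 / 10),
        "4GFC":  (4.25,   8 / 10),
        "8GFC":  (8.5,    8 / 10),
        "16GFC": (14.025, 64 / 66),  # 16GFC moved to 64b/66b encoding
    }

    for name, (gbaud, efficiency) in FC_GENERATIONS.items():
        data_rate = gbaud * efficiency
        print(f"{name}: {gbaud} GBaud line rate ~= {data_rate:.2f} Gbit/s of data")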

Convergence comes into play when we consider Fibre Channel over Ethernet (FCoE), a transmission scheme in which Fibre Channel signals are encapsulated within and delivered via Ethernet packets. FCoE has been heralded as a "unified fabric" that weaves together the data center's SAN and LAN. FCoE has entered the market and has been deployed by some organizations. Industry analysts and observers generally believe Fibre Channel as we know it will not be completely overtaken by FCoE, and as the saying goes, the market ultimately will decide which protocol secures how much market share. But supporting high-data-rate Fibre Channel paths themselves, and/or supporting the encapsulation of Fibre Channel via FCoE, calls for a physical-layer cabling system of the utmost performance. In practical terms, that means systems based on OM3 and OM4 multimode optical fiber.
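
To make the encapsulation idea concrete, the sketch below models an FCoE frame as an ordinary Ethernet frame whose payload is a complete Fibre Channel frame. The field layout is a deliberate simplification of the standardized framing (0x8906 is the registered FCoE EtherType), and the class and function names are illustrative, not taken from any real FCoE stack.

    # Minimal, illustrative model of FCoE encapsulation: a Fibre Channel frame
    # carried as the payload of an Ethernet frame. Field layout is simplified;
    # class names are hypothetical, not from a real FCoE implementation.
    from dataclasses import dataclass

    FCOE_ETHERTYPE = 0x8906  # registered EtherType for FCoE

    @dataclass
    class FibreChannelFrame:
        source_id: str        # FC source port ID (S_ID)
        destination_id: str   # FC destination port ID (D_ID)
        payload: bytes        # SCSI command or data carried by the FC frame

    @dataclass
    class EthernetFrame:
        src_mac: str
        dst_mac: str
        ethertype: int
        payload: bytes        # for FCoE, this is the encapsulated FC frame

    def encapsulate(fc_frame: FibreChannelFrame, src_mac: str, dst_mac: str) -> EthernetFrame:
        """Wrap a Fibre Channel frame inside an Ethernet frame, FCoE-style."""
        encoded = f"{fc_frame.source_id}->{fc_frame.destination_id}".encode() + fc_frame.payload
        return EthernetFrame(src_mac, dst_mac, FCOE_ETHERTYPE, encoded)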

High-performance computing

HPC systems operate differently and use different technologies to accomplish their objectives, but they also present the underlying need for robust optical infrastructure. The connection style frequently used in HPC is InfiniBand, which is not a networking technology but rather an input/output (I/O) technology. Furthermore, its evolution has followed neither the predictable 10x-increase migration path traditionally seen with Ethernet nor the 2x-increase path seen with Fibre Channel.

High-speed InfiniBand I/O connections use parallel transmission paths, typically four lanes. For example, 10-Gbit InfiniBand comprises four separate 2.5-Gbit paths; likewise, 4x10 Gbits yields a 40G InfiniBand connection. The parallel nature of InfiniBand connectivity enables a type of "cascading" of speeds, whereby combining lanes of a given speed yields a data rate that increases by the same multiple. Current leading-edge InfiniBand technology puts forth a 4x14G construction for a 56-Gbit/sec connection rate.
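
That lane arithmetic is simple enough to write down. The sketch below multiplies a per-lane rate by a link width to reproduce the figures above; the per-lane values are the commonly cited nominal SDR/DDR/QDR/FDR rates and should be treated as assumptions rather than datasheet numbers.

    # InfiniBand link speed = per-lane signaling rate x number of lanes.
    # Per-lane rates below are commonly cited nominal values (assumptions,
    # not taken verbatim from this article).
    LANE_RATES_GBPS = {"SDR": 2.5, "DDR": 5, "QDR": 10, "FDR": 14}

    def link_rate(generation: str, lanes: int = 4) -> float:
        return LANE_RATES_GBPS[generation] * lanes

    print(link_rate("SDR"))   # 4 x 2.5 = 10 Gbit/s
    print(link_rate("QDR"))   # 4 x 10  = 40 Gbit/s
    print(link_rate("FDR"))   # 4 x 14  = 56 Gbit/s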

The IBTA

A group called the InfiniBand Trade Association (IBTA; www.infinibandta.org) provides significant direction for InfiniBand's course. One of the IBTA's most-recognized efforts is its InfiniBand Roadmap, which charts the planned speed increases for the I/O technology over time. The IBTA explains that the roadmap "was researched and developed as a collaborative effort from the various IBTA working groups. Members of IBTA working groups include leading enterprise IT vendors who are actively contributing to the advancement of InfiniBand. The roadmap details 1x, 4x, 8x and 12x EDR [Enhanced Data Rate--26 Gbits/sec] and FDR [Fourteen Data Rate--14 Gbits/sec] … with bandwidths reaching 300 Gbits/sec data rate EDR by 2013. … For vendors with a stake in the interconnect business, the roadmap provides a vendor-neutral outline for the progression of InfiniBand so that they may plan their product development accordingly. For enterprises and high-performance computing end users, the roadmap provides specific milestones around expected improvements to ensure their InfiniBand investment is protected."
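
Extending the same lane arithmetic across the roadmap's link widths shows where the quoted 300-Gbit/sec figure comes from: twelve EDR lanes at roughly 25 to 26 Gbits/sec of data each. The sketch below is one interpretation of the numbers cited above, not an official IBTA table.

    # Roadmap-style matrix: link width x per-lane data rate, per the figures
    # quoted above (FDR ~14 Gbit/s per lane, EDR ~25 Gbit/s per lane).
    # An interpretation of the roadmap, not an official IBTA table.
    WIDTHS = (1, 4, 8, 12)
    LANE_RATES = {"FDR": 14, "EDR": 25}

    for name, per_lane in LANE_RATES.items():
        for width in WIDTHS:
            print(f"{width}x {name}: ~{width * per_lane} Gbit/s")
    # 12x EDR works out to roughly 300 Gbit/s, matching the 2013 roadmap target.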

The organization asserts that the InfiniBand I/O technology "changes the way data centers are built, deployed and managed," accomplishing this feat because InfiniBand "enables greater server and storage performance and design density while creating data center solutions that offer greater reliability and performance scalability. InfiniBand technology is based upon a channel-based, switched fabric, point-to-point technology."

The group also points out that the technology co-exists with, rather than fighting for market share from, networking protocols. "The InfiniBand architecture is complementary to Fibre Channel and Gigabit Ethernet," the IBTA says. "InfiniBand is uniquely positioned to become the I/O interconnect of choice for data center implementations. Networks such as Ethernet and Fibre Channel are expected to connect into the edge of the InfiniBand fabric and benefit from better access to InfiniBand architecture-enabled compute resources. This will enable IT managers to better balance I/O and processing resources within an InfiniBand fabric."

In the case of InfiniBand, users have both fiber-optic and copper-based cabling options. "In addition to a board-form-factor connection, it supports both active and passive copper up to 30 meters depending on speeds, and fiber-optic cabling up to 10 km," the IBTA explains.
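
Those reach figures translate into a simple media-selection rule of thumb, sketched below. The 30-meter and 10-kilometer limits are the ones the IBTA cites above; the function name is illustrative, and actual reach depends on data rate and the specific cable and transceiver.

    # Rule-of-thumb media selection based on the reach figures quoted above
    # (copper to roughly 30 m, fiber to roughly 10 km). Illustrative only;
    # actual reach depends on data rate and cable/transceiver type.
    def suggest_infiniband_media(link_length_m: float) -> str:
        if link_length_m <= 30:
            return "active or passive copper (reach depends on speed)"
        if link_length_m <= 10_000:
            return "fiber-optic cabling"
        return "beyond the reach the IBTA cites for a single InfiniBand link"

    print(suggest_infiniband_media(15))    # copper
    print(suggest_infiniband_media(250))   # fiber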

In mid-2012 analyst firm Taneja Group (www.tanejagroup.com) published a report, "InfiniBand's Data Center March," the title of which indicates its tone. In a blog post on the IBTA website, Taneja senior analyst Mike Matchett commented, "InfiniBand has now become the most widely used interconnect among the top 500 supercomputers (according to www.top500.org). It has taken a lot of effort to challenge the entrenched ubiquity of Ethernet, but InfiniBand has not just survived for over a decade, it has consistently delivered on an aggressive roadmap--and it has an even more competitive future."

So much for being complementary to, rather than competitive with, Ethernet. But then Matchett raised an InfiniBand-specific flavor of convergence a la FCoE.

Matchett continued, "The adoption of InfiniBand in a data center core environment not only supercharges network communications, but by simplifying and converging cabling and switching reduces operational risk and can even reduce overall cost. Bolstered by technologies that should ease migration concerns like RoCE [RDMA over Converged Ethernet] and virtualized protocol adapters, we expect to see InfiniBand further expand into mainstream data center architectures not only as a backend interconnect in high-end storage systems, but also as the main interconnect across the core."

The underlying truth remains that high-performing cabling systems--structured or point-to-point--are essential to support converging data center technologies.

Patrick McLaughlin is our chief editor.
