Fibre Channel's need for speed with OM3 and OM4 optical connectivity

Nov. 20, 2017
Corning Incorporated’s Doug Coleman describes how OM3 and OM4 fiber-optic cabling enables Fibre Channel data rates up to 128 Gbits/sec.
As transmission rates increase, the capabilities of OM3 and OM4 fiber-optic systems help ensure a smooth migration.

By Doug Coleman, Corning Incorporated

Fibre Channel transport with laser-optimized 50/125-µm OM3/OM4 multimode fiber connectivity is the primary method of reliably linking servers to external data storage devices in enterprise data centers. The ongoing evolution of high-performance server and storage technologies drives the need for increased Fibre Channel data rates to link these devices reliably, maximize operating efficiencies, and enable low-cost value propositions. This article discusses the server and storage technologies that warrant higher Fibre Channel data rates, as well as the use of OM3/OM4 optical connectivity to support them.

The Fibre Channel Speedmap details the past, present and future of Fibre Channel. It was developed and is updated by the Fibre Channel Industry Association.

Fibre Channel—The need for speed

Fibre Channel’s deterministic data delivery, low latency and proven reliability have made it the leading transport technology for linking servers to external data storage. As server and storage technologies have progressed over time, Fibre Channel data rates have increased in tandem to support them.

Typical enterprise data centers today deploy servers with integrated multi-core processors that range from 4 to 12 cores. Each core typically provides about 2 GHz of processing capability, which translates into 8-24 GHz of total capability. In addition, servers now use Peripheral Component Interconnect Express 3 (PCIe3, 8G/lane) bus speeds, and PCIe4 (16G/lane) is fast approaching to complement the increased number of processor cores. This increased server processing necessitates higher Ethernet network data rate input/output (I/O), as well as increased Fibre Channel data rates (16- and 32-Gbit/sec Fibre Channel) into the server host bus adapters (HBAs) to access and deliver external data for server applications. The future server trend is toward still more processor cores, such that 50G/100G Ethernet network interface card (NIC) and 64-Gbit/sec Fibre Channel (HBA) interconnects will be required.
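As a rough illustration of the arithmetic behind this trend, the short Python sketch below compares aggregate processor capability and nominal PCIe slot throughput with common Fibre Channel HBA line rates. The core count, clock, lane count, and rate figures are illustrative assumptions, not measurements from the article.

# Back-of-the-envelope comparison (all figures assumed and illustrative):
# aggregate processor capability and nominal PCIe slot throughput versus
# common Fibre Channel HBA line rates.

CORES = 12                       # cores per processor (4-12 typical)
CLOCK_GHZ = 2.0                  # roughly 2 GHz per core
PCIE3_GT_PER_LANE = 8.0          # PCIe 3.0 raw rate per lane (GT/s)
PCIE4_GT_PER_LANE = 16.0         # PCIe 4.0 raw rate per lane (GT/s)
ENCODING = 128 / 130             # 128b/130b line coding overhead
LANES = 8                        # a typical x8 HBA slot

def pcie_gbps(gt_per_lane, lanes):
    """Approximate usable PCIe throughput in Gbit/s for a slot."""
    return gt_per_lane * lanes * ENCODING

print(f"Aggregate core capability: {CORES * CLOCK_GHZ:.0f} GHz")
print(f"PCIe 3.0 x{LANES}: ~{pcie_gbps(PCIE3_GT_PER_LANE, LANES):.0f} Gbit/s")
print(f"PCIe 4.0 x{LANES}: ~{pcie_gbps(PCIE4_GT_PER_LANE, LANES):.0f} Gbit/s")
# A PCIe 3.0 x8 slot (~63 Gbit/s) can feed a 32-GFC port (~28 Gbit/s) with room
# to spare, and PCIe 4.0 x8 (~126 Gbit/s) leaves headroom for 64-GFC and
# dual-port HBAs.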

Increased server processing requires higher Ethernet data rate I/O interconnects into the server NIC, as well as increased Fibre Channel data rates into the server HBAs to access and deliver external data for the server applications.

Advances in storage technology are increasing the need for higher Fibre Channel data rates as well. In particular, high-speed all-flash arrays (AFAs) are being embraced in the storage industry. AFAs provide substantially improved reliability, higher data density and durability, while reducing energy consumption and rack space. Compared to conventional hard disk drives (HDDs), AFAs significantly improve performance, accelerating data transactions with sub-millisecond latency to maximize input/output operations per second (IOPS) throughput.
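A back-of-the-envelope Python sketch of why sub-millisecond latency matters so much for IOPS: applying Little's Law with assumed queue-depth and latency values (the 5-ms and 0.5-ms figures below are illustrative, not from the article) shows the order-of-magnitude gap between HDD and AFA transaction rates.

# Rough illustration of why sub-millisecond flash latency matters for IOPS.
# Little's Law: sustainable IOPS ~= outstanding I/Os (queue depth) / latency.
# The queue depth and latency values are assumed for illustration only.

def iops(queue_depth, latency_ms):
    """Approximate IOPS sustainable at a given queue depth and average latency."""
    return queue_depth / (latency_ms / 1000.0)

for label, latency_ms in [("HDD, ~5 ms average service time", 5.0),
                          ("All-flash array, ~0.5 ms", 0.5)]:
    print(f"{label}: ~{iops(32, latency_ms):,.0f} IOPS at queue depth 32")
# ~6,400 IOPS versus ~64,000 IOPS at the same queue depth -- an order of
# magnitude more transactions per second, which quickly saturates slower
# Fibre Channel links.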

Brocade has demonstrated—as illustrated here—a 71-percent reduction in response time to access 8G flash storage when using 32G Fibre Channel compared to using 8G Fibre Channel. (Source: “Maximize the All-Flash Data Center with Brocade Gen 6 Fibre Channel,” Brocade, 2017)

Using 32G Fibre Channel (32G FC), Brocade has demonstrated a 71-percent reduction in response time to access 8G flash storage, compared to using 8G FC. By adopting flash, data centers achieve resource efficiencies that allow them to host more IT services and store more data well into the future. Flash storage deployment is robust, and AFAs are quickly replacing legacy HDD-based systems to become the primary enterprise storage solution.

Long-term tracking has shown that the 100-meter channel distance represents nearly 95 percent of deployed OM3 and 90 percent of deployed OM4 channel lengths. For the vast majority of users, a 100-meter channel distance is more than sufficient.

Multimode fiber connectivity distances

Ethernet and Fibre Channel transmission standards develop guidance based on specific criteria that include technical and commercial feasibility. A primary objective is to deliver economical solutions that meet distance objectives representative of deployed multimode fiber connectivity channel lengths. Corning has tracked and modeled multimode and singlemode fiber connectivity data center channel lengths over an extended period. Trends have shown that as Ethernet data rates have increased from 10 to 40 to 100G, and Fibre Channel data rates have increased from 8 to 16 to 32G, the 100-meter channel distance represents approximately 95 percent of deployed OM3 and 90 percent of deployed OM4 channel lengths. In other words, for the vast majority of data center users, a 100-meter channel distance is more than sufficient to meet their needs.

Fibre Channel—OM3 and OM4 connectivity

Fibre Channel transport is essentially point-to-point optical connectivity. OM3/OM4 multimode fiber connectivity continues as the leading optical media used in the data center for short-reach distances up to 100-150 meters. 16-GFC and 32-GFC networks using multimode optical fiber trunks are now being deployed. OM3/OM4 multimode fiber enables the use of vertical-cavity surface-emitting lasers (VCSELs), providing low-cost, well-matched optical connectivity and electronics solutions.
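For context, the Python sketch below collects nominal OM3/OM4 reach figures commonly cited for VCSEL-based Fibre Channel links. These numbers are indicative only; the applicable FC-PI specification and transceiver data sheets govern the actual supported distances.

# Nominal reach figures (metres) commonly cited for VCSEL-based Fibre Channel
# over OM3/OM4; indicative only -- consult the applicable FC-PI specification
# and transceiver data sheets for guaranteed distances.

NOMINAL_REACH_M = {
    #  speed        OM3   OM4
    "8GFC":        (150,  190),
    "16GFC":       (100,  125),
    "32GFC":       ( 70,  100),
    "128GFC":      ( 70,  100),   # parallel: four 32-GFC lanes
}

def within_reach(speed, fiber, channel_m):
    """Check whether a channel length fits the nominal OM3 or OM4 reach."""
    om3, om4 = NOMINAL_REACH_M[speed]
    return channel_m <= (om3 if fiber == "OM3" else om4)

print(within_reach("32GFC", "OM4", 100))   # True  -- covers most deployed channels
print(within_reach("32GFC", "OM3", 100))   # False -- OM3 nominally tops out near 70 m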

To date, Fibre Channel has used only small form-factor pluggable (SFP+) transceivers with a duplex LC connector interface to the storage area network (SAN) electronics (server HBAs, director switches, and storage). Factory-terminated MTP-connectorized trunks are commonly deployed from a central patching area in the main distribution area (MDA) to each area with servers, storage, and SAN directors. In the central patching area, MTP/LC modules break out the MTP connectors on the trunks into LC duplex ports. LC duplex jumpers then provide the port-to-port connectivity required between any two devices, such as server to SAN director or storage to SAN director.
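A quick trunk-sizing sketch for this breakout approach, assuming 12-fiber MTP trunk increments (an assumption for illustration; actual trunk sizes and module layouts vary by vendor and product line):

# Quick trunk-sizing sketch for the MTP-to-LC breakout described above.
# The 12-fiber trunk increment is an assumption; actual trunk sizes and
# module layouts vary by vendor and product line.

FIBERS_PER_MTP_TRUNK = 12    # a common MTP trunk increment
FIBERS_PER_DUPLEX_PORT = 2   # one transmit + one receive fiber per SFP+ port

def duplex_ports(trunk_fibers):
    """LC duplex (SFP+) ports one trunk can serve after MTP/LC breakout."""
    return trunk_fibers // FIBERS_PER_DUPLEX_PORT

links_needed = 96            # e.g., 96 server-to-SAN-director links
trunks = -(-links_needed * FIBERS_PER_DUPLEX_PORT // FIBERS_PER_MTP_TRUNK)  # ceiling
print(f"One 12-fiber trunk serves {duplex_ports(FIBERS_PER_MTP_TRUNK)} duplex ports")
print(f"{links_needed} links require {trunks} x 12-fiber MTP trunks")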

It is advantageous to pre-cable the SAN director using high-density harness assemblies to reduce the amount of cable bulk and congestion at director cabinets. The harness LC legs can be staggered to match the port spacing of the individual line cards.

At the server cabinets and storage devices, MTP/LC modules are used to break out the MTP connector of the trunk into LC duplex ports for interconnection to the server and storage HBAs using LC duplex jumpers. At the SAN directors, however, it is common to use an MTP/LC harness instead of a module to break out the trunk MTP connector into LC duplex ports. These high-density harness assemblies reduce cable bulk and congestion at the director cabinet(s), and the harness LC legs can be staggered to match the port spacing of the individual line cards. This method of pre-cabling the SAN director optimizes cable management and reduces risk by moving day-to-day move, add, and change work away from the electronic equipment to the passive patching area in the MDA.

The Fibre Channel FC-PI-6 standard includes a 128-GFC data rate that uses a QSFP transceiver with an 8- or 12-fiber MTP interface. The 128-GFC data rate uses parallel optics transmission technology. Parallel optics differs from traditional duplex fiber-optic serial communication in that data is simultaneously transmitted and received over multiple optical fibers. 128-GFC parallel optics requires eight OM3 or OM4 fibers with 32-GFC transmission on each fiber: four fibers (4 fibers x 32 GFC/fiber) to transmit (Tx) and four fibers (4 fibers x 32 GFC/fiber) to receive (Rx).
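The lane arithmetic is straightforward; the short Python sketch below simply restates the 4 x 32-GFC structure described above.

# Lane arithmetic for the 128-GFC parallel-optics variant described above.

LANE_RATE_GFC = 32   # each fiber carries one 32-GFC stream
TX_LANES = 4         # transmit fibers
RX_LANES = 4         # receive fibers

aggregate_per_direction = LANE_RATE_GFC * TX_LANES   # 4 x 32 GFC = 128 GFC
fibers_used = TX_LANES + RX_LANES                    # 8 of the 8- or 12-fiber MTP
print(f"{aggregate_per_direction} GFC per direction over {fibers_used} fibers")
# A 12-fiber MTP leaves four fibers unused; an 8-fiber MTP is fully populated.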

128-Gbit/sec Fibre Channel parallel optics require eight OM3 or OM4 fibers with 32-Gbit/sec Fibre Channel transmission on each fiber.

The 128-GFC data rate is the first Fibre Channel-defined parallel optics transmission variant. FC-PI-7 activity is ongoing to include a 256-GFC parallel-optic variant in the future.

Initial 128-GFC deployments are expected for inter-switch links (ISLs) using MTP connectivity throughout the link. Compared to the traditional Fibre Channel architecture with duplex fiber connections at the electronics, parallel-optics connectivity will use 8-fiber MTP connectors with adapter panels in lieu of MTP/LC modules for interconnections.

A traditional Fibre Channel architecture uses duplex fiber connections at the electronics; the parallel-optic-based 128-Gbit/sec Fibre Channel will use 8-fiber MTP connectors with adapter panels in lieu of MTP/LC modules for interconnections.

Fibre Channel transmission has a need for speed. Higher Fibre Channel data rates (32/64/128 GFC) are emerging in response to advances in server and storage technologies. Fibre Channel deployment distances in enterprise data centers continue to focus on distances up to 100 meters. OM3/OM4 50/125-µm multimode optical fiber is well-positioned to provide reliable and low-cost connectivity solutions for legacy and future Fibre Channel data rates utilized in storage area networks.

Doug Coleman is manager of technology and standards and a distinguished associate with Corning Incorporated.
