Long-wavelength optical networking brings singlemode fiber into data centers

May 1, 2017
Hyperscale data centers are the most prominent, but not the only data center facilities deploying singlemode fiber-optic cabling systems.

By Patrick McLaughlin

A prevailing truth about optical networking within data centers is that deploying long-wavelength optics and singlemode fiber-optic cabling is more costly than deploying short-wavelength optics and multimode fiber-optic cabling. However, over the past few years, the price of long-wave optics has fallen, bringing them close to parity with short-wave optics.

The world’s largest data centers, commonly referred to as hyperscales, have been pioneers in going singlemode. Jim Hayes, president of The Fiber Optic Association, explained that hyperscale facilities “are following the Open Compute Project open-source developments, have mostly upgraded to singlemode fiber, are already experimenting with 200/400-Gigabit and are looking for 1 Terabit.”

Open Compute Project

On the topic of the Open Compute Project (OCP), Anthony Haupt, data center solutions architect with CommScope, explained, “The Open Compute Project’s stated mission is to design and enable the delivery of the most efficient server, storage, and data center hardware design.”

The initiative was officially launched in 2011 by Facebook - which had been trying to design the world’s most-efficient data center since 2009 - along with Intel, Rackspace, Goldman Sachs and Andy Bechtolsheim. The OCP explains that these five founding members “hoped to create a movement in the hardware space that would bring about the same kind of creativity and collaboration we see in open source software.”

CommScope’s Haupt explained that out of the OCP have come “a number of whitebox designs to meet the needs of end users. The most important takeaway is that OCP changed the norm. In the past, the norm was that OEMs would push standards to users. Users would either embrace them or wait for the next standard to arrive. Now, end-users are demanding standards and products to meet their needs. It’s turned the paradigm on its head.

“Whether or not Open Compute dictates or influences media type is an important question that gets asked frequently,” Haupt continued. “Open Compute does not necessarily dictate media type. But it has lent a voice to large end users [and in doing so] exposed a lot of voids, which brought technologies to market. In part, this has contributed to unprecedented growth in cloud computing over the past five years - and with that, increases in transmission speeds.”

For example, he pointed out, five years ago the industry was talking about 40-Gbit/sec; last year, 100-Gbit/sec was implemented in real-life networks. Echoing Hayes of the FOA, Haupt added, “400G can’t come fast enough” for some hyperscale data centers. “Beyond 400G we are seeing a variety of speeds pop up in standards bodies.”

Headwinds and tailwinds

Large data centers eagerly eyeing speeds of 200 or 400 Gbits/sec look closely at the practicality of implementing those speeds over short-wave/multimode versus long-wave/singlemode constructions. Haupt dissected these options, providing analysis of the headwinds and tailwinds for each media type in these environments.

“If you look at the IEEE standard 802.3bs [400-Gbit/sec], there is one protocol for multimode - 400GBase-SR16,” he began. “This poses two concerns. First is the 100-meter distance limitation, which does not suffice for a lot of end users, in particular hyperscales. The second concern is that it uses a 32-fiber MPO design. The embedded base of structured cabling is Base 8, 12 or 24. This poses serious challenges for the embedded base of products in the market today.”
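To make the fiber-count concern concrete, the minimal Python sketch below (an illustration only, not part of any standard or vendor tool) tallies how a 32-fiber 400GBase-SR16 link maps onto trunks built on Base-8, Base-12 or Base-24 connectivity. The lane and fiber counts follow the figures Haupt cites; the trunks_needed helper is a hypothetical name introduced here.

# Illustrative sketch: how a 32-fiber 400GBase-SR16 link maps onto
# common structured-cabling trunk bases.
import math

SR16_FIBERS = 16 * 2  # 16 lanes x (1 Tx + 1 Rx fiber) = 32 fibers

def trunks_needed(trunk_base: int, fibers_required: int = SR16_FIBERS):
    """Return (trunk count, stranded fibers) for one SR16 link."""
    count = math.ceil(fibers_required / trunk_base)
    stranded = count * trunk_base - fibers_required
    return count, stranded

for base in (8, 12, 24):
    count, stranded = trunks_needed(base)
    print(f"Base-{base:>2}: {count} trunks per 400G-SR16 link, "
          f"{stranded} fibers stranded")

# Even where the arithmetic works out (e.g. 4 x Base-8 = 32 fibers),
# the installed MPO-8/12/24 connectors still differ from the 32-fiber
# MPO interface the SR16 module presents, which is the compatibility
# concern Haupt describes.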

Those facts notwithstanding, short-wave/multimode 400G does enjoy some tailwinds, as Haupt explained: “We are seeing farther distance than had been anticipated for 100GBase-SR4. Originally that standard had a distance limitation of 70 meters over OM3 and 100 meters over OM4. Finisar announced a module that reaches 300 meters on OM3 and 400 meters on OM4. And we’re seeing optical transceiver manufacturers coming up with new technologies that are improving distance limitations. That will continue to evolve because of the large embedded base of multimode fiber today.”
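Those reach figures can serve as a rough planning aid. The short sketch below captures only the numbers quoted above (70/100 meters for standard 100GBase-SR4 over OM3/OM4, and the 300/400-meter extended-reach module Finisar announced); actual supported distances should be confirmed against the transceiver data sheet, and sr4_link_ok is a hypothetical helper introduced for illustration.

# Reach figures as quoted in the article (meters); verify against
# the transceiver vendor's data sheet before relying on them.
SR4_REACH_M = {
    ("standard", "OM3"): 70,
    ("standard", "OM4"): 100,
    ("extended", "OM3"): 300,   # e.g. the Finisar module cited above
    ("extended", "OM4"): 400,
}

def sr4_link_ok(length_m: float, fiber: str, optic: str = "standard") -> bool:
    """True if a planned multimode link fits within the quoted reach."""
    return length_m <= SR4_REACH_M[(optic, fiber)]

print(sr4_link_ok(150, "OM4"))              # False: beyond standard SR4
print(sr4_link_ok(150, "OM4", "extended"))  # True: within extended reach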

Likewise, long-wave/singlemode implementation of 400G faces tailwinds and headwinds. Tailwinds first: “The standards path offers more options depending on distance, including 200/400GBase-DR, FR, and LR. Also, the transceivers have more flexibility with the transmission method itself. Parallel and serial options exist. And the DR4 eight-fiber solution takes advantage of existing MPO-based 8-, 12-, and 24-fiber cabling that exists today.”
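To see how distance steers the choice among these options, here is a brief sketch. Only the SR16 100-meter limit and the fiber counts for SR16 (32 fibers) and DR4 (eight fibers) come from the discussion above; the nominal singlemode reaches used for DR4, FR and LR (500 meters, 2 km and 10 km) are the usual IEEE figures and are an assumption here, as is the candidates helper.

# Nominal 400G options discussed above. The singlemode reach values
# are assumed nominal IEEE figures; the SR16 reach and the SR16/DR4
# fiber counts come from the article itself.
PMD_OPTIONS = [
    # (name, fiber type, nominal reach in meters, fibers per link)
    ("400GBase-SR16", "multimode",  100,    32),
    ("400GBase-DR4",  "singlemode", 500,     8),
    ("400GBase-FR",   "singlemode", 2_000,   2),
    ("400GBase-LR",   "singlemode", 10_000,  2),
]

def candidates(length_m: float):
    """List the options whose nominal reach covers a planned link."""
    return [name for name, _fiber, reach, _fibers in PMD_OPTIONS
            if length_m <= reach]

print(candidates(80))    # all four options fit a short row-scale link
print(candidates(300))   # singlemode only: DR4, FR, LR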

As for headwinds facing long-wave/singlemode 200- and 400-Gbit transmission, Haupt explained, “Although there is no dispute that providers have brought down the price of singlemode optics substantially, they are still nominally to moderately costlier than multimode optics, depending on the end user’s buying power. If you’re a large buyer, you may get close to parity. But if you’re a smaller end user, that’s likely not possible.

“There’s also availability,” he pointed out. “Users need to build data centers more and more quickly. The availability of the latest singlemode transceivers can become a challenge with the latest, highest speeds.” Availability goes hand-in-hand with pricing: “If you’re not a hyperscale, your buying power is pitted against theirs,” Haupt explained.

Haupt outlined five fairly common characteristics among the enterprise data center facilities that have deployed singlemode fiber.

Square footage - Generally, Haupt said, “their single-floor space is greater than 80,000 square feet, and the spine runs exceed the distance limitations of multimode fiber.” That number is layout-dependent, but 80,000 square feet of single-floor space appears to be the tipping point; a rough worked example follows this list.

Required speeds - “About 95 percent of those deploying singlemode today will be deploying 100G within the next year,” he said. “They’re already looking toward the next speed of 200 or 400G.”

Applications drive tangible and significant revenue - In these facilities, Haupt observed, if the data center goes down, the business operation is materially impacted and the company’s bottom line is directly affected.

Long-term view of the data center - “The data center forms the foundation of what these businesses do,” Haupt said, adding that this characteristic dovetails with the previous one about the data center facility driving tangible revenue.

FOMO - The fear of missing out (FOMO) factor “is not possessed by all,” Haupt declared, “but some fall into this camp. They see that hyperscales have been extremely boisterous about driving down the cost of singlemode optics, and some customers do not want to be behind the curve.”
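To illustrate the square-footage tipping point mentioned above, the short sketch below works through the geometry under stated assumptions: a square single floor, a straight corner-to-corner spine run, and the roughly 100-meter reach of short-wave multimode optics. Real runs depend on the actual layout and pathway routing.

# Rough worked example of the 80,000-square-foot tipping point
# (illustrative geometry; not a rule from the article).
import math

FLOOR_SQFT = 80_000
SQFT_PER_SQM = 10.764

side_m = math.sqrt(FLOOR_SQFT / SQFT_PER_SQM)   # ~86 m for a square floor
diagonal_run_m = side_m * math.sqrt(2)          # ~122 m corner to corner

print(f"side ~{side_m:.0f} m, worst-case straight run ~{diagonal_run_m:.0f} m")

# Even before adding vertical drops and patching slack, a corner-to-corner
# spine run on a floor this size exceeds the ~100 m reach of short-wave
# multimode optics, which is where singlemode starts to win.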

Getting hands-on

The decision to use long-wavelength optics and singlemode fiber-optic cabling, or short-wavelength optics and multimode fiber-optic cabling, does not have to be an all-or-nothing consideration. “Many perceive the necessity to pick one side, and see this choice as binary and polarizing,” Haupt said. “It doesn’t need to be that way. The world is not going to be entirely singlemode tomorrow … There is still a good mix of both in the market, and still more multimode than singlemode today in the enterprise market. Singlemode is growing more rapidly than multimode, but multimode remains the majority. Optics shape the landscape, and it’s important to understand where they are today and where they’re going tomorrow.”

He also stressed the practical, hands-on work with the physical-layer cabling that will be essential for installers and users. Some find working with multimode to be easier than working with singlemode. “As a practical matter, the user has to look at needs, budget, and installation capabilities,” Haupt pointed out. “Often we get focused on the technology that’s in the box. We need to consider how we work with that technology once it comes out of the box,” he concluded.

To that end, Jim Hayes and the FOA have stayed on top of the goings-on. “We’ve been following the data center field very closely because several of our instructors are contracted to teach for some of the mega-center owners,” Hayes said. “The bottom line is, A) data centers are being upgraded too fast for standards bodies to keep up - 18- to 24-month cycles, and B) neither the cabling industry nor switch and server providers drive standards; large data center owners do.”

Patrick McLaughlin is our chief editor.
