The challenges of migrating optical networks to 400G

It could be a bumpy transition for early adopters.

Early adopters going all-in for 400G face a bumpy transition; waiting for lower-power next-generation technology could put an ace up your sleeve.

By Koby Reshef

The telecom industry is eagerly awaiting the benefits that 400G capacity will bring to existing and future fiber network deployments. Nearly every business is leveraging the latest digital offerings to remain competitive in its respective market, which exponentially increases the amount of data transported across the network. 400G is certainly the answer to these increasing data demands, at least for the present, but there will be an initial struggle on the network backbone in supporting these initiatives and fulfilling the promise of higher-capacity transport.

400G is not a natural extension of existing network infrastructure; it imposes new restrictions and requires a redesign of the optical layer. Supporting 40G, 100G and 200G capacity did not require special changes, because those signals fit within the standard 50-GHz channel spacing. A 400G signal carried on a single wavelength, however, runs at a much higher baud rate and is simply too spectrally wide to pass through 50-GHz filters and fixed-grid ROADMs (reconfigurable optical add-drop multiplexers). A new “runway” is required to reap the benefits of this new technology.
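The spectral-width constraint can be illustrated with rough numbers. The sketch below uses assumed, illustrative values (a ~64-Gbaud symbol rate for single-wavelength 400G and a 15% pulse-shaping roll-off), not any vendor's specifications:

```python
# Illustrative estimate of why a single-wavelength 400G signal
# cannot fit a legacy 50-GHz fixed-grid channel.
# All values are rough assumptions for illustration only.

def occupied_bandwidth_ghz(baud_rate_gbaud: float, rolloff: float) -> float:
    """Approximate occupied optical bandwidth of a Nyquist-shaped signal."""
    return baud_rate_gbaud * (1 + rolloff)

baud = 64.0      # ~64 Gbaud assumed for 400G on one wavelength
rolloff = 0.15   # assumed root-raised-cosine roll-off factor

bw = occupied_bandwidth_ghz(baud, rolloff)
print(f"Occupied bandwidth: {bw:.1f} GHz")        # ~73.6 GHz
print(f"Fits 50-GHz fixed grid: {bw <= 50.0}")    # False
print(f"Fits 75-GHz flex-grid slot: {bw <= 75.0}")  # True
```

Under these assumptions the signal occupies roughly 74 GHz, which is why 400G deployments lean on flexible-grid ROADMs with wider channel slots rather than the 50-GHz fixed grid.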

The cost associated with a complete network rebuild to upgrade to 400G is substantial, and will often deter carriers and enterprises from upgrading their existing infrastructure. Companies must be prepared for a number of initial challenges, many of which will be alleviated over the next couple of years.

Current solutions, technologies costly

The foundation of optical networking infrastructure includes coherent optical transceivers and digital signal processing (DSP), mux/demux, ROADMs and optical amplifiers, all of which must be able to support 400G capacity. Today’s ecosystem of 400G transceivers and DSPs is power-hungry, does not support the latest MSA (multi-source agreement) standards, and is developed uniquely by different vendors using proprietary technology. The next-generation DSP will be based on low-power, low-cost 7-nm technology, and will support standard forward error correction (FEC) modes for interoperability. Next-generation optical transceivers will be pluggable small-form-factor modules (such as QSFP-DD and OSFP) available from multiple vendors, which will lead to mass deployment and cost reduction.

The network management system (NMS) controlling the ROADM devices, which are tasked with blocking, passing or redirecting wavelengths across the network, needs to accommodate the 400G bandwidth restrictions and add complex flex-grid spectrum management. For large networks, ripping and replacing can be costly.

Companies also will face significantly higher amplification requirements to meet the link budget, which accounts for all the gains and losses from the transmitter through the fiber to the receiver. Mux/demux with 50-GHz channel spacing is standard in 40/100/200G networks, but is not compatible with 400G.
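The arithmetic behind a link budget is simple accounting: launch power minus accumulated losses must stay above the receiver's sensitivity, and any shortfall must be made up with amplifier gain. The sketch below uses illustrative assumed figures (launch power, span length, loss values, receiver sensitivity), not measured or vendor-specified numbers:

```python
# Simplified point-to-point link budget. Transmitter power minus
# losses must exceed receiver sensitivity, or amplification is
# required. All numbers are illustrative assumptions.

tx_power_dbm = 0.0           # assumed launch power
fiber_loss_db_per_km = 0.25  # typical attenuation near 1550 nm
span_km = 80.0               # assumed span length
connector_loss_db = 1.0      # assumed total connector/splice loss
mux_demux_loss_db = 6.0      # assumed mux + demux insertion loss
rx_sensitivity_dbm = -18.0   # assumed receiver sensitivity

total_loss_db = (fiber_loss_db_per_km * span_km
                 + connector_loss_db + mux_demux_loss_db)
rx_power_dbm = tx_power_dbm - total_loss_db
margin_db = rx_power_dbm - rx_sensitivity_dbm

print(f"Total loss: {total_loss_db:.1f} dB")       # 27.0 dB
print(f"Received power: {rx_power_dbm:.1f} dBm")   # -27.0 dBm
print(f"Margin: {margin_db:.1f} dB")               # -9.0 dB
```

With these assumed numbers the margin is negative, so the link needs at least ~9 dB of amplifier gain to close; higher-baud-rate 400G signals typically demand better optical signal-to-noise ratio at the receiver, which is what drives up the amplification requirement.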

Most importantly, in the market’s current state, upgrading to 400G-capable products means sacrificing the flexibility many network operators are accustomed to in exchange for the additional capacity. Prior to 400G, optical networking original equipment manufacturers (OEMs) designed products to reduce rack space and optimize power consumption, keeping opex and capex low.

The current generation of 400G products does not offer modularity and power-saving capabilities, mainly due to the bulky non-pluggable optical transceivers. As a result, telecommunications and data center racks will become much larger and likely require an upgrade to better products in the coming years that support similar pay-as-you-grow architecture.

Initial use cases will be small scale

Despite these caveats for early adopters, there are plenty of opportunities for 400G in greenfield or smaller-scale deployments. Greenfield initiatives don’t carry the baggage of existing network equipment that must be decommissioned and rebuilt, and smaller networks, such as short-haul and some metro, simply have less equipment to replace (consider how many ROADMs and optical amplifiers would need to be replaced in a long-haul deployment to realize the investment).

Data center interconnect (DCI) is the most likely candidate for early 400G adoption, because additional capacity is a primary need and DCI links are often deployed over shorter distances than other applications.

Hyperscalers, as they are commonly called, such as Google, Amazon and Microsoft, are the companies most likely to be building new data centers and upgrading existing facilities to 400G. Companies with that profile have greater incentive to jump on the 400G train as early as possible, to take advantage of the stronger service level agreements (SLAs) they can offer the “residents” of their data centers.

Although the 400G revolution is upon us, it will only reach mass production when the power and cost of the new 7-nm DSPs and 64-Gbaud optics are ready, which is expected in Q3 or Q4 of 2020. Until then, early adopters can expect high-power solutions, less flexibility, larger footprints, a lack of standardization, and higher cost as a result. It’s important for organizations to consider their needs for 400G carefully before undergoing what could be a bumpy transition.

Koby Reshef is chief executive officer of PacketLight Networks.
