The road beyond 200G
Throughput requirements for hyperscale data centers and cloud service providers have pushed to 400G. There is more than one way to reach such a goal, as we explain.
Ethernet speeds continue to climb to serve the computing needs of today’s most-demanding users, namely hyperscale data centers and service providers. According to the Ethernet Alliance, unique networking architectures within warehouse-scale data centers have driven multiple long- and short-wavelength optical solutions at 100, 200, and 400 Gbits/sec. The bandwidth demands of hyperscale data centers and service providers continue to grow exponentially, and along such similar trajectories that the line between the two often blurs. Nonetheless, Ethernet specifications developed within the Institute of Electrical and Electronics Engineers (IEEE) 802.3 Working Group enable optical transmission at multiple-hundred-gigabit-per-second speeds. In this article we’ll take a look at the road ahead for Ethernet at these speeds.
In December 2018 the Ethernet Alliance held its latest Higher Speed Networking Plugfest, for port data rates ranging from 25 to 400 Gbits/sec. Held at the University of New Hampshire InterOperability Laboratory (UNH-IOL), the weeklong event allowed equipment manufacturers, test and measurement professionals, and others to test and improve the interoperability of their solutions, the alliance explained when announcing the event. The alliance previously announced a successful Higher Speed Networking Plugfest in August 2018; the short duration between the two plugfests is indicative of the rate at which higher-speed equipment is being developed and refined, according to the alliance.
Dave Chalupsky, Ethernet Alliance board member and a network product architect for Intel Corporation, chairs the alliance’s plugfests. Leading up to the December event, he commented, “With so much technology development underway and transpiring on different timeframes, industry is more and more often demanding opportunities for trustworthy interoperability testing for their solutions. This is a role for which the Ethernet Alliance has grown globally respected. Ethernet is amid significant and historic growth, with so many new standards activities coming out over the last two years and still rolling out. Our plugfests allow Ethernet Alliance members to more rapidly iterate on product development and confidently deliver multi-vendor-interoperable products that their customers can rely on from day one.”
The December plugfest encompassed “technologies based on both recently completed and soon-to-be-ratified standards,” the alliance added. “With the recent completion and/or expected approval of standards such as IEEE 802.3bs and IEEE 802.3cd—as well as specifications such as the 100G Lambda MSA’s 400G-FR4, which is based on 100-Gbit/sec PAM4 signaling and designed for data center connectivity over links of up to 2 kilometers—meeting demand for greater Ethernet speeds while maintaining the technology’s legacy of proven interoperability is increasingly important.”
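The PAM4 signaling referenced above carries two bits in each transmitted symbol, which is what lets a lane double its bit rate without doubling the symbol rate. The following is a minimal illustrative sketch of that idea in Python—the Gray-coded level mapping and the example values are common conventions, not reference code from any standard:

```python
# PAM4 maps each pair of bits to one of four amplitude levels,
# so the symbol rate is half the line rate. Gray coding (adjacent
# levels differ by one bit) is the conventional mapping.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map an even-length bit sequence to PAM4 amplitude levels."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = pam4_encode([0, 0, 1, 1, 1, 0, 0, 1])
print(symbols)  # [-3, 1, 3, -1]

# Two bits per symbol halves the required symbol rate versus NRZ.
# A 50G lane's line rate (including FEC overhead) is 53.125 Gbits/sec:
line_rate_gbps = 53.125
symbol_rate_gbd = line_rate_gbps / 2
print(symbol_rate_gbd)  # 26.5625 GBd
```

Halving the symbol rate is what makes these speeds reachable over existing electrical and optical channels, at the cost of a smaller eye opening between levels—hence the forward error correction baked into these PHYs.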
Among the equipment tested during the plugfest were Ethernet physical layer transceivers (PHYs), network interface controllers (NICs), switches, test and measurement solutions, as well as optical and copper media, at speeds of 25, 50, 100, 200, and 400 Gbits/sec.
Cisco’s Nexus 3408-S is a 4-RU 400-Gbit Ethernet switch, announced in October 2018 and expected to be available in the first half of 2019. The company says the switch’s 8-slot chassis is optimized with compact aggregation to run as a leaf or spine.
High-speed standards from IEEE
The IEEE approved the 802.3bs standard in late 2017 and published the document in early 2018. It specifies 200- and 400-Gbit/sec Ethernet. At the time of its publication, the IEEE explained that the standard “addresses the growing diverse bandwidth requirements and cost considerations from network providers needed to meet the burgeoning high-bandwidth requirements driving a range of different applications areas, such as cloud-scale data centers, internet exchanges, colocation services, and broadband wireless infrastructure.”
Also at that time, the Ethernet Alliance’s chair, John D’Ambrosia, who also chairs the IEEE 802.3bs Task Force and is a senior principal engineer with Futurewei, commented, “When you consider the notable move to 25- to 50-Gbit/sec Ethernet for servers and 100-Gbit/sec Ethernet for networks, it has become clear through proactive engagement with industry that 200-Gbit/sec and 400-Gbit/sec Ethernet is needed to meet growing capacity demand for high-bandwidth services today and in the future. The publication of IEEE 802.3bs represents a nearly five-year endeavor to ensure Ethernet’s continuing support of the accelerating curve for high bandwidth that can support ongoing robust industry growth and expansion.”
Just as the December 2018 plugfest was set to get underway, the IEEE approved the 802.3cd standard. That standard specifies iterations of 50-, 100-, and 200-Gbit/sec Ethernet. Not long after the 802.3cd standard project was approved and work on the standard began, Siemon provided detail on the effort in its Standards Informant blog. Specifically, the company explained: “Server interconnects in the data center, which represent the highest number of equipment connections, require cost-effective solutions. Advances in cost-optimized single-lane solutions and higher-speed multi-lane transmission solutions warrant reevaluating the signaling technology for 50- and 100-Gbit/sec Ethernet. In addition, servers virtualizing more applications are driving additional bandwidth into the network and network uplinks need to progress to higher speeds to match server speeds. 200 Gbits/sec can support network infrastructure and oversubscription rates similar to 40 Gbits/sec and 100 Gbits/sec as servers migrate from 25 to 50 Gbits/sec, while also enabling data center fabric topology. This amendment [IEEE 802.3cd] will define 12 PHY specifications and management parameters for 50-, 100-, and 200-Gbit/sec operation over backplanes and twinaxial copper cables. These solutions will support optional Energy Efficient Ethernet (EEE).”
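The oversubscription point in the quote above is simple arithmetic: doubling both server speeds and uplink speeds keeps the same ratio of server-facing capacity to uplink capacity. A short Python sketch makes this concrete—the leaf-switch port counts used here are hypothetical examples, not figures from the standard:

```python
# Oversubscription ratio = total downlink (server-facing) capacity
# divided by total uplink capacity. Port counts below are assumed
# for illustration only.
def oversubscription(downlinks, down_gbps, uplinks, up_gbps):
    """Ratio of server-facing capacity to uplink capacity."""
    return (downlinks * down_gbps) / (uplinks * up_gbps)

# A hypothetical leaf with 25G servers and 100G uplinks ...
ratio_25g = oversubscription(48, 25, 4, 100)
# ... keeps the same ratio when servers move to 50G and uplinks to 200G.
ratio_50g = oversubscription(48, 50, 4, 200)
print(ratio_25g, ratio_50g)  # 3.0 3.0
```

This is why 200-Gbit/sec uplinks pair naturally with the 25-to-50-Gbit/sec server migration: the fabric design carries over unchanged.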
The Standards Informant blog identified the following PHY specifications and attachment interfaces that were under development as part of IEEE 802.3cd.
- 50GBase-CR: 50-Gbit/sec transmission over one lane (2 twinaxial pairs) of shielded twinaxial copper cables, with reach up to at least 3 meters.
- 50GBase-FR: 50-Gbit/sec serial transmission over one wavelength (2 fibers total) for operation over singlemode optical fiber cabling with reach up to at least 2 kilometers.
- 50GBase-KR: 50-Gbit/sec transmission over one lane of an electrical backplane, with a total insertion loss of less than 30 dB at 13.28125 GHz.
- 50GBase-LR: 50-Gbit/sec serial transmission over one wavelength (2 fibers total) for operation over singlemode optical fiber cabling with reach up to at least 10 km.
- 50GBase-SR: 50-Gbit/sec transmission over one lane (2 fibers total) for operation over multimode fiber-optic cabling with reach up to at least 100 m.
- 50 Gigabit Attachment Unit Interface (50GAUI-1): 50-Gbit/sec one-lane interface used for chip-to-chip or chip-to-module interconnections.
- 50 Gigabit Attachment Unit Interface (50GAUI-2, LAUI-2): 50-Gbit/sec two-lane interfaces used for chip-to-chip or chip-to-module interconnections.
- 100GBase-CR2: 100-Gbit/sec transmission over two lanes (4 twinaxial pairs) of shielded twinaxial copper cabling, with reach up to at least 3 m.
- 100GBase-DR: 100-Gbit/sec serial transmission over one wavelength (2 fibers total) for operation over singlemode optical fiber cabling with reach up to at least 500 m.
- 100GBase-KR2: 100-Gbit/sec transmission over two lanes of an electrical backplane, with a total insertion loss of less than 30 dB at 13.28125 GHz.
- 100GBase-SR2: 100-Gbit/sec transmission over two lanes (4 fibers total) for operation over multimode fiber-optic cabling with reach up to at least 100 m.
- 200GBase-CR4: 200-Gbit/sec transmission over four lanes (8 twinaxial pairs) of shielded twinaxial copper cabling, with reach up to at least 3 m.
- 200GBase-KR4: 200-Gbit/sec transmission over four lanes of an electrical backplane, with a total insertion loss of less than 30 dB at 13.28125 GHz.
- 200GBase-SR4: 200-Gbit/sec transmission over four lanes (8 fibers total) for operation over multimode fiber-optic cabling with reach up to at least 100 m.
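The PHY names in the list above follow a consistent pattern: the trailing digit is the lane count, and every total rate is built from 50-Gbit/sec lanes. A small Python lookup, populated only with entries from the list above, makes the pattern explicit:

```python
# Selected IEEE 802.3cd PHYs from the list above. The trailing digit in
# each name is the lane count; dividing total rate by lanes shows that
# every PHY here is built from 50-Gbit/sec lanes.
PHYS = {
    "50GBase-CR":   {"gbps": 50,  "lanes": 1, "medium": "twinax copper"},
    "50GBase-SR":   {"gbps": 50,  "lanes": 1, "medium": "multimode fiber"},
    "100GBase-CR2": {"gbps": 100, "lanes": 2, "medium": "twinax copper"},
    "100GBase-SR2": {"gbps": 100, "lanes": 2, "medium": "multimode fiber"},
    "200GBase-CR4": {"gbps": 200, "lanes": 4, "medium": "twinax copper"},
    "200GBase-SR4": {"gbps": 200, "lanes": 4, "medium": "multimode fiber"},
}

for name, phy in PHYS.items():
    per_lane = phy["gbps"] // phy["lanes"]
    print(f"{name}: {phy['lanes']} x {per_lane}G over {phy['medium']}")
```

This 50G-per-lane building block is the thread connecting the whole family: the same lane, serialized, paired, or quadrupled, yields 50-, 100-, and 200-Gbit/sec ports.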
Recently, in his capacity as Ethernet Alliance chair, D’Ambrosia authored an article titled “Ethernet in 2019.” Within that article he named several efforts underway within the IEEE 802.3 Ethernet Working Group. The following is taken directly from D’Ambrosia’s article.
- IEEE P802.3cn 50-, 200-, and 400-Gbit/sec over Singlemode Fiber: This fast-tracked project will leverage the PAM4 technology developed to support links at these rates currently up to 10 km and build upon them to expand their reach to 40 km.
- IEEE P802.3ct 100- and 400-Gbit/sec over DWDM Systems: This effort will see Ethernet evolve to support reaches up to 80 km over a DWDM (dense wavelength division multiplexing) system. While the main drivers for this effort have been multi-service operators (MSOs) and data center interconnect (DCI), it is easy to see how these solutions could be used for future mobile aggregation and core backhaul.
D’Ambrosia’s “Ethernet in 2019” article focused on the expectation that mobile applications, including the development and implementation of 5G mobile networks, will drive further development of Ethernet technologies and capabilities.
What’s driving speeds
In an interview with Cabling Installation & Maintenance, also in his capacity as Ethernet Alliance Chair, D’Ambrosia observed that regarding higher-speed Ethernet capabilities and their usefulness to different users, “The toolbox is being refreshed in a number of different ways. We have efforts to do 100G PAM-4 on four separate lambdas—at 500 meters but also at 2-plus kilometers. Another task force is working on 100G electrical. There’s going to be an 802.3ct Task Force working on 80-kilometer solutions for 100- and 400-Gbit DWDM.”
In that interview D’Ambrosia also explained that some forthcoming applications—including wireless applications—are likely to place significant demand on network throughput capacity, driving speeds higher and also demanding alternate paths to existing higher speeds. A bandwidth-assessment project currently underway within the IEEE is an effort to foresee some of these demands. The bandwidth-assessment exercise “is fascinating,” D’Ambrosia said. “We don’t know where all the applications will be coming from,” he noted. “But one area is automotive. They have concerns about wireless, but if they solve those concerns, the opportunities are huge. Right now we can’t see how large these challenges are going to be … [however] … What happens when we have 5G and connected cars? Connected cars that are autonomous? Once you pull at the strings of conversations like these, you realize applications are not slowing down anytime soon.”
Turning his attention to the data center, D’Ambrosia explained that he believes more in evolution than revolution. “The IEEE is talking about two lambdas across multimode optical fibers at 50G. For a moment, let’s forget about the technical feasibility today. If I am doing 50G over singlemode today, I don’t think we should be dismissing the possibility of 100G on multimode. We shouldn’t dismiss things just because we don’t think they’re possible today.
“We know there are markets for both singlemode and multimode,” he said. “One side will say, ‘My solution is better,’ and have a story to tell,” he acknowledged. But, he insisted, “The strength of Ethernet is its breadth. There may be more than one way to do something. That’s not a bad thing.”
Not a bad thing at all, but certainly enough to keep debates raging within the realm of professionals who specify, design, install, test and manage cabling systems for some of the world’s most demanding networks. We have chronicled some of the “My-solution-is-better” arguments in these pages and on our website cablinginstall.com. We’ll continue to do so, in an effort to be one agent that allows these professionals—you—to make informed decisions, after wading through multiple options.
Cisco’s 400G switches
Significantly for the professionals working within the realm of extremely high transmission speeds, October 31, 2018 marked Cisco’s announcement of its 400-Gigabit Ethernet switches. At the time, the company said it would begin early field trials with customers in December 2018, and anticipated the availability of 400G Nexus switches in the first half of calendar year 2019.
In its announcement of the switches, Cisco said, “Bandwidth and scale … are two of the biggest challenges facing data center customers today. The new 400G switches … provide four times the bandwidth and four times the scale of existing switches without using four times the power. Since the new switches are built on Cisco’s Nexus portfolio, customers can choose to deploy 400G in the way that best meets their needs. They can be used on their own or in combination with Cisco’s security, automation, visibility and analytics software.”
Roland Acra, senior vice president and general manager of Cisco’s data center business group, stated at the time, “Our 400G switches do more than just bring a new level of speed to customers. They support the delivery of the signature capabilities that customers expect for their modern data-driven workloads and cloud environments. Superfast policy, segmentation and whitelisting. Real-time visibility into packets, flows and events. Smart buffering for big data and machine learning workloads. The ability to prioritize critical traffic on-demand. These are the things that Cisco has delivered to our customers across multiple generations of Nexus switches. And we are doing so again with our new 400G portfolio.”
Also at the time the switches were announced, Cisco’s vice president for product management of the Nexus and ACI product line, Thomas Scheibe, stated, “Cisco is not in the business of building just bigger, faster networks. Our core mission is to build intent-based networks capable of capturing business intent and activating and assuring it network-wide. We enable our customers to make sure their business intent is actually delivered in the network.
“Cisco is working actively with the larger industry ecosystem and standards organizations to drive standardization for and interoperability of the 400G technology building blocks.”
Scheibe added that Cisco’s 400G switches were part of the Ethernet Alliance’s plugfest. He also said, “Cisco is working to take BiDirectional (BiDi) optical networking to the 400-Gbit Ethernet realm. The hallmark of optical BiDi transceivers is the ability to re-use existing fiber cabling infrastructure when upgrading to higher-speed interfaces. It’s the old ‘do-more-with-the-same’ principle that has made BiDi technology so popular in the optical networking market today.”
He summed up by saying, “The Nexus switches deliver choice and flexibility for webscale customers, for ACI [application centric infrastructure] leaf and spine architectures, and for edge data center deployments. And, they are designed for the next frontier of cloud networking—standards-based, with investment protection for both backward and forward compatibility.”
In mid-February, Google’s chief executive officer Sundar Pichai announced the company plans to invest more than $13 billion throughout 2019 in data centers and offices across the U.S. “Our new data center investments, in particular, will enhance our ability to provide the fastest and most-reliable services for all our users and customers,” he said.
Widely recognized as one of the handful of cloudscale data center operators constantly pushing the need for bandwidth and speed forward, Google is like its hyperscale peers in that it is looking at accelerating rather than slowing down. As the Ethernet Alliance’s D’Ambrosia pointed out, there’s often more than one way to accomplish an objective—even when that objective tops out at 400 Gbits/sec.