The flattening of data center networks

Sept. 3, 2014
The ToR-EoR discussion is just one consideration in a changing landscape for the design of data center networks and cabling systems.

From the September 2014 issue of Cabling Installation & Maintenance Magazine

Over the past several months the concepts of top-of-rack (ToR) and end-of-row (EoR) data center network layouts have been the source of many seminar presentations, articles in this magazine and others, and technical papers in the cabling industry as well as the wider networking industry. Considerations of when and where to use each approach include management of network moves, adds and changes; cooling of the data center facility; network scalability; and, of course, cost, among others. But the question of whether to use ToR or EoR, while very close to the heart of professionals in the cabling industry, is one part of a broader shift taking place in networking. Specifically, data center networks are getting flatter in terms of their switching architectures.

In essence, the flattening of data center network architectures eliminates at least one "hop" that data makes when moving from one server to another server. The traditional switching architecture is commonly called "three-tier." Working backward from the servers, those switch tiers include access switches, aggregation switches, and core switches.
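
To make that hop count concrete, here is a minimal sketch in Python, assuming a hypothetical two-rack topology (the device names and the modeling approach are illustrative, not drawn from the article). It walks a frame from one server to another through the access, aggregation and core tiers and counts the switches it crosses along the way.

from collections import deque

# Hypothetical three-tier topology (illustrative only): two servers in
# different racks, reached through access, aggregation and core switch tiers.
links = {
    "server-A": {"access-1"},
    "server-B": {"access-2"},
    "access-1": {"server-A", "agg-1"},
    "access-2": {"server-B", "agg-2"},
    "agg-1":    {"access-1", "core-1"},
    "agg-2":    {"access-2", "core-1"},
    "core-1":   {"agg-1", "agg-2"},
}

def shortest_path(topology, src, dst):
    """Breadth-first search; returns the node sequence from src to dst."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

path = shortest_path(links, "server-A", "server-B")
switches = [node for node in path if not node.startswith("server")]
print(" -> ".join(path))
print(len(switches), "switches crossed")   # 5: access, agg, core, agg, access

Flattening to two tiers, as described later in this article, shortens that walk to three switches at most.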

In late July, Cisco turned on the first deployment of its Application Centric Infrastructure (ACI) fabric at its engineering data center facility in San Jose, CA. Shown here is the Nexus 9508 spine switch; several of these were combined with Nexus 9396 leaf switches, the Cisco Application Policy Infrastructure Controller (APIC), and application programming interfaces to achieve the ACI fabric deployment.

Three-tier vs. fat-tree

As Gary Bernstein, RCDD, DCDC, senior director of product management at Leviton (www.leviton.com/networksolutions), has pointed out in an article he authored for this magazine, "Three-tier architecture comes with a number of disadvantages, including higher latency and higher energy requirements. New solutions are needed to optimize performance." ("New switch architectures' impact on 40/100G data center migration," February 2014)

In a web-based seminar that is available for on-demand viewing, Siemon's (www.siemon.com) global director for data center solutions and services, Carrie Higbie, explains that this three-tier architecture produces a significant amount of what is called "north-south" traffic: essentially, traffic that flows "up and down" ("north and south") through the switch tiers before ultimately arriving at its destination server. Removing at least one layer of switches reduces the amount of this north-south traffic, enabling a more "east-west" path for the data between its source server and its destination server.

Higbie points out that the well-intentioned three-tier architecture has been shown to have flaws, including the latency and energy-use issues that Bernstein also discusses, as well as others. "Three-tier was supposed to be a big problem-solver," Higbie says, "but most data centers have found there is a lot of wasteful port spend." Among that inefficient spend is the necessity to establish inactive links, particularly between the access and aggregation switches. "As you set out primary and secondary networks, only one of these can be active at a time," she explains. "You're really using about 50 percent of the spend on ports that are just going to be there in case the [primary] link goes down." Furthermore, she says, when a primary link does go down and the inactive/backup link must be used, "the switch stops traffic until it can bring up the secondary link. Depending on the data center, that wait could run from a few seconds up to a minute. That's not acceptable."
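
Higbie's "about 50 percent" figure is simple arithmetic, as the sketch below illustrates. The port count and link speed are assumptions made for illustration, not figures from her presentation: with an active/standby uplink pair, only half of the purchased uplink capacity ever forwards traffic, whereas a fabric that keeps every uplink active uses all of it.

# Illustrative numbers only (not from Higbie's presentation): an access
# switch with redundant uplinks to primary and secondary aggregation switches.
uplinks_per_access_switch = 2          # one primary, one secondary
uplink_speed_gbps = 40

# Active/standby (classic three-tier): the secondary uplink sits idle until
# the primary fails, so only one uplink forwards traffic at a time.
active_standby_usable = uplink_speed_gbps * 1

# All-active fabric (e.g. leaf-spine with equal-cost multipath): every
# purchased uplink forwards traffic simultaneously.
all_active_usable = uplink_speed_gbps * uplinks_per_access_switch

share = active_standby_usable / all_active_usable
print(f"usable in active/standby: {active_standby_usable} of "
      f"{all_active_usable} Gbps purchased ({share:.0%})")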

Alternative architectures, known as network fabrics, are now regularly being deployed rather than three-tier architectures. In Bernstein's article as well as Higbie's presentation, multiple fabric types are described and discussed, including full-mesh, interconnected mesh, centralized and virtualized switch. But the fabric that appears to be leading the market race is the fat-tree, which is also commonly called leaf-spine. Bernstein explains, "Fat-tree architecture features multiple connections between interconnection switches (spine switches) and access switches (leaf switches) to support high-performance computer clustering. In addition to flattening and scaling out Layer 2 networks at the edge, fat-tree architecture also creates a non-blocking, low-latency fabric. This type of switch architecture is typically implemented in large data centers."
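
The following is a short sketch of the fat-tree idea Bernstein describes; the switch counts are illustrative assumptions, not taken from his article. It builds a fabric in which every leaf switch connects to every spine switch, then confirms that any two leaves share a spine, so server-to-server traffic never crosses more than a leaf, a spine and a leaf.

from itertools import combinations, product

# Illustrative leaf-spine fabric: every leaf switch links to every spine switch.
spines = ["spine-1", "spine-2"]
leaves = ["leaf-1", "leaf-2", "leaf-3", "leaf-4"]

links = {switch: set() for switch in spines + leaves}
for leaf, spine in product(leaves, spines):
    links[leaf].add(spine)
    links[spine].add(leaf)

# Any two leaves share at least one spine, so traffic between servers on
# different leaves crosses at most three switches (leaf -> spine -> leaf),
# compared with five in the three-tier walk shown earlier.
for a, b in combinations(leaves, 2):
    assert links[a] & links[b], "leaves share no spine; fabric is not full-mesh"
print("every pair of leaves reaches each other through a shared spine")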

In a fat-tree architecture, the volume of north-south traffic flow is significantly reduced compared to what takes place in three-tier architectures. Fat-tree achieves more-efficient east-west, server-to-server communication. Illustration source: ANSI/TIA-942-A-1

Ties to virtualization

The architecture comprises two layers of switching: access switches, which connect to servers, and interconnection switches, which connect to the access switches. Within a TIA-942-A-based data center arrangement, servers reside in the equipment distribution area (EDA), while access switches can reside either in the horizontal distribution area (HDA) for end-of-row setups or in the EDA for top-of-rack setups. Interconnection switches reside in the main distribution area (MDA), or potentially in the intermediate distribution area (IDA) when an IDA exists.
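
As a quick reference, those placements can be summarized in a small mapping; the dictionary form below is only an illustrative sketch of the functional areas as the text above describes them.

# Where the two switching layers live in a TIA-942-A layout, per the text above.
placement = {
    "servers":                  "EDA (equipment distribution area)",
    "access switches (EoR)":    "HDA (horizontal distribution area)",
    "access switches (ToR)":    "EDA, in the server racks themselves",
    "interconnection switches": "MDA (main distribution area), or IDA if one exists",
}

for device, area in placement.items():
    print(f"{device:26} -> {area}")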

In Siemon's web seminar, Higbie points out that in many instances when a data center network makes the transition from three-tiered switching to a fat-tree fabric, "What used to be a switch is now a pass-through fiber connection." This, she reminds everyone, requires network administrators to "pay attention to link loss in channels. With low-loss connectors, you can increase the number of connections you have in these channels."
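
Higbie's reminder about channel loss lends itself to a worked example. The sketch below checks how many mated connector pairs fit inside an assumed insertion-loss budget; the budget, fiber attenuation, channel length and per-pair losses are illustrative assumptions (roughly in line with published 40GBASE-SR4 and connector figures), not numbers from her seminar, so substitute the values from the applicable standard and connector datasheets.

# Illustrative loss-budget check. All figures are assumptions for this sketch.
CHANNEL_BUDGET_DB = 1.5       # e.g. a 40GBASE-SR4-style channel budget over OM4
FIBER_LOSS_DB_PER_KM = 3.5    # assumed multimode attenuation at 850 nm
CHANNEL_LENGTH_M = 120        # assumed end-to-end channel length

def max_mated_pairs(loss_per_pair_db):
    """Mated connector pairs that fit in the budget left after fiber loss."""
    fiber_loss_db = FIBER_LOSS_DB_PER_KM * CHANNEL_LENGTH_M / 1000.0
    remaining_db = CHANNEL_BUDGET_DB - fiber_loss_db
    return int(remaining_db // loss_per_pair_db)

print("standard connectors, 0.75 dB per pair:", max_mated_pairs(0.75))  # 1 pair
print("low-loss connectors, 0.35 dB per pair:", max_mated_pairs(0.35))  # 3 pairs

With the same budget, the lower-loss connectors leave room for more mated pairs in the channel, which is the flexibility Higbie describes.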

The ability to efficiently achieve server-to-server communication is particularly important when a network employs virtualization. Higbie explains, "When you virtualize and have more server-to-server traffic, and you have SAN [storage area network]-to-SAN movement, approximately 80 percent of your traffic stays within the data center. You want to be sure you have communications that are very conducive to servers talking to other servers, VM [virtual machine] moves, without having to go through a number of hops." The flattening of data center networks and emergence of virtualization are inherently related.

Cisco's home cooking

Recently, networking colossus Cisco (www.cisco.com) announced that it had turned on the first deployment of its Application Centric Infrastructure (ACI) fabric at one of its own engineering facilities in San Jose, CA. ACI is Cisco's technology for achieving virtualization, and the deployment uses a spine-leaf (fat-tree) architecture. In a blog post and a video discussion, Cisco distinguished IT engineer Sidney Morgan explained the effort: "By using ACI fabric to simplify and flatten the data center network, we can reduce network operating costs as much as 55 percent and incident management roughly 20 percent."

He described the turning on of ACI as the company's "first major step toward adopting ACI fabric globally. For the deployment, we moved from a Layer 2/Layer 3 pod architecture to a spine-and-leaf architecture. In this design, every leaf switch connects to every spine switch in the fabric, helping to ensure that application nodes are at most only two hops from each other or from IP-based storage. Spine-leaf is optimal in mixed data center environments of hypervisors and physical servers so that traffic can move in an efficient east-west direction versus north-to-south.

"The deployment includes the Cisco Application Policy Infrastructure Controller (APIC), Nexus 9508 spine switches, Nexus 9396 leaf switches, and open northbound and southbound APIs for integration into many platforms for automation, orchestration, and communication with Layer 4 through 7 and virtual switching devices."

Morgan further explained that his team ran the 9508 switch in standalone mode for 90 days before converting it to fabric mode in late July. "We're migrating from a Layer 2/Layer 3 pod architecture to a flat spine-leaf architecture that fundamentally removes identity and location, and allows every node in this data center to communicate with every other node, improving overall utilization."

Many data centers had deployed a spine-leaf/fat-tree architecture before Cisco did so with its own equipment this summer, and many more are destined to do so in the future.

Patrick McLaughlin is our chief editor.
