An addendum to the recently completed TIA-942-A standard will address cabling requirements for data center fabrics that serve to flatten switching architectures.
by Patrick McLaughlin
Designers of structured cabling systems for data center environments work with complicated and challenging network architectures, and are tasked with ensuring these networks' physical layers can support the heavy data flow in these environments. Over the past few years, new architectures have been introduced as solutions to the changing and growing nature of data center networking. As these architectures grow in popularity, and as more of them are developed, providing the physical-layer cabling infrastructure for them becomes increasingly challenging.
Traditional data center network interconnect topologies include three tiers of switching. Servers connect directly to access switches. At the next tier, each access switch connects to an aggregation switch. And at the third tier, aggregation switches connect to one or more (usually more) core switches. Under this architecture, a signal traveling from one access switch to another access switch may have to pass through three other switches en route (see "Cloud computing, virtualization and cabling infrastructure," June 2012).
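The switch-to-switch path described above can be sketched as a small model. This is an illustrative sketch only; the function name and the simplifying assumption (that traffic climbs to the lowest tier the two access switches share) are ours, not from the article.

```python
# Minimal sketch (hypothetical): intermediate switches traversed between
# two distinct access switches in a three-tier topology.

def three_tier_hops(same_aggregation: bool) -> int:
    """Count the switches a signal passes through between two access switches.

    If both access switches hang off the same aggregation switch, only that
    one switch sits between them. Otherwise the path climbs through an
    aggregation switch, a core switch, and back down through another
    aggregation switch: three intermediate switches.
    """
    return 1 if same_aggregation else 3

print(three_tier_hops(same_aggregation=False))  # 3 intermediate switches
```

This three-switch worst case is the "three other switches en route" the article refers to, and it is the latency penalty that flattened fabrics aim to remove.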
This reality has become increasingly challenging for data centers that employ server virtualization. Because virtualization often requires communication among multiple servers in order to carry out a single computing function, the amount of traffic traversing aggregation and core switches increases dramatically in virtualized environments and makes evident the shortcomings of the three-tier architecture, particularly in terms of its ability to scale up to larger networks.
Data center fabrics
In the past few years, alternative topologies have been developed and deployed to, in essence, flatten the architecture of data center networks through architectures that also have the ability to scale up efficiently. These topologies are widely referred to as fabrics. One such fabric, called fat tree, was described in detail in a paper authored by Mohammad Al-Fares, Alexander Loukissas and Amin Vahdat, of the University of California San Diego's computer science and engineering department.
The paper, entitled "A scalable, commodity data center network architecture," uses the term "fat tree" based on the concept that the traditional three-tiered architecture can also be described in terms of a tree, with "a core tier in the root of the tree, an aggregation tier in the middle and an edge tier at the leaves of the tree." The fat-tree architecture is also sometimes referred to as leaf-and-spine. In it, each access (or "leaf") switch connects to every aggregation (or "spine") switch.
The fat-tree topology often is configured so that a group of access and aggregation switches is referred to as a pod.
Fat tree is just one of several fabrics that have been developed in recent years. A full-mesh fabric is one in which every switch is interconnected with every other switch. An interconnected mesh fabric comprises pods that incorporate the full-mesh architecture, as well as another tier of switches to which each pod is connected. Two other fabrics are called centralized switch and virtual switch. The centralized switch fabric calls for all servers to be connected to all switches. The virtual switch fabric is similar, but the centralized switch is actually a collection of interconnected switches that act as a large, single switch.
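The fabrics described above differ sharply in how much switch-to-switch cabling they demand, which is the cabling designer's concern. The following sketch compares two of them; the function names and example switch counts are hypothetical, chosen only to illustrate the arithmetic.

```python
# Minimal sketch (hypothetical switch counts): switch-to-switch cable runs
# required by two of the fabrics described above.

def leaf_spine_links(leaves: int, spines: int) -> int:
    # In a fat-tree (leaf-and-spine) fabric, every leaf (access) switch
    # connects to every spine (aggregation) switch.
    return leaves * spines

def full_mesh_links(switches: int) -> int:
    # In a full-mesh fabric, every switch interconnects with every other
    # switch: n * (n - 1) / 2 unique links.
    return switches * (switches - 1) // 2

print(leaf_spine_links(16, 4))  # 64 cable runs
print(full_mesh_links(20))      # 190 cable runs
```

The quadratic growth of the full mesh versus the linear-per-leaf growth of leaf-and-spine is one reason these fabrics demand far more switch-to-switch cabling, and far more planning, than the classic three-tier design.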
Each of these architectures has had its proponents and critics as it has been developed and deployed in data center facilities. One example of a branded system seeking to define a new architecture is Juniper Networks' QFabric architecture. In some of the literature Juniper (www.juniper.net) has produced to increase awareness of its QFabric technology, the company has defined the problem of multi-tiered network architectures in the data center and has positioned QFabric as an answer to those problems.
In one document, Juniper stated, "The traditional multitiered data 'tree' architecture that still dominates data center network design imposes a series of performance penalties that hinder enterprises from realizing the full potential of their investments in virtualization and other technologies.
"This is because the legacy tree architecture is simply too complex to manage and too topologically diverse to optimally leverage these innovations. In other words, the data center network has too many devices deployed in a very stratified, hierarchical manner—an architecture that prevents businesses from taking advantage of the operational efficiencies and savings afforded by virtualization."
The document later adds, "Since the most efficient way for resources to interact is for them to be no more than a single hop away from each other, the ideal next-generation network architecture would directly connect all processing and storage elements in a flat, any-to-any connectivity-based network fabric. Optimized for performance and simplicity, this next-generation architecture would address the latency requirements of today's applications, eliminate complexity of legacy hierarchical architectures, scale elegantly, and support virtualization, cloud computing convergence, and other requirements for the next-generation data center."
Juniper used the conditional "would" to describe such an architecture, but believes it has developed and offers just such a solution in QFabric. The single-tier network "operates, and is managed, like a single, logical, distributed switch," the company says.
The need for structured cabling
In an interview, Kishore Inampudi, Juniper's senior product marketing manager for enterprise systems, and Masum Mir, director of product management for the company's data center business unit, reflected on the QFabric system and the extent to which it requires formal, structured cabling systems. Does the growing trend toward "flattened" architectures in data centers give credence to the idea that point-to-point cabling is sufficient, and that a more-tiered, structured approach to cabling infrastructure is becoming less essential?
Juniper's Inampudi and Mir said that standards-based structured cabling is in fact a key to the long-term success of architectures like QFabric. Inampudi explained that the current upgrade trend in access-switch connection speeds from 1 to 10 Gbits/sec "requires higher speed in the core," which is driving a need to "lay out the physical infrastructure to support 40G and 100G in the core."
Mir described a fabric like Juniper's as "a superhighway in the data center." With such a fabric, he said, "Data flow in the data center is not a restraint anymore." Speeds of 10 Gbits/sec will be a starting point, but Mir furthered Inampudi's point about higher speeds and the physical layer that will support them. "The demand for massive data movement in data centers is not slowing down anytime soon," he said. "Cabling infrastructure is built for 10 to 15 years, so what is being installed today will have to last that long," supporting the applications and speeds used throughout that span of time, he explained.
Mir opined that fiber-optic cabling is the medium that will bring data centers into that future. "We're talking about connectivity models where bandwidth might go through iterations—10G today, later 40G and later than that 100G," he noted. In these scenarios, "Connectivity is anywhere-to-anywhere, and must have structure." In addition to being a sound technology path, structured cabling makes sense economically as well, he added. "It allows us to invest as we need, such as when we move from 10G on the rack to 40 or 100G." The QFabric solution, he noted, is built with 40G capability. "We have been working with major cable and infrastructure vendors to ensure that structured cabling is capable of supporting not just the technology we have today, but the technology coming a few years down the road as well. We need a massive amount of bandwidth at the physical layer."
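The 10G-to-40G-to-100G migration Mir describes has direct consequences for fiber counts in structured cabling. The sketch below assumes the parallel-optic lane counts defined in IEEE 802.3ba for multimode fiber (40GBASE-SR4 uses four 10G lanes per direction; 100GBASE-SR10 uses ten); the function name is ours, and other PMDs with different lane counts exist.

```python
# Minimal sketch: multimode fibers needed per duplex link at each speed,
# assuming the IEEE 802.3ba parallel-optic variants 40GBASE-SR4 (4 lanes
# per direction) and 100GBASE-SR10 (10 lanes per direction).

LANES_PER_DIRECTION = {10: 1, 40: 4, 100: 10}

def fibers_per_link(speed_gbps: int) -> int:
    # One fiber per lane in each direction (transmit + receive).
    return 2 * LANES_PER_DIRECTION[speed_gbps]

for speed in (10, 40, 100):
    print(f"{speed}G link: {fibers_per_link(speed)} fibers")
```

This lane multiplication is why "invest as we need" upgrades depend on cabling and connectivity (for example, array connectors and trunk fiber counts) being planned for the higher speeds from the start.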
The physical layer that provides that bandwidth has to be reliable in addition to being robust. Structure, compatibility and standards come into play, Mir explained. "Standardization is key," he said. "We want to ensure that with structured cabling, the alignment of connectors is easily understood. A single polarization mismatch can create a disaster. We've seen that with our carrier customers. We have to ensure that the entire ecosystem is talking, in order to make the system as robust as possible."
Inampudi concurred: "Cabling is complex, and any physical-link failure represents a nightmare for customers. The standardization process is giving users more planning tools."
TIA standard addendum
Those users will have even more planning tools at their disposal when the Telecommunications Industry Association's (TIA) TR-42 Telecommunications Cabling Systems Committee completes the work it is carrying out on the first addendum to the TIA-942-A Telecommunications Infrastructure Standard for Data Centers. That standard was recently completed, and likely will be available for purchase in July or August. Even before it is available, work has begun on its first addendum, which will provide cabling guidelines for data center fabrics.
The document originated in the TR-42.1 Subcommittee on Commercial Building Telecommunications Cabling—the same subcommittee that produced the 942-A standard. Jonathan Jew, president of J&M Consultants (www.j-and-m.com) and vice chair of the 42.1 subcommittee, is the author and editor of the addendum. "Our message is that designers should consider the potential impact of data center switch fabrics in their cabling system designs," he said. "They will require much more, and higher-bandwidth, switch-to-switch cabling than the classic three-tier architecture."
In line with the comments from Juniper about the importance of structured cabling systems in these environments, Jew noted, "There are several implementations of data center switch fabrics, but all of them can be supported using the structured cabling scheme specified in ANSI/TIA-942-A."
In its current draft form the addendum includes drawings of the fat-tree, full-mesh, interconnected-meshes, centralized-switch and virtual-switch fabrics, as well as drawings that show examples of how structured cabling can be used to support these architectures.
To this point the addendum has traveled smoothly through the consensus process by which the TIA establishes standards, addenda and telecommunications systems bulletins. It is possible that Addendum 1 to TIA-942-A will be published in 2012.
Data center networks are dynamic environments that are growing in complexity, driven by realities such as virtualization, the efficiency of building a fabric supporting multiple computing types, and the ever-increasing capacity required to accommodate these functions. As the logical and physical layouts of data center networks evolve, it is imperative that the cabling infrastructures at these networks' physical layers remain flexible enough to adapt to changes, while they provide the throughput capacity needed for current- and next-generation networking speeds.
Organizations that develop standards for networking and cabling systems continue to set the pace for these technologies and capabilities.
Patrick McLaughlin is our chief editor.
Standard addendum on cabling for data center fabrics proceeds forward
During the week of June 4-8, shortly before this issue went to press, the committee responsible for the development of Addendum 1 to TIA-942-A met and reported additional progress on the document. During that week's meeting, the TR-42.1 group added some definitions to those already in the document, made minor edits, and added example drawings of how to use structured cabling to support various architectures. The most recent round of balloting on the addendum did not yield a single "no" vote. The next step for the addendum is what is known as industry balloting. Other standards-making groups, IEEE 802.3 and ANSI/INCITS T11 (Fibre Channel), will be included in that round of balloting. That next round could be the final round of balloting, supporting the optimism for publication this year. -P.M.