As part of an overall cable-management system, the use of enclosures can either help or hurt airflow in high-density data centers.
BY PATRICK MCLAUGHLIN
Regardless of what you might hear on national news outlets about sputtering, not-yet-recovering real estate markets around the country, one statement that was true pre-recession remains true today: The real estate, or physical space, within a data center is a coveted asset. Data center managers, whether they operate colocation or standalone facilities, are charged with fitting the maximum possible computing power within the physical space available. The consequences of this need for higher-density computing power have been well documented, here and in many other places, and most dramatically include the huge task of managing heat loads within data center facilities.
Lylette Macdonald, training program manager with Legrand Ortronics (www.ortronics.com), explained, "Because we're trying to save space and maximize the physical real estate in the data center, the density in racks and cabinets has gone up significantly, averaging 20 servers per rack. What that means to us is, not only are power requirements going up exponentially, but so are the energy costs associated with them."
She pointed to a Gartner (www.gartner.com) study that said energy costs from two racks filled with servers can exceed $105,000 annually. Macdonald then pointed out that this density raises challenges at the network's physical layer that, though potentially overlooked, can adversely affect a facility's cooling techniques. "What we've looked at historically," she noted, "is that over the past decade, servers and storage equipment have typically been changed every two to five years. When we get into environments that have larger switches and routers, the turnover of that equipment is a little longer, more like five to seven years. But there is a misalignment between network equipment and the capacity of cabling required. The equipment is changing much more rapidly than the physical layer that is designed to support all these migrating systems."
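The arithmetic behind figures like Gartner's is straightforward: total facility power is IT load multiplied by the facility's Power Usage Effectiveness (PUE), and annual cost follows from hours of operation and the electricity rate. The sketch below illustrates the calculation; the server wattage, PUE, and rate are illustrative assumptions, not figures from the Gartner study.

```python
def annual_energy_cost(it_load_kw, pue=2.0, rate_per_kwh=0.10, hours=8760):
    """Estimate annual electricity cost for a given IT load.

    pue: Power Usage Effectiveness -- total facility power divided by
         IT power, so cooling and distribution overhead are included.
    """
    return it_load_kw * pue * hours * rate_per_kwh

# Illustrative only: two racks of 20 servers, assumed 500 W per server
it_kw = 2 * 20 * 0.5          # 20 kW of IT load
cost = annual_energy_cost(it_kw)
print(f"${cost:,.0f} per year")   # about $35,000 at these assumptions
```

The gap between such a back-of-the-envelope estimate and Gartner's $105,000 figure shows how sensitive the total is to server wattage, cooling overhead, and local utility rates; doubling any one of the assumed inputs roughly doubles the annual bill.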
Helping or hindering?
As an organization, she explained, Legrand has examined some of the elements of cable management that can threaten network performance. "This absolutely includes proper airflow, so we're not impeding the cooling requirements. In addition to minimizing impact on consumption of energy, we want to make sure we're allowing network equipment to work at its maximum bandwidth capacity, and network performance is not impacted. We have flexible cable-management solutions available that do support more energy-efficient environments and can handle the density being thrust upon us."
Legrand has developed a concept called "Layer Zero," something of a takeoff on the fact that a cabling infrastructure resides on a network's layer one, or physical layer, in accordance with the OSI network model. While the cabling system supports network operations, the hardware that supports the cabling system is at the heart of Layer Zero. Legrand describes the concept as "a new foundation for the OSI model to address the critical role that infrastructure plays in network performance and provide a new level of stability to the network. This innovative approach to network design emphasizes best practices in pathway and physical support design to maximize network performance in data center or LAN environments." Legrand offers a set of products, including racks and cabinets from Ortronics, in its Layer Zero solution set.
Photo caption: This version of the N-Series TeraFrame network cabinet from Chatsworth Products is engineered specifically to support the Cisco Nexus 7018 switch, while also supporting a mix of patch panels and fiber enclosures above the switch. The enclosure's integrated network switch exhaust duct captures and guides the hot exhaust from the side of the switch out the rear of the enclosure, ideal for hot/cold-aisle setups.
Macdonald summarized some of what Legrand found through its analysis of data center management, which ultimately led to the creation of the Layer Zero concept. "Often when we look at large projects, particularly data center environments, budgets are pretty well protected when it comes to network equipment," she said. "But users start to try to conserve money in cable-management solutions. The physical layer, including cable management and pathways, has always been seen as a necessary evil but not something that got much attention. Now, we're finding out that we're creating air dams and having a negative effect on network performance." The Layer Zero concept and brand name is in part an effort to bring more attention to that often-overlooked part of a network. As such, part of the effort was to "define ways we can provide better practices that will support this rapidly expanding communications network."
Legrand boiled its findings down to seven elements, each of which presents either an opportunity to enhance, or a threat to detract from, a data center's overall performance.
Designers and managers of data center cabling systems can take a number of measures to help ensure that the physical infrastructure, traditionally layer one and also Legrand's branded "Layer Zero," at least does not harm a network's performance and in many cases even improves it. The list is long and includes eliminating pathway overcongestion, proper selection of cable baskets or trays, proper routing of cables either overhead or under a raised floor, and management of cables and patch cords within the rack or enclosure itself. (Editor's note: The quotes from Legrand Ortronics' Lylette Macdonald that appear in this article were taken from a presentation she made during a webcast seminar entitled "Can cabling really be green?" The webcast can be seen and heard in its entirety at www.cablinginstall.com.)
Network equipment considerations
Macdonald pointed out that less-than-ideal management approaches can lead to premature port failure on network equipment: "Poor patch-cord management has proven to limit cool air getting to outside ports on networking equipment, which causes early failure." She also noted that every incident of a patch cord being pulled hard to the side of a vertical manager, sometimes to stretch a cord to or beyond its maximum length, can damage the port within the equipment. Having to repair or replace the port because of poor cooling or poor patch-cord management is a potentially expensive risk.
Another enclosure-centric management step that can be helpful relates to switches that incorporate side-to-side rather than front-to-back airflow. Cisco's Nexus 7018 switch is a prime example. Macdonald explained, "You can introduce air-handling baffles that allow the support of passive airflow with side-breathing equipment. Making sure the hot air is routed away from the adjacent switch is very critical to managing the ambient temperature in the data center space."
Some of Legrand's enclosures are built with vertical cable-management pathways for exactly this purpose. Recently, Chatsworth Products (www.chatsworth.com) introduced a version of its N-Series TeraFrame network cabinet engineered specifically for the Nexus 7018. This particular version of the N-Series TeraFrame is a wide-framed enclosure that the company says supplies high-density thermal management and physical support. The enclosure includes an integrated network switch exhaust duct that captures and guides hot exhaust from the switch's side out the cabinet's rear. By doing so, the enclosure converts the switch's side-to-side airflow into a front-and-rear airflow pattern. This capability enables the Nexus 7018 to be used in a hot-aisle/cold-aisle configuration.
Overall, the N-Series TeraFrame enclosures are designed to manage both airflow and the cabling that resides within them.
Macdonald said it is important for professionals designing data center pathways today to look forward, toward what those pathways will have to support in the future. "You're looking at applications 10 to 15 years down the road" that these pathways will need to support, she asserted. "Today we're designing pathways to support Category 6, 100Base-T, 1000Base-T and now 10-Gig. As we get into 10-Gig, unified computing and 40-Gig applications, we need to be shifting between copper and optical fiber. And we're looking at greater density for patch cords."
She provided a forward-looking point to consider for those designing systems today. "Looking at designs that get the layer-one interface between cabling and connectivity out of the active networking and server cabinets allows users to maximize that space for proper and effective cooling rather than introducing passive components that can block proper airflow." In other words, consider pathways that allow connectivity to reside outside the enclosure.
Patrick McLaughlin is chief editor of Cabling Installation & Maintenance.