It might not be the proverbial biggest fish, but the routing and maintenance of cables is a fish nonetheless in the ecosystem of data center energy efficiency.
By Patrick McLaughlin
In 2015 it has become somewhat cliché to refer to a data center as an ecosystem. But as with many clichés, it became one because of its fundamental truth. "Ecosystem" is an appropriate term for a data center's network and facilities systems because they all are interdependent at least to some extent. And a change in one system, whether an improvement or an inefficiency, very likely will affect multiple others. In that vein, the management of a data center network's physical-layer cabling can and often does have an effect on the flow of cooling air in the facility. If cable management serves to improve airflow, the entire ecosystem--including the all-important cooling of network equipment--also improves. If cable management inhibits airflow, the opposite becomes true and the cabling then becomes an inefficiency in the ecosystem.
In perspective, cabling is by no means the proverbial "biggest fish" when it comes to data center network operations, their impact on airflow, and the consequent results related to energy efficiency. But it is a fish nonetheless. Some of that perspective recently was provided by Ian Seaton, a critical facilities consultant who was a long-time technical staff member with Chatsworth Products Inc. (CPI; www.chatsworth.com). Seaton now provides consulting services for firms including CPI, Upsite Technologies (www.upsite.com) and others. In November Seaton delivered a presentation during a webinar hosted by Cabling Installation & Maintenance. His presentation, titled "Achieving effective airflow management in challenging networks," addressed cabling-related issues including cable distribution and management. The sheer number of cables used with some of today's large switches makes the practicality of cable management a significant challenge. Challenging as it may be, though, one conclusion Seaton drew was that, "Good cable management practices enhance airflow management strategies."
Cabinets grow up and out
The fact that massive amounts of cabling must be managed in data centers is not breaking news. Nearly three years ago analysis by what was then called IMS Research (which has since been acquired by IHS) examined drivers that have caused an increase in the market for taller-than-42U enclosures within data centers. Liz Cruz, a senior analyst for data centers, cloud and IT infrastructure with IHS, conducted the research and issued a report in spring 2012. At that time she cited "increasing server depths, more cabling within cabinets, the need for airflow management and the desire to maximize floor space within data centers" as primary drivers of taller cabinets.
|The amount of cabling used with a large network switch makes the management of that cabling a significant challenge. Photo: Chatsworth Products Inc.|
Back in 2012 Cruz forecasted shipments of 48U cabinets to grow an average of 15 percent annually over the following five years, with 42U-rack shipments growing at 5 percent. And while cabinets were predicted to get taller, they also were predicted to get wider. The analyst also explained in 2012 that the standard cabinet width was 600 mm but "going forward, shipments of 750- to 800-mm-wide cabinets will grow at nearly twice the rate of 600-mm cabinets. In terms of depth, the 1100-mm category currently accounts for the greatest share, but 1200-mm will grow faster than any other depth in percentage terms."
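Compounded over a five-year forecast window, those growth rates diverge quickly. The annual rates below come from the report; the compounding arithmetic is my own illustration:

```python
def compound_growth(annual_rate: float, years: int) -> float:
    """Total growth multiplier after compounding an annual rate for N years."""
    return (1 + annual_rate) ** years

# 15% annual growth roughly doubles 48U shipments over five years,
# while 5% annual growth lifts 42U shipments by about a quarter.
print(round(compound_growth(0.15, 5), 2))  # ~2.01x for 48U cabinets
print(round(compound_growth(0.05, 5), 2))  # ~1.28x for 42U cabinets
```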
Cabling was one of several factors influencing this anticipated change. The analyst explained that greater computing densities at the rack level were primary causes. These densities result in more cabling within cabinets and more heat generated in them as well. Those two realities were driving up the cabinets' width and depth, to accommodate cable management and airflow. "Growth in power densities are not expected to level out anytime in the near future, which means neither will enclosure sizes," Cruz stated then.
Dos and don'ts
Lars Strong, P.E., a senior engineer with Upsite Technologies, wrote in December 2014 about the impact of taller racks on data centers and airflow management in particular. Citing the logistical limitations that are likely to keep rack heights at around 48U instead of 51U or 52U in many cases, Strong elaborated, "On top of the challenges that are accompanied with installing taller racks, cable management also becomes a significant problem. The taller the racks, the more servers can be deployed, and the more cables you have. Cable management must be done well and kept tight, and to the sides of enclosures to allow clearance for exhaust air to freely leave the cabinet.
|When cables are fed through the top of an enclosure or cabinet, a sealing device can help preserve airflow efficiency. Shown here is Upsite Technologies' four-inch HotLok round rack-mount grommet.|
"However, even if cables are properly managed, sometimes there simply isn't enough space in the back of the cabinet. This increases the demand for wider and deeper cabinets to accommodate more cables."
That article from Strong appeared in Upsite's blog. Earlier in 2014, he wrote an article on the blog titled "10 tips to improve PUE through cable management." In that article he said, "If cables are improperly placed and block airflow, your cooling units are forced to work harder, albeit inefficiently, which negatively impacts your PUE [Power Usage Effectiveness]. How you manage your cables is an important part of your overall airflow management strategy, but one easily overlooked as the two are not often associated with each other."
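PUE, as referenced above, is the ratio of total facility energy to the energy delivered to IT equipment; an ideal facility would score 1.0, and any cooling overhead pushes the number higher. A minimal sketch of the arithmetic, using hypothetical load figures chosen for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal; anything above it is overhead
    (cooling, power distribution losses, lighting, etc.).
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical numbers: if blocked airflow forces cooling units to draw an
# extra 50 kW against an unchanged 500 kW IT load, PUE worsens noticeably.
baseline = pue(800.0, 500.0)  # 1.6
degraded = pue(850.0, 500.0)  # 1.7
```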
He divided his 10 tips into three areas: cable management in the raised floor, cable management in the rack, and cable management overhead. Strong characterized each tip as a "do" or "don't."
For underfloor cable management, he advised, "Do place cable trays under cabinets or hot aisles. This allows the raised floor space under the perforated tiles in the cold aisle to remain free. Do place cable management trays as high as possible, allowing air to flow underneath them. This is particularly important when running cable trays close to or in front of cooling units where most of the airflow movement is close to the floor … Do place cable trays at a consistent height as much as possible. This allows conditioned air to flow in a straight path. Don't place cable management trays underneath the cold aisle. They may end up under perforated tiles."
Within the rack, Strong said, "Do use wider cabinets with cable management built into the side and not right behind the exhaust ports. Do use deeper cabinets that allow the air more room to escape vertically. Do use blanking panels. When cables increase the pressure within the cabinet, blanking panels become especially important. Don't block the exhaust from servers, particularly ones with high volume and velocity fans."
Overhead cable management has a "do" and a "don't." According to Strong, "Don't place cable management trays high above the cabinets. In rooms without a ceiling plenum return, it forces hot air returning to the unit to go under the cable trays and closer to IT intakes, which can cause hot spots. Do place cable management trays within a few inches of the top of an IT cabinet so that all exhaust air flows to the top of the room and over the top of the cable trays. This can actually improve the airflow management in the room."
In an interview with Cabling Installation & Maintenance, Strong shared that from what he sees, cable trays most often are installed without much regard for airflow management. Often they are installed as he recommends--at a reasonable height, not too high in the room--but that decision was made without airflow in mind.
Strong advocates the convening of what he calls an ICE team--integrated critical environment team--to make decisions about data center and computer room spaces. "It's a concept we've shared and was coined by the Uptime Institute," he said. Members of the ICE team typically include personnel from corporate real estate, facilities, an IT executive and a data center manager from IT. "A couple people who are in the room [computer room or data center] every day, and a couple people from the C-suite who don't walk into the room very often," he said. In many organizations, Strong pointed out, conversations related to the data center focus on organizational structure and on considerations that would prevent problems or otherwise allow operations to flow more smoothly.
Strong also noted that issues can arise when cabling is fed through the top of a cabinet; hot air then escapes through that opening. Two practical approaches can minimize or eliminate the effect. One is to use a sealing mechanism such as a grommet. The other, when aisle containment is in use, is to place the containment at the cabinet's front edge, so the entire top of the cabinet, including the hole through which cables pass, sits in the hot aisle.
Containment is one of airflow's "big fish" in the data center ecosystem. But cable management, though a small fish, remains important.
Patrick McLaughlin is our chief editor.