Rack thermal management: Above, within, and below

Sept. 1, 2008
Best practices and emerging technologies aim to reduce or eliminate hot spots.


As the cabling industry pays ever-increasing attention to energy efficiency and "green" initiatives, it focuses specifically on data center environments because these facilities consume enormous amounts of power (see Special Report beginning on page 25). The cycle of powering networking devices, which generate significant amounts of heat, and then powering the cooling equipment needed to remove that heat, has earned data centers a reputation as power-hungry beasts. And while communications cabling systems play just a small role in the creation or alleviation of energy consumption, data center managers have the opportunity to use these cabling systems to help rather than hurt the cause.
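To put that cycle in rough numbers, the following minimal sketch (in Python, with purely illustrative figures that are not from this article) shows how cooling and other facility overheads inflate the power bill well beyond the IT load itself:

# Minimal sketch (illustrative numbers, not from the article): how cooling
# overhead inflates a data center's total power draw.

def total_facility_power(it_load_kw, cooling_overhead=0.5, other_overhead=0.2):
    """Estimate total power given the IT load plus cooling and other overheads,
    each expressed as a fraction of the IT load (assumed values)."""
    cooling_kw = it_load_kw * cooling_overhead
    other_kw = it_load_kw * other_overhead      # UPS losses, lighting, etc.
    return it_load_kw + cooling_kw + other_kw

it_kw = 500.0                                   # hypothetical IT equipment load
total_kw = total_facility_power(it_kw)
print(f"IT load: {it_kw:.0f} kW; total facility draw: {total_kw:.0f} kW "
      f"(roughly {total_kw / it_kw:.2f} W drawn per watt of IT load)")

Under these assumed overheads, every watt of networking gear costs roughly 1.7 watts at the meter, which is why cooling efficiency gets so much attention.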

Although it is recommended to run communications cables overhead rather than underfloor in data centers, a best practice for underfloor deployment is to run communications cables underneath hot aisles, and power cables under cold aisles.

Currently, many in the industry are steering cabling installers and managers to place communications cable overhead rather than beneath a raised floor, eliminating the possibility of cable dams that impede the flow of air in underfloor systems. "Some data center managers want to keep the space underneath the floor reserved for massive amounts of air to help cool the data center," explains Roger Jette, president of Snake Tray (www.snaketray.com). "They run the cable overhead so nothing but air can run under the floor."

In many instances, even if communications cabling is run overhead, electrical wiring runs underneath the floor, and plumbing for air-conditioning systems resides there as well. So, placing communications cabling overhead may significantly reduce underfloor clutter, but it will not eliminate it.

Keep them separated

The key to realizing efficiency with data-center cooling systems is to keep the cool supply air separate from the hot return air. If cool air intermingles with hot air before it is fed into a rack space, its effectiveness is significantly compromised: the equipment is not cooled as well as it would be by unmixed cool air, and the air-conditioning system must keep working to provide more cool air.
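A minimal sketch, assuming simple proportional mixing of supply and recirculated return air (the temperatures and fractions below are illustrative, not measured values), shows how quickly recirculation raises the temperature of the air actually delivered to equipment intakes:

# Minimal sketch: mixing of chilled supply air with recirculated hot return
# air at a rack intake. All numbers are illustrative assumptions.

def intake_temperature(supply_c, return_c, recirculated_fraction):
    """Temperature of air reaching the equipment intake when a fraction of
    the airflow is hot return air that has leaked back into the cold aisle."""
    return (1 - recirculated_fraction) * supply_c + recirculated_fraction * return_c

supply_c = 13.0      # chilled air leaving the air-conditioning unit, degrees C
return_c = 35.0      # hot exhaust air from the rear of the racks

for mix in (0.0, 0.1, 0.3):
    temp = intake_temperature(supply_c, return_c, mix)
    print(f"{mix:.0%} recirculation -> {temp:.1f} C at the equipment intake")

Even 30% recirculation in this example pushes the intake air from 13 C to nearly 20 C, forcing the cooling plant to work harder for the same result.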

That separation of hot air from cool air has been the basis for a long list of best-practice recommendations, as well as technologies and products designed for the data center. The most familiar iteration of this concept may also be the original: the hot-aisle/cold-aisle rack configuration established by The Uptime Institute's Dr. Robert (Bob) Sullivan (www.upsite.com), which has been illustrated and put into practice around the world and appears in the Telecommunications Industry Association's TIA-942 standard for telecommunications infrastructure in data centers.
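For illustration only, the short sketch below encodes the basic alternating-row idea: equipment fronts (intakes) face each other across cold aisles, and equipment rears (exhausts) face each other across hot aisles. The row count and orientation labels are assumptions, not a layout taken from the standard.

# Minimal sketch of the hot-aisle/cold-aisle idea: alternate the facing of
# successive rack rows so intakes share cold aisles and exhausts share hot aisles.

def aisle_layout(num_rows):
    """Assign a facing to each rack row so that fronts face fronts and
    backs face backs down the length of the room."""
    layout = []
    for row in range(num_rows):
        facing = "fronts face south" if row % 2 == 0 else "fronts face north"
        aisle_to_south = "cold aisle" if row % 2 == 0 else "hot aisle"
        layout.append((row, facing, aisle_to_south))
    return layout

for row, facing, aisle in aisle_layout(4):
    print(f"Row {row}: {facing}; the aisle on its south side is a {aisle}")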

The hot-aisle/cold-aisle setup is the fundamental "must-have" arrangement for any data center concerned with thermal management. But increased equipment densities and higher-heat-generating networking gear have forced data center managers to find additional means of segregating hot air from cool air. Vendors serving the data center industry have developed systems for several physical locations (overhead, in-rack, on-floor, under-floor), all aimed at maximizing efficiency by either keeping hot air and cool air apart or locally cooling hot spots.

In-row cooling has gained traction as a concept over the past couple of years, with several vendors offering products that go either on top of or beside a rack/enclosure for localized air cooling. A benefit of deploying this technology is that it does not rely on any underfloor airflow for cooling; in theory, if all other pieces fall into place, an in-row cooling system can allow data center managers to avoid the underfloor system altogether.

Closely coupled cooling

Other technologies with similar objectives sit atop equipment enclosures, cooling the hotspots in their immediate vicinity.

Every manufacturer of equipment enclosures is addressing thermal management, from relatively simple mounted fans to far more elaborate systems. For example, Rittal (www.rittal-corp.com) has for several years carried the flag for the concept of liquid cooling: running lines of chilled water into the cabinet to cool that space. No one can argue with the physics: Water is a far more effective cooling medium than air.
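A quick back-of-the-envelope comparison, using approximate textbook property values rather than any figures from Rittal, shows why: per unit volume and per degree of temperature rise, water carries on the order of a few thousand times more heat than air.

# Minimal sketch of the physics behind the "water beats air" claim: compare
# the heat carried per cubic metre per degree of temperature rise.
# Property values are approximate textbook figures near room temperature.

AIR_DENSITY = 1.2          # kg/m^3
AIR_CP = 1005.0            # J/(kg*K)
WATER_DENSITY = 998.0      # kg/m^3
WATER_CP = 4182.0          # J/(kg*K)

air_volumetric = AIR_DENSITY * AIR_CP          # J/(m^3*K)
water_volumetric = WATER_DENSITY * WATER_CP    # J/(m^3*K)

print(f"Air:   {air_volumetric / 1e3:.1f} kJ per m^3 per K")
print(f"Water: {water_volumetric / 1e6:.2f} MJ per m^3 per K")
print(f"Water carries roughly {water_volumetric / air_volumetric:,.0f}x "
      f"more heat per unit volume")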

Rittal offers a "closed-loop close-coupled solution," as described by technical manager Herb Villa. The system is close-coupled in that the heat-removal process sits near the heat-producing element; it is closed-loop in that fans recirculate the air within the enclosure. Villa reports that many users say they can completely turn off multiple computer-room air-conditioning units (CRACUs), making the case for the system's energy efficiency.

Other enclosure manufacturers have gone in different directions toward the same destination. Chatsworth Products Inc. (www.chatsworth.com) also invokes the laws of physics, in this case with relation to airflow, in the passive cooling system it builds into some of its products. Chatsworth's TeraFrame system is available with a vertical exhaust duct that allows hot air to naturally flow upward, through the duct, and into the facility's air-handling plenum space.

As with Rittal's water-cooled system, one of the key efficiency gains of Chatsworth's system is that fewer CRACUs need to be deployed to cool a data center's network equipment. Chatsworth's Ian Seaton observes, "Everything that's done in the data center, when you follow industry understanding of best practices, is really designed to separate the supply air from the return air as much as possible. That's why Dr. Bob Sullivan came up with the concept of hot aisles and cold aisles way back in the beginning."

Seaton adds, "It's why you want to seal off access cutouts, floor tiles, blank tiles, and locate your cooling units in the hot aisles so you get your return air path not migrating into the cold aisle space. All those design considerations and behaviors that we define as 'best practices' are all geared to accomplish as much separation as possible between supply and return to keep supply air, as delivered to equipment, from exceeding the clinical definition of what a hotspot would be."

Under the surface

Seaton's comments about sealing off cutouts and floor tiles open the discussion regarding measures that can be taken in the space underneath a rack or enclosure. Several products, including Snake Tray's Snake Air, Upsite Technologies' KoldLok grommet, and PDU Cables' Air Guard (www.pducables.com), are designed to come as close as possible to eliminating the leakage of cool air through openings in raised-floor tiles. Grommet-style products are now commonly found where cables come up through the raised floor and feed into communications equipment.
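To get a feel for why such sealing matters, the sketch below applies the standard orifice-flow relation, Q = Cd * A * sqrt(2 * dP / rho), to an unsealed cable cutout. The opening size, plenum pressure, and discharge coefficient are illustrative assumptions, not data from any of these vendors.

# Minimal sketch: estimating bypass airflow through an unsealed raised-floor
# cutout with the orifice equation. All numbers are illustrative assumptions.

from math import sqrt

def leakage_m3_per_s(area_m2, plenum_pressure_pa, discharge_coeff=0.6, air_density=1.2):
    """Volumetric flow of cooled air escaping through an opening in a raised floor."""
    return discharge_coeff * area_m2 * sqrt(2 * plenum_pressure_pa / air_density)

cutout_area = 0.15 * 0.20          # a 150 mm x 200 mm cable cutout, in m^2
plenum_pressure = 12.5             # ~0.05 in. w.g. of underfloor static pressure, in Pa

q = leakage_m3_per_s(cutout_area, plenum_pressure)
print(f"~{q:.3f} m^3/s (~{q * 2118.9:.0f} CFM) of cooled air bypasses the equipment")

Under these assumptions a single unsealed cutout wastes well over 100 CFM of conditioned air, which is exactly the leakage that grommet-style products are meant to stop.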

Earlier this year, AdaptivCool—a division of Degree Controls (www.degreec.com)—introduced a product called HotSpotr, which can be deployed as a standalone device or as part of a larger-scale thermal-management system called Room Scale Intelligent Cooling. HotSpotr works by supplying cool air preferentially to a data center's hot racks and exhausting the hot air directly to the CRACU that can best handle the heating load.

The Room Scale Intelligent Cooling program includes multiple HotSpotrs networked together. Temperature sensors placed at the rack's air intake send data to a management system that calculates minute-by-minute cooling demand and controls airflow. (A video interview with AdaptivCool technical staff and a demonstration of the HotSpotr product are available at www.cablinginstall.com.)
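As a purely illustrative sketch of the general sensor-to-airflow idea (not AdaptivCool's actual algorithm or product interface), a simple proportional rule can map intake temperatures above a setpoint to fan output, so hotter racks preferentially receive more cool air:

# Minimal sketch of intake-temperature-driven airflow control.
# The setpoint, gain, and readings are hypothetical, for illustration only.

def fan_speed_percent(intake_temp_c, setpoint_c=24.0, gain=15.0,
                      min_speed=20.0, max_speed=100.0):
    """Map how far an intake temperature sits above its setpoint to a fan speed."""
    error = intake_temp_c - setpoint_c
    speed = min_speed + gain * max(error, 0.0)
    return min(speed, max_speed)

# Illustrative intake readings from three racks, in degrees C
readings = {"rack A": 23.5, "rack B": 26.0, "rack C": 29.5}

for rack, temp in readings.items():
    print(f"{rack}: intake {temp:.1f} C -> fan at {fan_speed_percent(temp):.0f}%")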

If you look above, within, or beneath a rack or enclosure in a data center, you are likely to find some technology developed specifically to enhance thermal management and energy efficiency by reducing or eliminating the commingling of hot and cool air.

PATRICK McLAUGHLIN is chief editor of Cabling Installation & Maintenance.
