Dealing with heat close to its source takes several forms.
by Patrick McLaughlin
The vexing problem of heat generation in the data center and, more specifically, within racks and enclosures is top-of-mind and top-of-agenda for professionals who manage such high-density computing environments.
“The IT [information technology] world knows the two critical concerns in the data center are power availability and cooling capacity,” says Herb Villa, technical manager with Rittal Corp. (www.rittal-corp.com). The two are interrelated in that the equipment required to cool these spaces consumes significant amounts of power.
Dense servers known as blades are the primary sources of this concentrated heat. Quite often, blade servers are housed in enclosures or cabinets. Over the past several years, and with increasing frequency more recently, enclosure manufacturers have been designing their wares to deal with concentrated heat loads.
Villa notes that Rittal offers a full breadth of products with thermal-management capabilities: “You have to be able to support customers who will put a few servers in a cabinet in a data room, as well as high-end users.” The products and technologies designed to support low and medium heat-load densities are well established, he says, and include server cabinets with perforated doors and mounting rails. Typically, enclosures of this sort can accommodate 8 to 10 kilowatts of heat per cabinet.
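To put that 8-to-10-kilowatt figure in perspective, the sensible-heat equation shows how much air a perforated-door cabinet must pass to carry such a load away. The sketch below is a back-of-envelope check using textbook air properties, not a Rittal specification; the 11 K temperature rise is an assumed figure for illustration.

```python
# Back-of-envelope airflow needed to remove a cabinet heat load,
# using the sensible-heat relation Q = m_dot * c_p * dT.
# Air properties are approximate values near sea level and 20 C.

AIR_DENSITY = 1.2         # kg/m^3
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def airflow_m3_per_s(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow required to absorb heat_load_w at a delta_t_k rise."""
    return heat_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)

def m3s_to_cfm(m3_per_s: float) -> float:
    """Convert cubic metres per second to cubic feet per minute."""
    return m3_per_s * 2118.88  # 1 m^3/s is about 2118.88 CFM

if __name__ == "__main__":
    # Assumed scenario: 10 kW cabinet, 11 K (about 20 F) front-to-back rise
    flow = airflow_m3_per_s(10_000, 11)
    print(f"{flow:.2f} m^3/s, or roughly {m3s_to_cfm(flow):.0f} CFM")
```

Under those assumptions a 10-kW cabinet needs on the order of 1,600 CFM of cool supply air, which suggests why purely passive, air-breathing enclosures top out around this density.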
The technology that has gotten Rittal the most press in the past few years, however, has been its closed-loop, close-coupled solution. This proprietary liquid-cooled solution uses water rather than air as the cooling agent. “Everyone understands the physics of water versus air,” Villa states. “We know that refrigerants, such as water and gas, have much greater heat-transfer and -carrying capacities than traditional air.”
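The "physics of water versus air" can be made concrete with standard property values. The sketch below uses textbook room-temperature figures, not vendor data, to compare how much heat each fluid carries per unit volume for the same temperature rise.

```python
# Rough comparison of water and air as heat-carrying fluids.
# Property values are approximate textbook numbers near room temperature.

# (density in kg/m^3, specific heat in J/(kg*K))
WATER = (998.0, 4186.0)
AIR = (1.2, 1005.0)

def volumetric_heat_capacity(density: float, specific_heat: float) -> float:
    """J/(m^3*K): heat absorbed per cubic metre per kelvin of temperature rise."""
    return density * specific_heat

water_vhc = volumetric_heat_capacity(*WATER)
air_vhc = volumetric_heat_capacity(*AIR)
print(f"water carries roughly {water_vhc / air_vhc:.0f}x more heat per volume than air")
```

By this measure, a given volume of water absorbs on the order of 3,500 times as much heat as the same volume of air, which is why a close-coupled water loop can handle densities that would require enormous volumes of chilled air.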
Villa further explains the liquid cooling package’s characterization as a closed-loop, close-coupled system: “‘Close-coupled’ means the heat transfer and removal process is adjacent to the heat-producing component. ‘Closed-loop’ means air in the enclosure is circulated through the enclosure,” via fans.
Four or five years ago, the prospect of introducing chilled liquid so close to a data center’s central nervous system made many feel, well, nervous. “Customers would say, ‘I’m not bringing water into my data center,’” Villa recalls. “Yet they already had water in their data center in many other aspects,” including sprinkler systems.
Happily, he reports, “We have overcome the water bigotry. People realize they need to consider these solutions for high-density cooling.” Whereas a half-decade ago, Villa’s potential customers would ask him why they’d ever put water into their enclosures, today, they’re asking other questions. Namely, he says, “How do we get it installed, and what is the total cost of ownership/return on investment (TCO/ROI)?”
Villa reports that Rittal has completed one study and is working on another that can, in fact, quantify TCO and ROI. Savings can be realized, Villa explains, by reducing the number of enclosures needed to house servers, thereby saving floorspace. Additionally, the water-cooled system lets users turn off, or avoid purchasing, computer room air conditioning units (CRACUs), which can, above and beyond saving the costs of these units, entitle some data centers to rebates from their electric utilities.
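The shape of such a TCO/ROI analysis is straightforward even without the study's figures. The sketch below is purely illustrative; every dollar amount is a hypothetical placeholder, not a number from Rittal's research.

```python
# Illustrative simple-payback calculation for a liquid-cooled enclosure.
# All figures are hypothetical placeholders, not vendor or study data.

def simple_payback_years(extra_capex: float, annual_savings: float) -> float:
    """Years for annual savings (CRAC-unit power, floorspace, utility
    rebates) to repay the liquid-cooled enclosure's price premium."""
    return extra_capex / annual_savings

# Hypothetical: $30,000 price premium versus conventional enclosures,
# offset by $12,000/year in avoided cooling power and floorspace costs
print(f"payback in {simple_payback_years(30_000, 12_000):.1f} years")
```

A real analysis would also fold in one-time items such as avoided CRAC-unit purchases and utility rebates, which shrink the effective premium before the annual savings even begin.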
A versatile approach
Panduit (www.panduit.com) recently introduced the Net-Access Server Cabinet, adding it to a product line that already included the Net-Access Switch Cabinet. While the two systems’ dimensions are the same, the components within them make them appropriate for housing either switches or servers.
Panduit’s business development manager, Charles Newcomb, explains: “Thermal management is very different for each application. In general, switches breathe side-to-side while servers breathe front-to-back. Cable management is key in these environments, and it is important to have a cabinet that allows you to properly route cables, keeping them away from switch exhaust and intake.”
Newcomb says that switches “typically do not comply with hot-aisle/cold-aisle designs, so the Net-Access Switch Cabinet was specifically designed to provide large pathways for cable routing and airflow. To optimize switch performance, exhaust ducting directs hot air from the switch out of the cabinet to the hot aisle.” He adds, “A server application is very different. It is important to block airflow between the cold aisle and hot aisle. Blanking panels ensure air cannot pass from cold aisle to hot aisle.”
Cable management is important here too, Newcomb says, but it presents a different set of challenges. “A server cabinet contains lots of cables, but of different types: power, as well as copper and fiber communications cables,” he notes. “It is critical to provide cable pathways to route cables away from server fans and remove airflow blockages behind servers. Those blockages trap air inside and force the server to operate at a higher temperature. A way to eliminate that is to allow the flexibility to mount patch panels vertically. The width of the Net-Access Server Cabinet allows patch panels to be mounted outside the traditional 19-inch area.”
Marc Naese, solutions development manager with Panduit, adds, “The lifecycle of equipment in a data center can be two to three years; the lifecycle of a cabinet is much longer. Users expect their cabinets to be able to grow with the infrastructure, and serve them every time they change out equipment.”
Consequently, notes Naese, “most data center managers are moving away from 24-inch cabinets and going to a wider cabinet, where they can efficiently manage more equipment.”
Sealing off access
The merits of isolation between hot air and cold air are also evident in the TeraFrame series of enclosures from Chatsworth Products Inc. (CPI; www.chatsworth.com). Ian Seaton, the company’s technology marketing manager, reflects, “Everything that’s done in the data center, when you follow industry understanding of best practices, is really designed to separate the supply air from the return air as much as possible. That’s why you want to seal off access cutouts, floor tiles, blanking tiles, and locate your cooling units in hot aisles so you prevent your return air path from migrating into the cold-aisle space.”
Seaton continues, “All those design considerations that we define as ‘best practices’ are geared to accomplish as much separation as possible to keep supply air, as delivered to equipment, from exceeding the clinical definition of what a hotspot would be.”
Chatsworth’s solutions build on those best practices, Seaton says, but take them to an extreme. The company’s passive cooling system represents one of three approaches to hot-air/cold-air isolation: “You can build a room around a cold aisle and keep it separate from the rest of the data center so your return air is in free space and returns to the CRACU, without mixing with the cold aisle. Or, you can build a room around a hot aisle, and duct the air out of the room directly back to the CRACU, thereby keeping separation. Or, you can use a solution like ours, taking advantage of your suspended ceiling plenum space and accomplish the same thing.”
The CPI system employs several accessories to accomplish hot-air/cold-air isolation, including raised-floor grommets, snap-in filler panels, mounting rails to seal the front of the cabinet from the back, and a vertical exhaust-duct system.
These three manufacturers offer a sampling of the many enclosure products available with some form of thermal management built in. Importantly, each vendor emphasizes that its systems are not standalone solutions to the heating and cooling issues affecting data centers.
Next month, we will have further commentary from these and other industry experts on the topic of taking a holistic approach to thermal management.
PATRICK McLAUGHLIN is chief editor of Cabling Installation & Maintenance.