Addressing misconceptions about data center cooling techniques

Dec. 1, 2010

Ducted cabinet solutions are not equal to close-coupled and liquid-cooled systems. They are superior.

by Ian Seaton, Chatsworth Products Inc.

It is time to finally get over the misconceptions regarding the supposedly superior efficiencies of close-coupled and liquid-cooled server cabinet solutions, as well as the supposed limits on the power densities that can be effectively cooled by air. Passive air cooling can dissipate as much heat as commercial servers that actually fit in a 42-45U cabinet can generate, and in fact more than most close-coupled and liquid-cooled solutions can handle. In addition, air-cooled solutions that rely on complete isolation between supply air and return air are more efficient than most close-coupled systems and provide access to lower data center cooling costs.

Meeting the high-density challenge

The old conventional wisdom regarding some ceiling on how much heat can be effectively cooled by air is based on an accurate assessment of the innate inefficiencies of standard hot-aisle/cold-aisle data centers and a poor understanding of the natural forces that can be harnessed to remove heat from a cabinet. That conventional wisdom has typically placed the ceiling at somewhere around 6 kW per cabinet. It was also based on an accurate understanding of the fixed relationship between airflow, heat dissipation and temperature rise (CFM = 3.1 x watts / ΔT°F) in the context of a standard hot-aisle/cold-aisle data center, where there is a dependency relationship between the volume of cold air delivered through a proximate perforated floor tile and a cabinet's server heat load, plus the presence of heated server exhaust air (return air) in the room. The practical limits to how much air can be pushed through a perforated floor tile and the looming presence of that heated return air conspire to limit how much can be cooled in any particular cabinet.
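
As a rough sketch of what that relationship implies, the snippet below applies the article's constant to two hypothetical cabinet loads; the 6 kW and 30 kW loads and the 25-degree temperature rise are assumptions chosen only for illustration.

```python
# A minimal sketch of the airflow relationship cited above
# (CFM ~ 3.1 x watts / delta-T in degrees F). The cabinet loads and the
# 25-degree temperature rise are assumptions for illustration only.

def required_cfm(load_watts: float, delta_t_f: float) -> float:
    """Approximate airflow (CFM) needed to carry away a sensible heat load."""
    return 3.1 * load_watts / delta_t_f

if __name__ == "__main__":
    print(round(required_cfm(6_000, 25)))   # ~744 CFM for a 6 kW cabinet
    print(round(required_cfm(30_000, 25)))  # ~3,720 CFM for a 30 kW cabinet
```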

Removing these constraints opens the door to much higher power/heat densities that can be cooled by air. In a data center where the cabinets function as a complete isolation barrier between supply air and return air, these constraints are removed. This isolation is accomplished by a combination of accessories such as blanking filler panels, air dams that seal the perimeter of the equipment mounting area and brush-seal grommets for floor-tile cutouts, along with a system to remove the return air from the room into a suspended-ceiling return-air space. Such a system includes a solid rear cabinet door with a gasket seal and a vertical exhaust duct running between the cabinet and the suspended ceiling. In this construction, there is no longer any dependency on how much air can be pushed through a single perforated floor tile, and there is no heated air anywhere in the room to compromise the cold air delivered anywhere into that room. When heat densities increase beyond what underfloor air delivery can practically achieve without extra-wide cold aisles, the access floor can be eliminated altogether and the room can be flooded with high volumes of cold air through wall grates or from overhead.

With the availability of a theoretically unlimited cooling capacity, the remaining constraint is effectively removing the heat from the cabinet. While it is not particularly prudent to reveal all the intellectual property in this solution until all the patents are finally issued, suffice it to say that our trademarked Passive Cooling Solutions involve specific geometries that exploit the Bernoulli principle (the inverse relationship between velocity and pressure) to create forces within the cabinet that remove high volumes of heated air. These cabinets have been deployed in large numbers in working data centers with actual, measured heat loads in excess of 30 kW per cabinet. Testing has exceeded 30 kW per cabinet, cabinets are currently being shipped to a customer project where they will carry 30 kW of actual, measured server heat load, and further testing is planned to extend beyond these levels.
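
For readers unfamiliar with the Bernoulli relationship referenced above, the sketch below evaluates the textbook dynamic-pressure term; it is a generic illustration of the physics only, with assumed air velocities, and does not describe Chatsworth's proprietary cabinet or duct geometry.

```python
# A generic illustration of the Bernoulli relationship: static pressure falls
# by the dynamic-pressure term 1/2 * rho * v^2 as air speeds up. Textbook
# physics only, not a model of any vendor's proprietary design.

AIR_DENSITY = 1.2  # kg/m^3, approximate for room-temperature air

def dynamic_pressure_pa(velocity_m_s: float) -> float:
    """Dynamic pressure q = 0.5 * rho * v^2, in pascals."""
    return 0.5 * AIR_DENSITY * velocity_m_s ** 2

if __name__ == "__main__":
    for v in (2.0, 4.0, 8.0):  # assumed exhaust velocities in m/s
        print(f"{v} m/s -> static pressure lowered by ~{dynamic_pressure_pa(v):.1f} Pa")
```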

Cost efficiencies

More important than power density, however, is the customer's cost for cooling these densities, and there is no more efficient, lower-cost way of cooling a data center than with the above-described ducting system. The two basic factors behind this higher efficiency are 100 percent utilization of HVAC output and access to more hours of economization cooling.

With the ducted exhaust system, every bit of cold air produced by the HVAC system has to go through a server. The only path between supply air and return air is one of heat transfer through a server, so there is no waste: no bypass and no need for the overprovisioning that is required in standard hot-aisle/cold-aisle data centers. The efficiency claims of close-coupled cooling and liquid-cooled solutions are based on comparisons to inherently inefficient data centers with large amounts of bypass air and with overprovisioning of cooling necessitated by the extreme variations in pressure and airflow throughout a room. Because of the traditional dependency on air delivered through proximate perforated access-floor tiles, cooling-capacity provisioning formulas have had to plan for providing adequate air to the lowest-airflow spot in a room, typically resulting in 200 to 300 percent overprovisioning and huge amounts of wasted bypass air. When it no longer matters where the cold air is delivered, and when 100 percent of it must travel through a server, there is no longer a need for that overprovisioning, and therefore there is no more waste.
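
The back-of-the-envelope sketch below shows the bypass waste implied by those 200 to 300 percent overprovisioning figures; the 10,000 CFM server demand is an assumed number used only to make the percentages concrete.

```python
# Bypass waste implied by 200-300 percent overprovisioning, per the figures
# cited above. The 10,000 CFM server demand is an assumed illustration value.

def bypass_fraction(provisioned_cfm: float, consumed_cfm: float) -> float:
    """Share of supplied air that never passes through a server."""
    return 1.0 - consumed_cfm / provisioned_cfm

if __name__ == "__main__":
    servers_need = 10_000.0  # CFM actually drawn through the IT equipment
    for factor in (2.0, 3.0):  # 200% and 300% provisioning
        supplied = servers_need * factor
        print(f"{factor:.0f}x provisioning -> "
              f"{bypass_fraction(supplied, servers_need):.0%} of supply air is bypass")
```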

While that elimination of waste may sound like it puts this ducted cabinet solution on an even playing field with the close-coupled and liquid-cooled solutions, it actually puts the ducted approach at an efficiency advantage because of the higher efficiencies of the larger fans and motors moving air at the room level. The coefficient of performance (COP) of the larger motors and fans in free-standing computer room air conditioner (CRAC) or computer room air handler (CRAH) units is always going to be higher than that of the smaller motors and fans in close-coupled solutions, and the COP of the extremely large water-cooled central air handlers is always going to be superior to anything small enough to fit on the data center floor. This has always been the case, but because of the historical waste of standard hot-aisle/cold-aisle data centers, this COP advantage never translated into comparable efficiencies on the data center floor. With a ducted cabinet system that results in 100 percent utilization of the HVAC output, these superior COP values can now be translated directly into data center cooling efficiency.
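
To show the direction of that COP effect, the sketch below compares cooling energy for the same heat load at three different COP values; the 300 kW load and the COP figures are assumptions for illustration, not published ratings for any particular product.

```python
# Cooling energy for the same heat load at different COP values. The COPs
# below are assumptions chosen only to show the direction of the effect.

def cooling_power_kw(heat_load_kw: float, cop: float) -> float:
    """Electrical power required to reject a given heat load at a given COP."""
    return heat_load_kw / cop

if __name__ == "__main__":
    load_kw = 300.0  # assumed total server heat load
    for name, cop in (("close-coupled/in-row unit", 3.0),
                      ("room-level CRAC/CRAH", 4.0),
                      ("large central air handler", 5.0)):
        print(f"{name:26s} COP {cop}: ~{cooling_power_kw(load_kw, cop):.0f} kW")
```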

A further efficiency benefit for the ducted solution arises from chiller-plant efficiencies and access to more hours of "free" cooling. In a ducted cabinet data center there is only one temperature in the room, so there is no longer a need to hold return-air temperature in the typical range of 72 to 73 degrees Fahrenheit with a supply temperature in the mid- to low-50s. If the standards allow for 77-degree server inlet temperatures, then the room can be cooled to 77 degrees with a 77-degree supply-air temperature. The higher supply temperatures dramatically improve chiller-plant efficiency and open the door to significantly more hours of economization cooling. The table accompanying this article details these supply temperatures.

With the higher supply temperatures allowed by ducted cabinets, the data center operator first gets the benefit of improved chiller-plant efficiency: approximately a 1 to 1.5 percent reduction in chiller-plant energy use for every 1-degree increase in the chilled-water loop temperature, or a 25 to 35 percent reduction in energy use. In addition, these temperatures allow for many more hours of economization cooling: all hours under 60 degrees F for water-side economization; all hours under 72 degrees F for KyotoCooling heat-recovery-wheel economization; and all hours under 77 degrees F wet bulb for evaporative air-side economization. In a data center with a 2.0 Power Usage Effectiveness, water-side and evaporative air-side economization can reduce total data center energy use by 35 percent during those economization hours, and KyotoCooling by 43 percent during its economization hours.
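
Those percentages can be restated as an effective PUE during economization hours. The sketch below does that arithmetic, starting from the article's 2.0 baseline and assuming the IT load itself stays constant; it simply rescales the stated savings rather than introducing new data.

```python
# Effective PUE during economization hours, derived from the article's 2.0
# baseline and its stated 35 / 43 percent total-energy reductions, assuming
# the IT load itself is unchanged.

def effective_pue(baseline_pue: float, total_energy_reduction: float) -> float:
    """PUE implied when total facility energy drops by the given fraction."""
    it_share = 1.0 / baseline_pue              # IT energy as a share of total
    new_total = 1.0 - total_energy_reduction   # remaining total, normalized
    return new_total / it_share

if __name__ == "__main__":
    print(effective_pue(2.0, 0.35))  # water-side / evaporative hours -> 1.3
    print(effective_pue(2.0, 0.43))  # KyotoCooling hours -> ~1.14
```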

Ducted cabinet return-air isolation raises the bar for heat densities by a factor of 4 or 5 and provides much more efficient cooling at a lower initial investment than close-coupled or liquid-cooled solutions. While a completely contained close-coupled system could provide access to similar water-side economization hours and chiller-plant efficiencies, such solutions will by definition have poorer coefficient-of-performance ratings and cannot provide access to the most economical form of economization, KyotoCooling, nor to the larger number of economization hours available with evaporative air-side economization.

Ian Seaton is technical applications development manager with Chatsworth Products Inc. (www.chatsworth.com). This article is derived from a white paper he authored entitled "The biggest opportunity for data center energy savings."

Typical data center temperature ranges (degrees F)

Scenario                                           Delivered   Supply   Water   Ambient
Typical data center ranges                         60-85       52-55    42      37
CRAC with economizer (TIA-942 best practices)      68-77       52-55    42      37
CRAC with economizer, 100% isolation               77          77       65      60
Heat recovery wheel, 100% isolation                77          77       N/A     72
Evaporative air economizer, 100% isolation         77          77       65      77
