Where to place your cooling units

April 1, 2013

From the April 2013 issue of Cabling Installation & Maintenance Magazine

Options abound and the stakes are high, so choosing where to put cooling units in a data center is of paramount concern.

by Mark Hirst, Cannon Technologies

There is an awful lot of heat to remove from the racks in your data center, and with every technology refresh there is ever more. Per-rack dissipation has gone from 1 kW to 4 or 5 kW, with some facilities at 12 to 15 kW and 60 kW possible. But you need to juggle space, cooling efficiency and a load of other factors. So where should you put your cooling units: within row, within rack, at the top, bottom or side?

An important but often-overlooked step in hot- or cold-aisle containment is the inclusion of blanking plates, which prevents hot air from flowing back into the cold aisle.

The days when racks in the data center drew a paltry 2 kW are, for many organizations, long gone. Virtualization, blade servers, high performance computing (HPC) and massive storage arrays mean that for many, the average power draw per rack is somewhere between 15 kW and 45 kW. In highly dense environments that is expected to rise to 60 kW. All that power creates a huge amount of heat that must be dissipated.

Cooperation is key

Before this level of load can be addressed, the relationship between the information technology (IT) teams and the facilities-management teams needs to change. Data centers need to become carefully orchestrated environments in which any change is properly assessed for its impact on cooling as well as its impact on IT. As the load in a rack increases, the cooling to handle it has to be planned in step.

Start with these three actions.

  • Use thermal imaging to take snapshots over time and see where heat is accruing. Ideally, tie the snapshots back to workload peaks to capture maximum heat.
  • Build computational fluid dynamics (CFD) models of the data center for a detailed view of how air is moving and where heat needs to be managed.
  • Use in-rack sensors to detect changes to microclimates within racks.

Without these measures to gather data on heat, it is extremely difficult to know what needs to be dissipated and where.
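The in-rack sensor step can be sketched in code. The following is a minimal, hypothetical example (the thresholds, rack names and readings are illustrative, not from any vendor's product) that compares two polling snapshots and flags racks whose microclimate is drifting:

```python
# Hypothetical sketch: flag racks whose in-rack sensors show a sharp
# temperature rise between polls, or which exceed the 27 C inlet
# temperature the ASHRAE guidelines suggest. Values are illustrative.

def flag_hot_racks(previous, current, rise_threshold=2.0, limit=27.0):
    """Return rack IDs whose temperature rose more than `rise_threshold`
    degrees C since the last poll, or which sit above `limit`."""
    flagged = []
    for rack, temp in current.items():
        delta = temp - previous.get(rack, temp)
        if delta > rise_threshold or temp > limit:
            flagged.append(rack)
    return sorted(flagged)

# Example: rack B2 climbs sharply; rack C3 sits above the 27 C guideline.
prev = {"A1": 21.5, "B2": 22.0, "C3": 26.5}
curr = {"A1": 21.8, "B2": 25.1, "C3": 27.4}
print(flag_hot_racks(prev, curr))  # ['B2', 'C3']
```

In practice the readings would come from the rack's sensor bus or DCIM system rather than hard-coded dictionaries, but the comparison logic is the same.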


There are many ways to reduce the heat in the data center, including the following.

  • Successive generations of technology promise to create no more heat than the generation they replace; this promise is now being written into specifications for new orders.
  • Raising the input temperature to the hardware can significantly reduce the cost of cooling and allow for better placement of cooling units. The ASHRAE guidelines suggest 27 degrees C as a sustainable inlet temperature, which vendors accept without risking warranties.
  • Liquid cooling for extreme heat environments is expensive but can solve a problem when the heat is confined to just a few systems.
  • Traditional hot/cold aisle containment systems work well if the heat is being effectively extracted. However, one common failing is the lack of proper blanking plates to prevent hot air flowing back into the cold aisle.
  • Chimney venting of hot air to secondary cooling or even out of the building is becoming popular with some free-air environments as it prevents input and output air from mixing.
  • Computer room air conditioning (CRAC) units are the traditional way of cooling air before injecting it back into the data center. They work well if properly placed, but issues such as vortices and blocking must be architected out before placement. They also tend to be inflexible: once installed, they cannot be moved easily.
  • Dividing the floor into low-, medium- and high-power (and heat) zones allows cooling to be targeted more effectively and reduces the risk of hot spots that cannot be contained.
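The zoning idea in the last point can be sketched as a simple classification of racks by power draw. The kW boundaries below are assumptions chosen for the example, not industry-standard thresholds:

```python
# Illustrative sketch: bucket racks into low/medium/high heat zones by
# power draw so cooling capacity can be targeted per zone. The kW
# boundaries are assumptions for the example only.

def heat_zone(kw):
    if kw < 5:
        return "low"
    if kw < 15:
        return "medium"
    return "high"

racks = {"A1": 3.5, "B2": 12.0, "C3": 45.0}
zones = {rack: heat_zone(kw) for rack, kw in racks.items()}
print(zones)  # {'A1': 'low', 'B2': 'medium', 'C3': 'high'}
```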
Placement

Once the technology has been chosen, the next step is deciding where to place it. Despite the rise of free-air cooling and liquid cooling solutions, CRAC units are still the most common way of cooling a data center. As mentioned, however, once installed, CRAC units are inflexible. So what should you consider when deciding on placement?

Placing units at the edge of the room at right angles to the hardware is no longer acceptable. It creates problems such as vortices, where air gets trapped between equipment. Airflow is affected by the placement of racks, and this encourages hot spots, which then require secondary cooling to be installed.

With hot/cold aisle containment, CRAC units need to be perpendicular to the hot aisle, and careful monitoring of airflow is important to ensure that heat is evenly removed from the aisle. Otherwise, hot spots will still occur.

Within-row cooling (WIRC) helps to get the cooling to where it is most needed. As racks get denser and heat climbs, WIRC allows cooling to be ramped up and down right at the source of the problem. This helps keep an even temperature across the hardware and balance costs against workload.

If the problem is confined to a single row of racks rather than multiple aisles, use open-door containment with WIRC. In this approach the doors between racks are open, allowing air to flow across the racks but not back out into the aisle. Place the cooling units in the middle of the row, then arrange the equipment so that the racks generating the most heat sit closest to the WIRC unit.
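That row layout can be sketched as a small ordering exercise. This is a hypothetical illustration (rack names and kW figures are invented): the WIRC unit goes in the middle, and racks are seated hottest-first, alternating sides so heat stays balanced around the unit:

```python
# Hypothetical sketch: build a row with the WIRC unit in the middle and
# the hottest racks closest to it, alternating sides outward.

def arrange_row(rack_heat):
    """Given {rack: kW}, return the row order with 'WIRC' in the middle
    and racks sorted so the hottest sit nearest the cooling unit."""
    hottest_first = sorted(rack_heat, key=rack_heat.get, reverse=True)
    left, right = [], []
    for i, rack in enumerate(hottest_first):
        # Alternate sides so the two hottest racks flank the unit.
        (left if i % 2 == 0 else right).append(rack)
    return list(reversed(left)) + ["WIRC"] + right

row = arrange_row({"A": 4.0, "B": 18.0, "C": 9.0, "D": 30.0})
print(row)  # ['C', 'D', 'WIRC', 'B', 'A']
```

Here the two hottest racks (D at 30 kW and B at 18 kW) end up immediately adjacent to the cooling unit, with cooler racks toward the ends of the row.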

For blade servers and HPC, consider in-rack cooling. This solution works best where workload optimization tools provide accurate data about increases in power load, so that as the power load rises, the cooling can be increased synchronously.
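A minimal sketch of that synchronous ramp-up, assuming a fixed conversion from power to heat (the 90% heat fraction and the 60 kW capacity cap below are illustrative assumptions, not figures from the article):

```python
# Minimal sketch: scale the in-rack cooling duty with reported power
# load so cooling rises as load rises. Nearly all rack power ends up
# as heat; the 0.9 fraction and 60 kW cap are illustrative assumptions.

def cooling_setpoint_kw(power_kw, heat_fraction=0.9, max_capacity_kw=60.0):
    """Return the cooling duty to match the rack's heat output,
    capped at the unit's rated capacity."""
    return min(power_kw * heat_fraction, max_capacity_kw)

for load in (10.0, 40.0, 80.0):
    print(load, "->", cooling_setpoint_kw(load))
# 10.0 -> 9.0
# 40.0 -> 36.0
# 80.0 -> 60.0
```

In a real deployment the load figure would come from the workload optimization tooling, and the setpoint would feed the in-rack cooling controller rather than a print statement.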

When placing any cooling solution, best practice is to keep the airflow path as short as possible. This makes the airflow more predictable and significantly improves the efficiency of the solution.

Mark Hirst is T4 product manager at Cannon Technologies (www.cannontech.co.uk).

