Feeling hot, hot, hot …and not liking it

Sept. 1, 2007
While heat has always been an issue in data centers, density makes it a critical concern today.

by Patrick McLaughlin

Since January, Cabling Installation & Maintenance has dedicated multiple pages in each issue to one aspect or another of data center management. Over the course of those nine months, I have commented privately (and only half-jokingly) on several occasions that when data centers are the subject of a conversation, it does not really matter what topics are discussed initially; sooner or later, the conversation will funnel down to heat, heat generation, and heat dissipation.

This photo exemplifies how densely servers can be deployed in an open-frame rack. The computing power within a rack full of blade servers generates significant heat in a concentrated area, posing a serious challenge to data center managers who must mitigate that heat.

It is a fact of life for data center managers, and some might argue it is the fact of their professional lives.

Blasts from the past

Consider these quotes from our first eight months of coverage devoted to data centers:

  • In January, Corning Cable Systems’ (www.corningcablesystems.com) data center specialist Alan Ugolini commented, “If you stacked up the issues that are important in a data center, cooling would be on top.”
  • The next month, when the official topic under discussion was structured cabling systems’ role in data center management, John Schmidt, senior product manager with ADC (www.adc.com), noted, “Proper cable management and routing are essential, and have an impact on cooling and thermal management.”
  • In March, we delved into the question of whether cable-conveyance systems should be run overhead or under raised floors within data centers. Cable Management Solutions’ (www.snaketray.com) president Roger Jette stated off the top, “One of the driving factors has little to do with cable, but rather with thermal management. Some data center managers want to keep the space underneath the floor reserved for massive amounts of air to help cool the data center. They run the cable overhead so nothing but air can run under the floor.”

The next several months followed suit: several data-center-related topics touched on, if they did not explicitly revolve around, ridding the data center of massive amounts of heat. In this issue we are (surprise!) kicking off four consecutive months of coverage on how to cool data centers effectively.

Thorough answers are not easy to come by, nor are the available solutions simple. If they were simple and plentiful, trade-media coverage would not have reached the fever pitch it has, and the Environmental Protection Agency (EPA) would not have researched and recently issued a report on the enormous amounts of energy being spent to operate and cool data centers.

This month, we will take a step back from the fray to look at why the problem exists and at some of the current thinking about solutions. Next month, we will look closely at some of the technological developments undertaken by enclosure makers so those products can help rather than hinder cooling efforts. And in the final two months of the year, we will examine some holistic approaches to heat, cooling, and energy conservation that go far beyond the realm of structured cabling systems. Those articles will also examine the recent EPA report on data center energy consumption and proposals for greater efficiency.

Here and now

“Heat has always been a big issue in data centers, with integrated circuits providing computing power and producing heat,” explains Arnie Evdokimo, president of data-center design and heating/ventilating/air-conditioning (HVAC) firm DPAir (www.dpair.com). While the phenomenon has been going on for years, decades even, he says the situation is building to a crescendo: “The biggest problem today is the density of heat. Blade servers really are a culprit, and it’s going to get worse before it gets better.”

Many have pointed to the blade server as the quintessential source of dense heat. The compaction of so much computing power into a small footprint lets data center managers save real estate, because so much computing can take place in so little space. A byproduct of the energy required to generate that computing power is heat, as Evdokimo pointed out. With the density of today’s blade servers, a rack or enclosure can draw 4 to 5 kW of power. Some predictions hold that in just a few years, a single rack will hold 47 kW.

“The kilowatt density has changed a lot,” Evdokimo says, adding that as a designer of HVAC systems for data centers, he “must provide more tonnage [i.e., cooling capacity] in that area. Cooling systems are now occupying more space, sometimes including the entire perimeter of a data center.”
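
To put those rack-power figures in HVAC terms, the short sketch below (an illustration, not something from the article) converts the kilowatt loads cited above into refrigeration tons, using the standard equivalence of roughly 3.517 kW (12,000 BTU/hr) per ton.

```python
# Rough illustration: converting a rack's heat load (kW) into the cooling
# "tonnage" an HVAC designer must provision. The rack figures are the ones
# cited in the article; the conversion factor is the standard
# 1 refrigeration ton = 12,000 BTU/hr = 3.517 kW.

KW_PER_TON = 3.517

def cooling_tons(rack_kw: float) -> float:
    """Cooling capacity (tons) needed to remove a rack's heat load."""
    return rack_kw / KW_PER_TON

for rack_kw in (4.0, 5.0, 47.0):
    print(f"{rack_kw:5.1f} kW rack -> {cooling_tons(rack_kw):5.1f} tons of cooling")
```

A 4 to 5 kW rack works out to roughly a ton and a half of cooling; a predicted 47 kW rack is more than 13 tons for a single footprint, which is why Evdokimo sees cooling plant claiming ever more floor space.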

Echoing Roger Jette’s remarks from earlier, Evdokimo notes that raised-floor heights have evolved because deeper floors allow more air to run over chillers: “Some older centers have shallow floors, and many of those floors are covered in cable, which decreases cooling efficiency.”

Overall, he says, there are two primary tips to be employed by those with the luxury (or responsibility) of designing a data center from scratch:

1. A deep floor with no restrictions. “Use a 4-ft. raised floor when possible,” he says. “Make sure it is above water-level grade and include subfloor drains. Provide plenty of room for cables, power equipment, and serviceability.” (See the airflow sketch after these tips for why unobstructed underfloor space matters.)

2. Build a high ceiling into the data center. While standard ceiling height in many areas can be as little as six feet, Evdokimo recommends 12- to 14-ft. ceilings. The benefit, he explains, is that “the heat can go across the room and stratify.” A downside of this approach is that the data center will have more open space to fire-protect, meaning more volume to cover with expensive fire-suppression gases. But what has held true in the general construction trade for many years…
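
The airflow arithmetic behind the deep-floor tip can be sketched with the standard sensible-heat relation for air, Q (BTU/hr) ≈ 1.085 × CFM × ΔT (°F). The temperature rise assumed below is an illustrative figure of our own, not Evdokimo’s; the point is simply how quickly the required volume of supply air grows with rack density.

```python
# Back-of-the-envelope airflow estimate (illustrative assumptions, not
# figures from the article). Standard sensible-heat relation for air:
#   Q [BTU/hr] ~= 1.085 * CFM * delta_T [deg F]
# rearranged to find the airflow needed to carry away a rack's heat load.

BTU_PER_HR_PER_KW = 3412.14  # 1 kW = 3,412 BTU/hr

def airflow_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """CFM of cooling air needed for rack_kw of heat, assuming a
    20 deg F rise from supply (cold aisle) to exhaust (hot aisle)."""
    return rack_kw * BTU_PER_HR_PER_KW / (1.085 * delta_t_f)

for rack_kw in (5.0, 47.0):
    print(f"{rack_kw:4.1f} kW rack: roughly {airflow_cfm(rack_kw):,.0f} CFM of supply air")
```

At 5 kW per rack the plenum must deliver on the order of 800 CFM per rack; at 47 kW it is several thousand. A shallow floor covered in cable, the situation Evdokimo describes in older centers, simply cannot move that much air, which is the point of his first tip.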

On page 47 of this month’s issue, you will find an article authored by Ian Seaton of Chatsworth Products Inc. (www.chatsworth.com) that describes a philosophy the company embraces and around which it has developed products: passive cooling. Chatsworth maintains that complete separation of return air from supply air can sharply reduce the burden placed on data center HVAC systems.

Seaton comments, “Given current circumstances, people are going to start facing these issues and specifying this practice [complete separation of return and supply air] into design specifications. When you can use free air, and maintain complete isolation between supply air and return air, you can start delivering air in the mid-70s.”
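
A simple mixing model (an assumption made here for illustration, not something taken from Seaton’s article) shows why that isolation matters: any hot return air that recirculates into the server intakes forces the supply air to be delivered that much colder.

```python
# Illustrative mixing model (assumed for this sketch): if a fraction f of
# hot return air recirculates into the server intakes, then
#   T_intake = (1 - f) * T_supply + f * T_return
# Solving for T_supply shows how cold the supply air must be to hold a
# mid-70s (deg F) intake target as recirculation grows.

T_RETURN = 95.0          # deg F, hot-aisle return air (assumed value)
T_INTAKE_TARGET = 75.0   # deg F, desired server inlet temperature

def required_supply_temp(t_intake: float, t_return: float, f_recirc: float) -> float:
    """Supply temperature needed when a fraction f_recirc of return air
    mixes back into the cold aisle."""
    return (t_intake - f_recirc * t_return) / (1.0 - f_recirc)

for f in (0.0, 0.10, 0.25):
    t_supply = required_supply_temp(T_INTAKE_TARGET, T_RETURN, f)
    print(f"recirculation {f:4.0%}: supply air must be {t_supply:.1f} deg F")
```

With zero recirculation, the supply air can be delivered at the intake target itself, the “mid-70s” figure Seaton cites; as recirculation grows, the chillers must produce progressively colder air, which is exactly the burden complete supply/return separation aims to remove.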

Peeling back the layers

While the cause of the near-crisis heat generation in today’s data centers is fairly straightforward, the means of addressing it are many-layered, and industry thinkers have not come close to consensus. In the coming months, we will peel back some of those layers and provide practical information for the many data center managers who must make decisions today about their facilities’ futures.

PATRICK McLAUGHLIN is chief editor of Cabling Installation & Maintenance.
