Hunger for power drives data center designs

June 1, 2007
Heat generation, cooling efforts, and power consumption add up to design challenges for data center administrators.

by Patrick McLaughlin


The challenge of designing a data center that adequately supports an organization’s needs in the present as well as the future is a formidable task. As the cabling industry has witnessed over a couple of decades, the idea of building infrastructure that will support future applications is an inexact science. Data center administrators in particular face myriad unknowns when making such long-term plans.


For growing numbers of data center managers, the considerations behind a facility’s layout go beyond aesthetic appeal or basic functionality. And whether managers are redesigning an existing data center to gain space and use efficiencies, or plotting a brand new one to serve them for the long-term, a need that has begun to trump all others is the availability of electrical power.

In what must be one of the most exasperating challenges facing data center administrators, a facility often runs out of power before it runs out of real estate. That is, some data centers with open space are unable to add equipment to that space because they do not have sufficient power to support the operation, and in particular the cooling, of that equipment. Organizations preparing to build new data centers put the availability of clean and affordable power at the top of their needs lists.

“With the advent of high-density computer equipment, such as blade servers, many data centers have maxed out their power and cooling capacity,” comments Michael Bell, research vice president for analyst firm Gartner (www.gartner.com). “It’s now possible to pack racks with equipment requiring 30,000 watts per rack or more in connected load. This compares to only 2,000 to 3,000 watts per rack a few years ago.”

Insufficient capacity

Bell made those comments last fall at Gartner’s annual data center conference, where the firm dropped a bombshell prediction: by 2008, 50% of current data centers will have insufficient power and cooling capacity to meet the demands of their high-density equipment.

“Increased power translates into significant increases in heat gain, where the electrical cost to cool the data center can equal or exceed the power to energize the computer equipment,” Bell added.

Arnie Evdokimo, president of DPAir (www.dpair.com), has seen the problem manifest all over the United States, where his company designs and maintains data centers. “In the past couple of years, everyone has been trying to get as much computing power as possible into a single cabinet,” he notes, explaining what has set the table for the forecasted power crunch. DPAir, which has its roots in the air-conditioning industry, understands the inherent importance of having enough cooling capacity to accommodate the increasing heat loads that accompany today’s dense computing environments.

“Some predict that in a couple of years, a single rack of high-density servers will require 47 kilowatts” of power, Evdokimo says. “If that is in fact the case, that single rack will require more than 13 tons of cooling,” which equates to more than 150,000 British Thermal Units (BTUs) per hour. He notes that in years past, which now must seem very long ago, a single 20-ton cooler would be used for an entire computer room. Now the industry faces the possibility that it will take about two-thirds of that amount to cool a lone rack of blade servers.
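
For readers who want to check that arithmetic, the short sketch below runs the same conversion using the standard factors of roughly 3,412 BTU per hour per kilowatt and 12,000 BTU per hour per ton of refrigeration; the 47-kW figure is Evdokimo’s, and everything else is purely illustrative.

```python
# Rough check on the rack-cooling arithmetic quoted above, using the
# standard conversion factors: 1 kW of electrical load produces about
# 3,412 BTU/hr of heat, and 1 ton of refrigeration removes 12,000 BTU/hr.

BTU_PER_KW_HR = 3412      # BTU per hour of heat per kW of load
BTU_PER_TON = 12000       # BTU per hour removed per ton of cooling

def cooling_required(rack_kw):
    """Return (BTU/hr of heat, tons of cooling) for a given rack load in kW."""
    btu_per_hour = rack_kw * BTU_PER_KW_HR
    tons = btu_per_hour / BTU_PER_TON
    return btu_per_hour, tons

btu, tons = cooling_required(47)   # the 47-kW rack Evdokimo cites
print(f"47 kW rack: about {btu:,.0f} BTU/hr, or {tons:.1f} tons of cooling")
# -> roughly 160,000 BTU/hr, a bit over 13 tons
```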

Reality check

How realistic is Gartner’s threat of impending power shortages for data centers? Ian Seaton, senior applications technologist at Chatsworth Products Inc. (CPI; www.chatsworth.com), opined, “Is it real? Yes. Is it serious? Yes. But I don’t think it’s terribly pervasive yet.” Many data centers, he added, are facing the potential for such a shortage.

Evdokimo, on the other hand, sees the situation with regularity. “What happens often is that an organization whose data center is already at an overload condition brings in more racks of servers. Sometimes, an acquisition will blindside a data center manager with additional servers and more load.”

In scenarios like a corporate acquisition, or any in which an existing facility is growing rather than being built new, one of the primary challenges is to perform the upgrade while the facility is live. “Many of these places have 30 or 40 million dollars invested into a facility,” says Evdokimo. “They cannot shut down.” Often, though, they can buy five to ten years’ worth of capacity by upgrading their power service with additional transformers and generators.

Fragile grid

The notion of a reliable and plentiful supply of power from electric utilities is, unfortunately, unrealistic in most parts of the country. High-consuming users can contract with electric utilities for a specific amount of power, which the utility must make available whenever the customer needs it. At times, demand outpaces capacity, just as it does with other energy resources, including the one that hits close to home for everybody: petroleum. When power demand is high and supply is low, utilities purchase power from other sources. They will meet the demand but, as at the gas pump during a supply shortage, consumers pay mightily for their energy. And that is when the utilities can obtain enough power to meet demand; when they cannot, they may implement rolling blackouts.

The stresses of power shortages and potential rolling blackouts affect some parts of the country, while others enjoy a more reliable power supply. Naturally, those areas without power-supply challenges are popular places for data center new-construction projects.

“There has been a building boom of large data centers in the Pacific Northwest, where power is plentiful,” says CPI’s Seaton. “The availability of power certainly affects the decision-making process.”

Indications are that more organizations will face those decisions in the near term, as Gartner further discussed the impact of swelling power demand in data centers. According to the analyst firm, the power required for non-information-technology equipment in the data center (including cooling, fans, and pumps) has traditionally represented about 60% of total annual energy consumption. As power requirements continue to grow, Gartner warns, energy costs will emerge as the second-highest operating cost in 70% of worldwide data center facilities by 2009.
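
A quick back-of-the-envelope sketch shows how that 60% non-IT share compounds the energy bill; the 500-kW IT load and $0.10/kWh rate used below are hypothetical assumptions, not Gartner figures.

```python
# Illustrative look at how the ~60% non-IT share cited above turns into an
# annual energy bill. The IT load and electricity rate are hypothetical
# inputs, not figures from the article.

NON_IT_SHARE = 0.60      # cooling, fans, pumps, etc., per the figure above
HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw, rate_per_kwh):
    """Estimate annual facility energy cost for a continuous IT load."""
    total_load_kw = it_load_kw / (1 - NON_IT_SHARE)   # IT is the other 40%
    annual_kwh = total_load_kw * HOURS_PER_YEAR
    return annual_kwh * rate_per_kwh

# Example: a 500 kW IT load at an assumed $0.10/kWh
print(f"${annual_energy_cost(500, 0.10):,.0f} per year for the whole facility")
# -> roughly $1.1 million per year
```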

Gartner also offers a positive longer-term outlook, stating that innovations now underway will converge over the next couple of years to mitigate the power/cooling issue. “Equipment manufacturers are developing more energy-efficient enclosures, processors, and cooling solutions,” said Bell. “The leading processor manufacturers are battling to produce more energy-efficient chipsets. Server manufacturers are employing more-efficient power supplies, heat sinks, and power-management systems, as well as offering a host of in-rack cooling solutions, supplemented by facility design and assessment services. We’ll see fully integrated management systems that will monitor and manage server workloads and power/cooling demand and optimize capacities in real time.”

Other efforts, closer to the realm of cabling infrastructure, are also aimed at alleviating heat-load troubles. In particular, the design and construction of the racks and enclosures that house servers and cabling are adapting to meet these challenges.

“There are significant benefits to isolating the return air from the supply air,” Seaton says. “Good isolation between supply air and return air allows higher supply-air temperatures. When you can use free air and maintain complete isolation between supply air and return air, you can start delivering air with temperatures in the 70s.”

The ability to deliver 70-something-degree air, rather than the far cooler air that is currently being pumped into data centers, will significantly reduce facilities’ air-conditioning burden, thereby reducing their demand for power. Seaton can point to scientific research to back up his statement, including work being carried out at the Lawrence Berkeley National Laboratory (www.lbl.gov). In one of Berkeley’s recent research projects, several data centers used outside air to cool their facilities, through the use of airside economizers. Berkeley reported positive results, but many administrators would approach the possibility with trepidation.

“The primary concerns about using outside air are humidity control and particulate contamination,” Seaton notes. His company, CPI, has developed a line of server enclosures incorporating passive cooling techniques that do not rely on fans or liquids, but rather are based on the fundamentals of physics as well as the placement of barriers between supply and return air.
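
The go/no-go logic behind an airside economizer can be reduced to a sketch like the one below, reflecting Seaton’s two concerns about outside-air temperature and humidity; all of the thresholds are illustrative assumptions rather than published guidelines.

```python
# Minimal sketch of an airside-economizer go/no-go decision, reflecting the
# two concerns Seaton raises: outside-air temperature and humidity. Every
# threshold here is an illustrative assumption, not a published guideline.

def use_outside_air(outdoor_temp_f, outdoor_rh_pct,
                    supply_setpoint_f=72.0,   # "air in the 70s" target
                    min_rh_pct=20.0, max_rh_pct=60.0):
    """Return True if free cooling with outside air looks viable."""
    cool_enough = outdoor_temp_f <= supply_setpoint_f
    humidity_ok = min_rh_pct <= outdoor_rh_pct <= max_rh_pct
    return cool_enough and humidity_ok

# A 65-degree day at 45% relative humidity qualifies for free cooling;
# the same temperature at 85% RH does not.
print(use_outside_air(65, 45))   # True
print(use_outside_air(65, 85))   # False
```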

Similarly, barriers and other airflow-affecting products are becoming increasingly important in the racks and enclosures that house switches. “The problems currently experienced in server environments are making their way to switch environments,” says Jeffery Paliga, director of global solutions development for Panduit Corp. (www.panduit.com). He recommends auditing a data center for airflow obstructions.

“The proper use of blanking panels inside enclosures, and placement of grommets on floor tiles, are two best-practice and cost-effective steps that can make a difference” in keeping switching environments cool, Paliga added.

“Cable management can affect heat dissipation as well,” he continues, “including in blade-server environments. If cables are managed improperly, and are restricting air from exiting the cabinet, it can ultimately lead to cooling issues within the cabinet.”

According to DPAir’s Evdokimo, energy efficiency in all forms is important: “We engineer every building like it’s our own, using the highest-efficiency equipment.” For those with the opportunity, or responsibility, to build anew, he advises, “Do everything you can to over-engineer and over-design your facility so that you can grow into it. Whatever you do, you cannot do this [building a data center] twice.”

Realistic expectations are important too, he adds, noting that if an organization simply never will need more than a certain amount of capacity, that should be factored in. But Evdokimo recalls, “A growing number are definitely running out of power before they are running out of floor space.”

Be prepared for anything

And, like Panduit’s Paliga, Evdokimo emphasizes the importance of self-study. “The biggest thing: Do power audits. Understand where you are in terms of power consumption. Be prepared for anything that can happen, and always understand what it will cost you if you go down.”
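
In practice, such an audit can begin as simply as the sketch below, which tallies measured rack draws against usable capacity and reports the remaining headroom; the rack readings, capacity figure, and 80% warning threshold are all hypothetical.

```python
# A power audit can start as simply as this: tally measured rack draws
# against the capacity you can actually use and watch the headroom.
# Rack readings, the capacity figure, and the 80% threshold are hypothetical.

rack_loads_kw = {        # measured draw per rack (illustrative values)
    "row-A1": 18.5,
    "row-A2": 24.0,
    "row-B1": 31.2,
    "row-B2": 12.7,
}
usable_capacity_kw = 120.0   # assumed capacity after UPS/redundancy derating

total_kw = sum(rack_loads_kw.values())
headroom_kw = usable_capacity_kw - total_kw
utilization = total_kw / usable_capacity_kw

print(f"Total IT load: {total_kw:.1f} kW "
      f"({utilization:.0%} of usable capacity, {headroom_kw:.1f} kW headroom)")
if utilization > 0.8:
    print("Warning: little room left to add equipment; plan upgrades now.")
```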

When Gartner made the prediction that half of today’s data centers will run out of power by 2008, it also provided guidance on avoiding unnecessary hardship. “To build an optimized, reliable, and efficient facilities environment, Gartner recommends that data center managers take a holistic approach in planning, designing, and laying out the data center to optimize power and cooling capacity.” It added that all the variables, from site location to building type, building systems, rack configuration, equipment deployment, and airflow dynamics, must be integrated and optimized.

Some final words from Gartner’s Bell look toward the future: “Although the power and cooling challenges will not be a perpetual problem, it is important for data center managers to focus on the electrical and cooling issue in the near term, and adopt best practices to mitigate the problem before it results in equipment failure, downtime, and high remediation costs.”

PATRICK McLAUGHLIN is chief editor of Cabling Installation & Maintenance.


Among current solutions designed to meet the increased power densities of blade servers and other data center demands, the InfraStruXure InRow RP cooling unit from American Power Conversion (www.apc.com) features an architecture that can save energy and available power by eliminating the need for constant-speed fans. Available in both chilled-water and refrigerant-based designs, the InRow RP cooling unit places cooling next to the heat source, and can support power densities of up to 70 kW per rack when used with hot-air containment systems. An integrated humidifier provides room-level moisture control, preventing static-electricity damage to electronic equipment, while additional moisture control is provided through a dedicated dehumidification cycle and a standard reheat coil.
