Researchers at the United States Department of Energy’s Lawrence Berkeley National Laboratory (www.lbl.gov) recently collaborated with the Silicon Valley Leadership Group (SVLG; svlg.net) to present case studies of energy-efficient data center administration. The SVLG has partnered with the California Energy Commission to encourage its member companies to demonstrate new or underused energy-efficiency strategies for data centers. Companies including Intel, IBM, Hewlett-Packard, Sun Microsystems, NetApp, and Oracle have participated. Several of these case studies were presented at an October 15, 2009 event called the Data Center Energy Efficiency Summit, hosted by NetApp.
Data centers are among the fastest-growing energy users, according to an EPA study led by Berkeley Lab scientists, the lab stated when announcing the October event. William Tschudi, project manager in the Berkeley Lab Environmental Energy Technologies Division’s (EETD) application team, said of the efficiency efforts: “These demonstrations are taking place in corporate facilities in Silicon Valley, with major partners, both on the equipment supply and user side. SVLG is trying out different technological approaches, determining which ones work and which don’t, and publishing the results so that data center managers can evaluate the case studies and decide what works for their facilities.”
One of the case studies presented comes from an Intel data center in Santa Clara. There, the engineering team uses temperature sensors currently deployed in servers to control the ambient temperature in the data center. The goal was to show how to access these sensors and use them to directly control computer room air conditioning.
“The temperature sensor data is available on the IT network,” said Geoffrey Bell of Berkeley Lab’s EETD. “The challenge for this project was to connect the data to the computer room air handler’s control system.
“The team developed a control strategy in which the chilled water flow and the fans in the computer room air handlers are controlled separately. Using the existing sensors in the IT equipment eliminates an additional control system, and providing optimal cooling saves a significant amount of energy.”
Participants report that the project successfully demonstrated that the temperature sensors already built into IT equipment can be used to regulate the temperature of a data center equipment room more efficiently. According to the lab, IT equipment manufacturers agree that the temperature at the server inlet, at the server’s front panel, is the measurement that should govern the operation of air conditioning equipment. In most data centers today, however, temperature is measured at the return to the computer room air handler or air conditioner.
Intel collaborated with IBM, Hewlett-Packard, Emerson, Wunderlich-Malec Engineers, FieldServer Technologies, and Berkeley Lab to install the necessary components and develop the new control scheme. The next step will be to develop an optimized control system using the internal sensor data as input, which the team hopes could help realize an energy savings of 30 to 40 percent of a data center’s cooling energy.
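The core idea — driving cooling from the hottest server inlet rather than from return-air temperature — can be pictured with a minimal sketch. The article does not describe the team’s actual control algorithm; the setpoints, deadband, and fan-speed interface below are hypothetical placeholders, loosely based on the common practice of targeting the upper end of the recommended inlet range.

```python
# Hypothetical sketch: adjust CRAH fan speed from server-inlet temperatures
# reported by the servers' own sensors, instead of return-air temperature.
# All setpoints and step sizes are illustrative, not the project's values.

RECOMMENDED_MAX_F = 80.6  # example upper bound for server-inlet temperature
DEADBAND_F = 2.0          # tolerance before the controller backs off cooling

def cooling_command(inlet_temps_f, fan_pct, step=5):
    """Return a new fan-speed percentage based on the hottest server inlet.

    inlet_temps_f -- inlet temperatures from the IT equipment's sensors
    fan_pct       -- current CRAH fan speed (0-100 percent)
    """
    hottest = max(inlet_temps_f)
    if hottest > RECOMMENDED_MAX_F:
        fan_pct = min(100, fan_pct + step)   # too warm: increase airflow
    elif hottest < RECOMMENDED_MAX_F - DEADBAND_F:
        fan_pct = max(20, fan_pct - step)    # comfortably cool: save fan energy
    return fan_pct

print(cooling_command([72.5, 75.1, 82.0], fan_pct=60))  # hottest inlet too warm -> 65
print(cooling_command([70.0, 72.0, 74.0], fan_pct=60))  # ample margin -> 55
```

The point of the sketch is the choice of input signal: because the controller reacts to the worst-case inlet, it can slow the fans whenever every server is comfortably within range — the source of the projected fan and chilled-water savings.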
Another success story from Berkeley Lab and SVLG comes from the California Franchise Tax Board’s data center in Sacramento. Lab researchers worked with data automation software and hardware (DASH) control systems, as well as a wireless sensing network from Federspiel Controls, to demonstrate how dynamic data center cooling can save money.
The control system uses wireless sensors and Web-based software to control the computer room air handling units. The DASH software could dynamically turn off 6 to 8 of the 12 cooling units while ensuring that inlet air temperatures stayed within the recommended range.
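The staging behavior can be sketched in a few lines — purely illustrative, since Federspiel Controls’ actual DASH algorithms are not described in the article. The idea is to keep only as many of the 12 cooling units running as the measured inlet temperatures require:

```python
# Illustrative staging logic: run as few of the 12 cooling units as the
# wireless inlet-temperature readings allow. All thresholds are hypothetical.

TOTAL_UNITS = 12
MIN_UNITS = 4           # floor consistent with "6 to 8 of the 12" turned off
INLET_HIGH_F = 80.6     # example top of the recommended inlet range
STAGE_OFF_BELOW_F = 75.0  # ample margin: safe to shed a unit

def units_to_run(inlet_temps_f, running):
    """Decide how many cooling units to keep on, given sensor readings."""
    hottest = max(inlet_temps_f)
    if hottest > INLET_HIGH_F:                  # too warm: stage a unit on
        return min(TOTAL_UNITS, running + 1)
    if hottest < STAGE_OFF_BELOW_F:             # well within range: shed one
        return max(MIN_UNITS, running - 1)
    return running                              # in band: hold steady

print(units_to_run([74.0, 76.5, 79.0], running=8))  # in band -> hold at 8
```

Shedding one unit at a time, and only while every inlet stays inside the recommended band, is what lets a scheme like this cut cooling energy without risking the IT equipment.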
Other energy-reducing measures included rearranging floor tiles, installing variable-frequency fan drives, and installing blanking panels to keep hot exhaust air from recirculating into the cold aisles between equipment racks. All these measures appear in a guide to best practices for data center energy efficiency developed by Berkeley Lab researchers.
Those working on the Franchise Tax Board’s data center made these changes incrementally to compare the effect of each measure on energy performance. Overall, they report, the project saves more than 475,000 kilowatt hours per year, which is 21.3 percent of the facility’s baseline total energy consumption. The DASH system saved 15.2 percent with a payback time of just under one year after rebates. Overall the energy reductions eliminate more than 40 tons of carbon-dioxide emissions per year. The total project, including best practices, saves close to $43,000 annually and cost $134,000 for a payback time of 2.25 years after rebates.
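The reported figures are roughly self-consistent, as a quick back-of-the-envelope check shows. The rebate amount itself is not stated, so the net cost below is merely inferred from the quoted payback time:

```python
# Sanity-check the Franchise Tax Board project figures reported above.

annual_kwh_saved = 475_000      # kWh per year
savings_fraction = 0.213        # 21.3% of baseline consumption
annual_dollars_saved = 43_000   # approximate annual savings
project_cost = 134_000          # total project cost before rebates
payback_years = 2.25            # reported payback after rebates

# Implied baseline energy use of the facility
baseline_kwh = annual_kwh_saved / savings_fraction
print(f"baseline ~ {baseline_kwh:,.0f} kWh/yr")        # ~2.23 million kWh/yr

# Without rebates, payback would be longer than reported ...
print(f"pre-rebate payback ~ {project_cost / annual_dollars_saved:.2f} yr")

# ... so the quoted 2.25-year payback implies roughly this rebate total
implied_net_cost = payback_years * annual_dollars_saved
print(f"implied rebates ~ ${project_cost - implied_net_cost:,.0f}")
```

Run as written, this puts the facility’s baseline near 2.23 million kWh per year and suggests rebates on the order of $37,000 closed the gap between the $134,000 gross cost and the 2.25-year payback.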
Colocation case study
At the October 15 summit, James Kennedy, senior facility manager at Sacramento hosting facility RagingWire Enterprise Solutions, presented a case study of the measures taken at his facility. RagingWire currently has a single facility, with another one under construction and a third soon to break ground, according to Kennedy. In his presentation, entitled “Maximizing cooling efficiency in a concurrently maintainable and fault-tolerant data center,” Kennedy emphasized that every energy-efficiency measure he took had to meet RagingWire’s reliability and redundancy requirements.
He also emphasized that it was important for him to be able to make measurements dynamically, because as a hosting facility RagingWire consistently deals with customer move-ins as well as changes with existing customers. The company found a wireless sensor system from SynapSense fit its needs, particularly for improving its ability to monitor static pressure under the data floor.
RagingWire’s 200,000-square-foot facility has a four-foot raised floor served by 154 computer room air handling units rated at 30 and 40 tons. By sealing the data floor — including power-distribution units and unnecessary holes — and by using cold locks, RagingWire raised the average static pressure across the floor from 0.06 to 0.115 inches of water.
“Everyone should install monitoring,” Kennedy said during his presentation. “Seal the raised floor. Maintain static pressure under the floor,” he continued. “Once we sealed holes and equipment on the floor, we doubled our static pressure.”
The floor-sealing initiative meant changes not only in equipment and monitoring, but in personnel practices as well. As a hosting facility, RagingWire has many individuals working in its facility from time to time. “When electricians would go under the floor, they didn’t think twice about pulling up three, four, or five tiles,” Kennedy said. Such disruption of under-floor static pressure wreaks havoc on cooling efforts.
RagingWire also installed chimneys atop computer room air handlers. “We have 32-foot ceilings,” Kennedy explained. The chimneys raised the top of the air handlers from 6 feet to 12 feet, and raised the CRAC intake temperature by 5 degrees Fahrenheit. The SynapSense sensors played a role in this part of the project as well. “The wireless sensors are pretty easy to place in the roof,” Kennedy said. “We know what the thermodynamic layers look like, and have 12-foot versus 6-foot returns.
“The goal should always be getting air as hot as possible back to your CRAC units,” he concluded, and the wireless sensors helped RagingWire achieve that objective.
Berkeley Lab’s collaboration with SVLG and participation in the demonstration projects is funded by the California Energy Commission’s Public Interest Energy Research program. Berkeley Lab is a U.S. Department of Energy national laboratory. It conducts unclassified scientific research and is managed by the University of California for the Department of Energy Office of Science.