Like data center environments, cooling approaches run the gamut

Sept. 1, 2018
From Google’s high-profile liquid cooling approach to techniques employed at the edge, the options for keeping facilities climate-controlled vary.

Recently several data-center-specific media outlets reported that Google deployed liquid cooling technologies in its facilities. Among those reporting were datacenterknowledge.com’s Yevgeniy Sverdlik, who explained that the power consumption of the company’s third generation of artificial intelligence circuits, Tensor Processing Units, forced the change from air cooling to liquid cooling. “Ever since TPU 3.0 was introduced internally, Google data center engineers have been busy retrofitting infrastructure to accommodate direct-to-chip liquid cooling—and they’ve had to move fast,” Sverdlik says, adding, “Liquid cooling is now being deployed in many Google data center locations.”
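
The physics behind that shift is straightforward: at the power densities of modern AI accelerators, air can no longer carry heat away fast enough, while a modest flow of liquid can. As a rough illustration only (the figures below are assumptions, not TPU 3.0 specifications), the coolant flow a direct-to-chip cold plate needs follows from Q = m·c·ΔT:

```python
# Back-of-the-envelope coolant flow for a direct-to-chip cold plate.
# All numbers are illustrative assumptions, not TPU 3.0 specifications.

SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K)
WATER_DENSITY_KG_PER_L = 1.0

def required_flow_lpm(chip_power_w: float, coolant_rise_k: float) -> float:
    """Liters/minute of water needed to absorb chip_power_w while the
    coolant warms by coolant_rise_k across the cold plate (Q = m*c*dT)."""
    kg_per_s = chip_power_w / (SPECIFIC_HEAT_WATER * coolant_rise_k)
    return kg_per_s / WATER_DENSITY_KG_PER_L * 60.0

# A hypothetical 400 W accelerator with a 10 K coolant temperature rise:
print(f"{required_flow_lpm(400, 10):.2f} L/min per chip")  # ~0.57 L/min
```

Scaled across thousands of accelerators, even fractions of a liter per minute per chip add up to the plant-level plumbing retrofits Sverdlik describes.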

As is the case with data centers, there are the hyperscale- or cloud-level facilities operated by the likes of Google, Amazon, Alibaba, Microsoft, Apple and others … and then there’s everyone else, whose computing, powering, and cooling needs can be dwarfed by the giants. Nonetheless, each data center facility is mission-critical to its owner, and cooling is a primary consideration for all of them. Plus, “everyone else” isn’t a single demographic. Colocation, enterprise, and edge facilities each present separate opportunities and challenges.

This photo was taken at the opening of Lefdal Mine Datacenter, May 10, 2017. The overall modular concept eventually will accommodate up to 1,500 containers with a cooling output of about 200 megawatts.

Colo and whitespace

One colocation facility that counts energy-efficient cooling among its selling points is the Lefdal Mine Datacenter (LMD), located in Norway. It opened in May 2017 with a 45-megawatt (MW) cooling capacity available to its customers. When announcing its opening and first customers, LMD said its “flexibility is unique in terms of available space as well as different technical solutions. The large space and capacity—16-meter roof height in the mountain halls—and the related logistics allow different cost-effective product solutions. LMD has a potential of 120,000 square meters of white space and 200-plus MW IT capacity, delivered in container solutions or traditional white space. This enables a pay-as-you-grow model.”

Jørn Skaane, LMD’s chief executive officer, commented, “We can facilitate all known concepts for white space solutions, and the facility structure enables a streamlined solution for containers in different shapes and sizes. We can also customize power density, temperature, humidity, operation equipment, Tier level, and all related services, ensuring the right solution for all our customers.”

The facility is located “next to a deep, cold fjord with a stable and ample supply of price-leading, CO2-neutral hydroelectric energy,” LMD added. “It has excellent links to the local road network, shipping port, communication and fiber networks, two local airports and a helipad on site.”

LMD collaborated with IBM and Rittal to develop both container and whitespace facility offerings to customers. At the time of the facility’s opening, Rittal explained that water from the fjord close to the facility is the source of its cooling. “The energy costs are low, and the system achieves a Power Usage Effectiveness of 1.15,” Rittal said.
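
Power Usage Effectiveness (PUE) is simply total facility power divided by IT load, so a PUE of 1.15 means only 15 percent overhead goes to cooling, power distribution, and everything else. A minimal sketch with illustrative numbers:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_load_kw

# Illustrative only: a 1,000 kW IT load at the reported PUE of 1.15
# implies about 1,150 kW drawn overall, i.e. roughly 150 kW for
# cooling, power distribution, lighting and other overhead.
print(pue(1150.0, 1000.0))  # 1.15
```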

Rittal’s chief executive officer, Karl-Ulrich Köhler, added, “The Lefdal Mine Datacenter project is impressively demonstrating how convenient it can be to establish a secure, efficient and cost-effective data center in a very short time. This solution’s high degree of standardization combined with the location advantages of the western coast of Norway result in an excellent total-cost-of-ownership analysis. Significant cost savings of up to 40 percent can be achieved compared to a cloud data center, for example, in Germany.”

Collaborative development

When announcing new cooling systems for colocation facilities in late 2017, Vertiv stated, “Nearly 60 percent of enterprise data center managers report that they will increase their use of colocation and cloud hosting over the next 12 months, and cost and scalability are in the top four attributes customers use to select a colocation provider.” The company cited its own Colocation Data Center Usage Report as the source of this information. Vertiv introduced the Liebert EFC Indirect Evaporative Freecooling Solution (400 kW capacity), the Liebert DSE Packaged Freecooling System (400 kW capacity), and the Liebert DSE Freecooling System (250 kW capacity) at that time.

Digital Realty’s vice president of global design, Kevin Dalton, said, “Digital Realty and Vertiv co-developed the new Liebert DSE 250-kW solution as an extension of the Liebert DSE pumped refrigerant technology that we used in our data centers for more than four years. Our solution helps us achieve our sustainability objectives and better serve our customers with a cooling technology that reduces energy consumption, eliminates water usage for cooling and stabilizes the data center thermal environment.”

As always, a common thread among cooling solutions is efficient energy consumption—whether the facility is a hyperscale, colocation, enterprise or edge data center. On May 17, U.S. President Trump signed Executive Order 13834 Regarding Efficient Federal Operations. On its CrossConnect Blog, Chatsworth Products Inc. (CPI) summarizes some of this order’s implications. The order, CPI says, “establishes streamlined goals for federal energy efficiency, renewable energy efficiency, and other aspects of managing operations of federal buildings.

“Through the executive order, the administration will drive continued action and focus on increasing efficiency of facilities and accomplishing goals in a manner that increases efficiency, eliminates unnecessary use of resources, and protects the environment.”

The order specifically lists the following goals, among others.

  • Achieve and maintain annual reductions in building energy use and implement energy efficiency measures that reduce costs
  • Ensure that new construction and major renovations conform to applicable building energy efficiency requirements and sustainable design principles
  • Consider building efficiency when renewing or entering into leases; implement space utilization and optimization practices; and annually assess and report on building conformance to sustainability metrics

CPI then noted, “Considering data centers are among the most energy-hungry facilities, both federal and private data centers can benefit from current energy efficiency strategies.” The company pointed specifically to its passive cooling solutions, and power distribution units (PDUs) that can be monitored and managed, as technologies that can help facilities achieve these objectives.

The company’s passive cooling solutions “offer innovative airflow management techniques, helping data centers maximize their cooling efficiencies without the need for additional CRAC [computer room air conditioning] units, in-row air conditioners or liquid cooling solutions,” according to CPI.

Additionally, CPI’s eConnect brand PDUs enable data center managers to “boost operational efficiency by managing and monitoring power at the rack and device level,” the company says. “Power management within the data center—and particularly inside a cabinet—is critical to ensuring availability of all IT applications, as well as to minimize the overall energy footprint of the data center.”
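
To make the rack- and device-level monitoring idea concrete, the sketch below aggregates hypothetical outlet-level readings into a rack total and flags a rack nearing its power budget. The readings, the 5 kW budget, and the 80 percent threshold are all assumptions for illustration; this is not CPI’s eConnect interface.

```python
# Hypothetical outlet-level readings (watts) from an intelligent rack PDU.
# Illustrative values only; not an actual eConnect API.
outlet_watts = {1: 310, 2: 295, 3: 480, 4: 150, 5: 0, 6: 525}

RACK_BUDGET_W = 5000       # assumed rack power budget
ALERT_THRESHOLD = 0.80     # warn above 80% of budget

rack_total = sum(outlet_watts.values())
utilization = rack_total / RACK_BUDGET_W

print(f"Rack draw: {rack_total} W ({utilization:.0%} of budget)")
if utilization > ALERT_THRESHOLD:
    print("Warning: rack is approaching its power budget")
```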

Chatsworth Products Inc.’s eConnect power distribution units can be monitored and managed as part of a facility’s energy-efficiency efforts.

At the edge

As part of IHS Markit’s ongoing data center market analysis, analyst Maggie Shillington recently issued a brief titled “Designing for the Edge.” In it, she writes, “Edge has become an exciting buzzword for data center infrastructure manufacturers. However, for suppliers of edge deployments, the power required and the location of deployment are the most important considerations for product design—not solely whether or not it is an edge deployment.”

Her analysis further explains that two factors, how harsh the environment is and how remote it is, dictate much about the product characteristics that will determine the selection of a specific product type or brand. And cooling comes into play as well. “Active cooling can be placed in an enclosure for high-density applications, like high-performance computing or in environments where the temperature is uncontrollable,” Shillington explains. “In addition to active cooling within the enclosures, the rPDU [rack power distribution unit] can be leveraged to provide environmental monitoring. These environmental monitors can measure the temperature and humidity outside the cabinet, which can be extremely important for dense power applications, which often push the limits of both.”
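
As a simple illustration of the environmental monitoring Shillington describes, the sketch below checks temperature and humidity readings against fixed thresholds. The ranges are loosely based on common data center guidance (roughly 18 to 27 degrees C) and are assumptions here, not any vendor’s defaults.

```python
# Illustrative environmental check for an edge enclosure.
# Thresholds are assumptions loosely based on common guidance.
TEMP_RANGE_C = (18.0, 27.0)
HUMIDITY_RANGE_PCT = (20.0, 60.0)

def check_reading(temp_c: float, rh_pct: float) -> list[str]:
    """Return alert strings for any out-of-range condition."""
    alerts = []
    if not TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]:
        alerts.append(f"temperature {temp_c} C out of range")
    if not HUMIDITY_RANGE_PCT[0] <= rh_pct <= HUMIDITY_RANGE_PCT[1]:
        alerts.append(f"humidity {rh_pct}% out of range")
    return alerts

# Example: a reading from a sensor mounted outside the cabinet.
print(check_reading(31.5, 55.0))  # ['temperature 31.5 C out of range']
```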

In her reporting on edge computing, IHS Markit’s Shillington also discusses newly introduced products, technologies, and systems that are noteworthy. In her most recent reporting, Shillington referenced Vapor IO, which she explains “believes that while software is vital for edge applications, the related hardware is equally important.” The company’s Vapor Chamber “encapsulates this dual focus by creating an edge infrastructure solution that is empowered by the software,” Shillington says. “The Vapor Chamber can contain up to six individually accessible wedges to keep users’ IT equipment secure from others being able to access it.”

Each wedge in the cylindrical, 7-foot-high, 9-foot-wide chamber can accommodate 36 rack units of equipment and as much as 25 kW of IT load.
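
Those per-wedge figures scale straightforwardly. A quick back-of-the-envelope check, assuming all six wedges are populated, lines up with the roughly 150 kW the companies cite for the joint solution described next:

```python
# Capacity arithmetic for a fully populated Vapor Chamber,
# using the per-wedge figures above (full population is assumed).
WEDGES = 6
RU_PER_WEDGE = 36
KW_PER_WEDGE = 25

print(WEDGES * RU_PER_WEDGE, "rack units")  # 216 rack units
print(WEDGES * KW_PER_WEDGE, "kW IT load")  # 150 kW
```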

In May, Vapor IO and BasX Solutions announced a cooling system they say was custom-developed for edge environments. “By integrating BasX’s patent-pending hyperscale-grade cooling system with Vapor IO’s Vapor Edge Module, the companies are able to deliver the smallest-footprint multi-tenant micro data center with more than 150 kW of critical IT load in edge environments,” they said. “The solution provides all the benefits of free and economized cooling, which is today’s gold standard, with none of the drawbacks associated with outside air economization or evaporative water consumption.”

They said the first of these modules would be deployed in the third quarter of this calendar year, and they expected to be able to deploy them at scale by the fourth quarter.

“BasX has specially adapted their hyperscale cooling technology to the Vapor Edge Module, reconfiguring it for a small footprint and remote operation in rugged edge locations,” commented Cole Crawford, Vapor IO’s founder and chief executive officer.

The companies said that when developing this system for edge facilities, they designed according to the following realities about edge computing.

  • Edge locations lack access to an on-site water supply, making some water-consuming cooling options untenable.
  • Because edge data centers must exist in a small footprint, a fully integrated rather than bolted-on cooling system is advantageous.
  • Component-level redundancy is important because of many edge facilities’ hard-to-reach locations and lights-out status.
  • Despite all the above, edge facilities still must be capable of powering and cooling substantial amounts of IT equipment.

“We had to uniquely configure our cooling system to perform in that compact space and create safe and exact air-handling conditions in frequently rugged conditions,” said Matt Tobolski, BasX president and co-founder.

Cooling systems and approaches, like data center facilities, are varied and often customized. Nonetheless, Bill Kleyman, director of technology solutions for EPAM Systems and a regular blogger for Upsite Technologies, recently wrote in an Upsite blog post, “Whether you’re working with a primary data center or an edge location, you need to take into consideration the way you’re delivering power and cooling to your critical systems. The good news is that vendors and partners see the increase concerning rack density and have capable systems which can meet the demands of the modern data center.

“This means working with better rack-ready cooling solutions, improving airflow within the data center, and even applying modular cooling solutions,” Kleyman advised.

Patrick McLaughlin is our chief editor.
