In-cabinet thermal management in edge computing environments

Jan. 29, 2018
Pointers from Chatsworth Products and Rittal on maintaining thermal efficiency inside cabinets in edge computing facilities.

By Patrick McLaughlin

The emergence and growth of edge computing create specific challenges for thermal management. Computing environments with few cabinets/enclosures may have different thermal requirements than larger server farms, but their requirements are no less critical. This article draws on recent blog posts from Chatsworth Products Inc. (CPI) and Rittal that offer recommendations for successful thermal management in edge computing environments.

In its post dated early November 2017, Rittal explains, “Edge computing has risen to the forefront of information management in a very short time. Edge computing houses data processing capability at or near the ‘edge’ of a network. The efficiency of this data management is well-documented. It is difficult to argue that accessing real-time data for analysis is not superior to later data retrieval. Latency of mission-critical data is virtually eliminated. The decreased use of bandwidth and the elimination of data bottleneck to the main data center or cloud enhances productivity and cost savings.”

Edge challenges

The company then provides the other side of that scenario: “As worthy as these benefits may be, IT will face new challenges and tasks in edge computing implementation.”

In a similar vein, CPI’s August 2017 blog post reminds us, “As the Internet of Things (IoT) continues to evolve and edge computing—which pushes applications, data, and computing services away from centralized data centers—becomes more common, managing assets and white space remotely becomes more challenging. A comprehensive and reliable solution that works together to simplify operation, costs and labor, as well as allows for network expansion, is key.”

Rittal adds, “Edge computing, by definition, exposes hardware to an environment that may be challenging—in footprint, ambient temperatures, contaminants, particulates, vibration or accessibility. Solutions abound for each of these concerns: micro data centers, NEMA-appropriate enclosures, thermal management and filtration systems, and shock-absorbing designs.”

Separately, the company notes, “Of all the concerns for an edge data center, cooling capacity consistently rates as a primary focus. Heat dissipation and the inherent heat problems in edge computing require modular climate control systems. Variables like temperature, humidity, the velocity and pressure of air flows, and the heat losses of the installed components are considered in development. An energy-efficient and advanced climate control and cooling concept for edge computing takes into account these variables.”

Rittal recommends that planners consider the following criteria and variants in the design phase.

  • What type of cooling system should be installed (water or refrigerant-based)?

  • Will the number of racks and enclosures require hot and cold aisles?

  • What average temperatures should be maintained in the racks?

  • What volumetric flow rate of cooled air is required? (See the airflow sketch after this list.)

  • What are the ambient conditions?

  • Which way will the airflow be directed?

  • Do load fluctuations exist and what impact do these have on the cooling response times?

  • Should the system be scalable for future expansion?
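
To make the volumetric-flow question above concrete, here is a minimal sketch of the underlying energy-balance estimate. It is an illustration only, not taken from Rittal's guidance; the constants for air density and specific heat, and the example 3-kW load with a 10 K temperature rise, are assumptions.

```python
# Minimal airflow-sizing sketch for a single enclosure.
# Assumptions (not from the article): air density 1.2 kg/m^3,
# specific heat 1005 J/(kg*K), and all IT load converted to heat.

AIR_DENSITY = 1.2         # kg/m^3 at roughly 20 degrees C, sea level
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def required_airflow_m3h(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric flow of cooling air (m^3/h) needed to remove
    heat_load_w watts with an air temperature rise of delta_t_k kelvin."""
    flow_m3s = heat_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)
    return flow_m3s * 3600

if __name__ == "__main__":
    # Example: 3 kW edge cabinet, 10 K allowable rise across the equipment.
    print(f"{required_airflow_m3h(3000, 10):.0f} m^3/h")  # ~896 m^3/h (about 527 CFM)
```

The same calculation also shows why load fluctuations matter: halving the allowable temperature rise doubles the airflow the cooling system must deliver.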

Furthermore, Rittal says, “Cooling the edge computing microcenter can be approached most effectively via a liquid cooling system, either inline-based, rack-based, or a combination of both. There are basically two heat transfer media with which systems can be operated: water and refrigerant. Water offers exceptional cooling properties, well-suited to the high heat output of an edge system. Refrigerant-based cooling is well-suited to small or medium edge enclosures, especially when a water supply is not readily available. Refrigerant cooling often operates on a smaller footprint, efficient in microcenters. In both cases, energy efficiency is a consideration.”
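
Water's “exceptional cooling properties” can be put in numbers: its high specific heat means a modest flow carries a large heat load. The sketch below is an illustration under assumed constants (water specific heat and density, a 10-kW load, a 6 K water-side delta-T), not Rittal's sizing method.

```python
# Sketch: chilled-water flow needed to absorb a rack's heat load.
# Assumed constants (not from the article): water cp 4186 J/(kg*K),
# density 1000 kg/m^3, and all heat transferred to the water loop.

WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)
WATER_DENSITY = 1000        # kg/m^3

def required_water_flow_lpm(heat_load_w: float, delta_t_k: float) -> float:
    """Water flow (liters/minute) needed to remove heat_load_w watts with a
    supply/return temperature difference of delta_t_k kelvin."""
    mass_flow_kgs = heat_load_w / (WATER_SPECIFIC_HEAT * delta_t_k)
    return mass_flow_kgs / WATER_DENSITY * 1000 * 60

if __name__ == "__main__":
    # Example: 10 kW in-row cooler with a 6 K water-side delta-T.
    print(f"{required_water_flow_lpm(10_000, 6):.1f} L/min")  # ~23.9 L/min
```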

Deploying an ecosystem

CPI recommends what it calls a “cabinet ecosystem” that addresses several edge-computing needs. The following bullet points are taken directly from CPI's recommendations.

  • Airflow management—A cabinet should be capable of supporting and protecting valuable IT equipment while reducing energy consumption and maximizing cooling efficiency.

  • Efficient power management—Boost operational efficiency by managing and monitoring power at the rack level and at the device level.

  • Environmental monitoring—Further enhance efficiency through environmental monitoring, which provides the ability to remotely monitor, record, and analyze environmental security and safety conditions. (See the sketch after this list.)

  • Access control—Extend security to the rack level for better control and record-keeping of access and assets.
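
To illustrate the environmental-monitoring item above, here is a minimal threshold-alert sketch. The sensor-reading function, thresholds, and polling loop are hypothetical placeholders, not CPI hardware or software; a real deployment would query actual cabinet sensors over SNMP or a vendor API.

```python
# Hypothetical environmental-monitoring sketch: poll cabinet sensors and
# flag readings outside configured thresholds. read_sensor() is a
# placeholder that returns simulated values.

import random
import time

THRESHOLDS = {
    "inlet_temp_c": (18.0, 27.0),   # ASHRAE-style recommended inlet range
    "humidity_pct": (20.0, 80.0),   # broad relative-humidity band (assumed)
}

def read_sensor(name: str) -> float:
    """Placeholder sensor read; returns a simulated value."""
    simulated = {"inlet_temp_c": random.uniform(16, 32),
                 "humidity_pct": random.uniform(15, 85)}
    return simulated[name]

def check_cabinet() -> list[str]:
    """Return alert messages for any out-of-range readings."""
    alerts = []
    for name, (low, high) in THRESHOLDS.items():
        value = read_sensor(name)
        if not low <= value <= high:
            alerts.append(f"{name}={value:.1f} outside {low}-{high}")
    return alerts

if __name__ == "__main__":
    for _ in range(3):               # a few polling cycles for demonstration
        for alert in check_cabinet():
            print("ALERT:", alert)
        time.sleep(1)
```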

CPI also recommends that data center administrators “consider an easy-to-use, centralized data center infrastructure management (DCIM) solution that can autodiscover all integrated manageable hardware within the cabinets.”
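
The “autodiscover” idea can be sketched in a few lines: probe a management subnet for devices that answer on a known management port. This is an illustration only; the subnet, port, and TCP-probe approach are assumptions, and a real DCIM tool would use richer protocols (SNMP, Redfish, vendor APIs) to identify and inventory hardware.

```python
# Hypothetical discovery sketch: find hosts on a management subnet that
# accept connections on a common management port. Only checks TCP
# reachability; it does not identify device type or vendor.

import ipaddress
import socket

MGMT_SUBNET = "192.168.10.0/28"  # assumed management subnet for the cabinet
MGMT_PORT = 443                  # assumed HTTPS management interface

def discover(subnet: str = MGMT_SUBNET, port: int = MGMT_PORT,
             timeout: float = 0.5) -> list[str]:
    """Return addresses in `subnet` that accept a TCP connection on `port`."""
    found = []
    for host in ipaddress.ip_network(subnet).hosts():
        try:
            with socket.create_connection((str(host), port), timeout=timeout):
                found.append(str(host))
        except OSError:
            pass  # host down, port closed, or filtered
    return found

if __name__ == "__main__":
    for address in discover():
        print("manageable device candidate:", address)
```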

Ashish Moondra, senior product manager for power, electronics, and software with CPI, comments, “A complete cabinet ecosystem approach that incorporates an integrated hardware solution, all networked through very few IP addresses and managed centrally through a plug-and-play software, is the key to simplified remote management.”

On-premises implications

In a post on cablinginstall.com, Steven Carlini, senior director of data center offer management and marketing with Schneider Electric, observed, “Over the last decade or so network closets have become more and more critical, because as companies move more and more of their in-house applications to cloud-based services, they count on the equipment housed in those network closets to keep employees connected. Power protection, battery backup, cooling, environmental monitoring, and remote management are all of paramount importance to ensure consistent, reliable access to business-critical cloud services. Even employees working remotely are often routed through in-house VPNs by equipment in the network closet.”

Carlini added, “Today we are seeing the start of a new wave of technology inhabiting these network closets. Edge servers are being installed for data acquisition and processing for IoT applications. Think security system monitoring with biometrics and facial recognition, which uses very high definition cameras and processes lots of data. Hyperconverged servers are being installed in these closets to run virtual desktop infrastructure applications, where 200 desktops or laptops can be replicated by a single 2U server.

“I predict we’ll soon start seeing nearly every company install redundant public cloud applications that companies will locate on-premise in these network closets,” he concluded. “So if your connection to the main centralized public clouds is lost or hindered by unmanageable latency, business as usual can still be a reality.”

Patrick McLaughlin is our chief editor.
