In-cabinet thermal management in edge computing environments
Pointers from Chatsworth Products and Rittal on maintaining thermal efficiency inside cabinets in edge computing facilities.
Providers of cable management systems offer resources and systems for edge computing applications.
By Patrick McLaughlin
The emergence and growth of edge computing creates specific challenges for thermal management. Computing environments with few cabinets/enclosures may have different thermal requirements than larger server farms, but their requirements are no less critical. This article cites recent blog posts from Chatsworth Products Inc. (CPI) and Rittal intended to convey recommendations for successful thermal management in edge computing environments.
In its post dated early November 2017, Rittal explains, “Edge computing has risen to the forefront of information management in a very short time. Edge computing houses data processing capability at or near the ‘edge’ of a network. The efficiency of this data management is well-documented. It is difficult to argue that later data retrieval is superior to accessing real-time data for analysis. Latency of mission-critical data is virtually eliminated. The decreased use of bandwidth and the elimination of data bottleneck to the main data center or cloud enhances productivity and cost savings.”
The company then provides the other side of that scenario: “As worthy as these benefits may be, IT will face new challenges and tasks in edge computing implementation.”
In a similar vein, CPI’s August 2017 blog post reminds us, “As the Internet of Things (IoT) continues to evolve and edge computing—which pushes applications, data, and computing services away from centralized data centers—becomes more common, managing assets and white space remotely becomes more challenging. A comprehensive and reliable solution that works together to simplify operation, costs and labor, as well as allows for network expansion, is key.”
Rittal adds, “Edge computing, by definition, exposes hardware to an environment that may be challenging—in footprint, ambient temperatures, contaminants, particulates, vibration or accessibility. Solutions abound for each of these concerns: micro data centers, NEMA-appropriate enclosures, thermal management and filtration systems, and shock-absorbing designs.”
Separately, the company notes, “Of all the concerns for an edge data center, cooling capacity consistently rates as a primary focus. Heat dissipation and the inherent heat problems in edge computing require modular climate control systems. Variables like temperature, humidity, the velocity and pressure of air flows, and the heat losses of the installed components are considered in development. An energy-efficient and advanced climate control and cooling concept for edge computing takes into account these variables.”
Rittal recommends that planners consider the following criteria and variants in the design phase.
- What type of cooling system should be installed (water or refrigerant-based)?
- Will the number of racks and enclosures require hot and cold aisles?
- What average temperatures should be maintained in the racks?
- What volumetric flow rate of cooled air is required?
- What are the ambient conditions?
- Which way will the airflow be directed?
- Do load fluctuations exist and what impact do these have on the cooling response times?
- Should the system be scalable for future expansion?
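The airflow question in the checklist above can be made concrete with the standard sensible-heat relation: the volumetric flow rate needed to remove a given heat load at a given air-temperature rise is V = P / (ρ · cp · ΔT). A minimal sketch in Python, assuming typical values for dry air (density ≈ 1.2 kg/m³, specific heat ≈ 1005 J/(kg·K)); the function name and the example figures are illustrative, not from the source:

```python
def required_airflow_m3_per_s(heat_load_w, delta_t_k,
                              air_density=1.2, cp_air=1005.0):
    """Volumetric airflow (m^3/s) needed to remove heat_load_w watts
    of heat with an air-temperature rise of delta_t_k kelvin.
    Default air properties assume dry air near sea level."""
    return heat_load_w / (air_density * cp_air * delta_t_k)


# Example: a 3 kW edge enclosure with a 10 K allowable temperature rise
flow = required_airflow_m3_per_s(3000, 10)
print(f"{flow:.3f} m^3/s  ({flow * 3600:.0f} m^3/h)")
```

Halving the allowable temperature rise doubles the required airflow, which is why the checklist pairs the flow-rate question with the target rack temperatures.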
Furthermore, Rittal says, “Cooling the edge computing microcenter can be approached most effectively via a liquid cooling system, either inline-based, rack-based, or a combination of both. There are basically two heat transfer media with which systems can be operated: water and refrigerant. Water offers exceptional cooling properties, well-suited to the high heat output of an edge system. Refrigerant-based cooling is well-suited to small or medium edge enclosures, especially when a water supply is not readily available. Refrigerant cooling often operates on a smaller footprint, efficient in microcenters. In both cases, energy efficiency is a consideration.”
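Rittal's guidance above reduces to a simple decision rule: refrigerant where no water supply is available or the enclosure is small to medium, water for high heat output. A sketch of that rule in Python; the 10 kW cutoff between "medium" and "high" heat output is an illustrative assumption, not a figure from the source:

```python
def recommend_cooling_medium(heat_load_kw, water_supply_available):
    """Encode Rittal's water-vs-refrigerant guidance as a heuristic.
    The 10 kW threshold for 'high heat output' is an assumed value
    for illustration only."""
    if not water_supply_available:
        # Refrigerant systems don't need plumbing and fit small footprints.
        return "refrigerant"
    if heat_load_kw >= 10:
        # Water's cooling properties suit high heat output.
        return "water"
    return "refrigerant"


print(recommend_cooling_medium(15, water_supply_available=True))
```

In practice this choice would also weigh the energy-efficiency and footprint considerations Rittal mentions, not heat load alone.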
Deploying an ecosystem
CPI recommends what it calls a “cabinet ecosystem” that addresses several edge-computing solutions. The following bullet points are taken directly from CPI’s recommendations.
- Airflow management—A cabinet should be capable of supporting and protecting valuable IT equipment while reducing energy consumption and maximizing cooling efficiency.
- Efficient power management—Boost operational efficiency by managing and monitoring power at the rack level and at the device level.
- Environmental monitoring—Further enhance efficiency through environmental monitoring, which provides the ability to remotely monitor, record, and analyze environmental security and safety conditions.
- Access control—Extend security to the rack level for better control and record-keeping of access and assets.
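The environmental-monitoring element of CPI's cabinet ecosystem amounts to comparing remote sensor readings against allowable ranges and flagging excursions. A minimal sketch, assuming hypothetical metric names and threshold values (neither comes from CPI's post):

```python
# Illustrative allowable ranges per metric: (low, high).
# These values are assumptions, not vendor specifications.
THRESHOLDS = {
    "temp_c": (18.0, 27.0),       # rack inlet temperature, deg C
    "humidity_pct": (20.0, 80.0), # relative humidity, percent
}


def check_reading(metric, value, thresholds=THRESHOLDS):
    """Classify a sensor reading as 'low', 'high', or 'ok'
    against the configured range for its metric."""
    low, high = thresholds[metric]
    if value < low:
        return "low"
    if value > high:
        return "high"
    return "ok"


# Example: a hot-spot reading that would trigger a remote alert
print(check_reading("temp_c", 30.5))
```

A DCIM platform of the kind CPI describes would layer recording, trending, and alert delivery on top of checks like this one.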
CPI also recommends that data center administrators “consider an easy-to-use, centralized data center infrastructure management (DCIM) solution that can autodiscover all integrated manageable hardware within the cabinets.”
Ashish Moondra, senior product manager for power, electronics, and software with CPI, comments, “A complete cabinet ecosystem approach that incorporates an integrated hardware solution, all networked through very few IP addresses and managed centrally through a plug-and-play software, is the key to simplified remote management.”
In a post on cablinginstall.com, Steven Carlini, senior director of data center offer management and marketing with Schneider Electric, observed, “Over the last decade or so network closets have become more and more critical, because as companies move more and more of their in-house applications to cloud-based services, they count on the equipment housed in those network closets to keep employees connected. Power protection, battery backup, cooling, environmental monitoring, and remote management are all of paramount importance to ensure consistent, reliable access to business-critical cloud services. Even employees working remotely are often routed through in-house VPNs by equipment in the network closet.”
Carlini added, “Today we are seeing the start of a new wave of technology inhabiting these network closets. Edge servers are being installed for data acquisition and processing for IoT applications. Think security system monitoring with biometrics and facial recognition, which uses very high definition cameras and processes lots of data. Hyperconverged servers are being installed in these closets to run virtual desktop infrastructure applications, where 200 desktops or laptops can be replicated by a single 2U server.
“I predict we’ll soon start seeing nearly every company install redundant public cloud applications that companies will locate on-premise in these network closets,” he concluded. “So if your connection to the main centralized public clouds is lost or hindered by unmanageable latency, business as usual can still be a reality.”
Patrick McLaughlin is our chief editor.
The physical layer matters too
While this article focuses on thermal management within edge data centers, cabling professionals must, of course, be concerned with the network’s physical-layer infrastructure as well. In a recent blog post titled “What does it take to build an edge data center?” CommScope solutions architect Craig Culwell addresses setting up such a computing environment. “You might be surprised at its complexity,” he points out. “Like most projects involving sensitive electronic equipment, an edge data center requires a lot of preparation and planning to ensure its success.”
After addressing location, heating/cooling, and security, Culwell notes, “To deliver service that offers near-perfect reliability or better, you’ll want to deploy a number of redundancies to prevent unexpected downtime. These redundancies include equipment, backup power, and, if possible, multiple connections to high-speed networks. Plus, as faster and more-robust servers are being developed, you will need to build your data center infrastructure with an eye toward the future, including a clear migration plan for speeds up to 400 Gbits/sec.”—P.M.