Autonomous micro data centers: Hyperscale extension infrastructure for the edge of the cloud
In a new corporate blog, R&M contends that data volume, mobility and the Internet of Things all require decentralized computing power as an extension to hyperscale data centers. The solution? Autonomous micro data centers.
Corporate blog -- R&M, the globally active developer and provider of cabling systems for high-quality network infrastructures, based in Wetzikon, Switzerland, is advocating an extended and differentiated view of cloud computing. The growth in private and business data traffic continues unabated. The Internet of Things, 5G and mobility also need to be taken into consideration: they are causing additional exponential growth in IP traffic and require ultra-low latency even in remote places. "We don't expect the hyperscale data centers that are coming into existence today to be able to fully cover network, computing and storage requirements in a few years," says R&M CMO Andreas Rüsseler. This is why new computing power at the edge of the network will have to extend and support the large, central data centers. For R&M, this is a natural consequence of data growth.
"Providers and network operators, cities and utilities, industry, the media and transport companies can set up the necessary infrastructures for the periphery in good time, provided they start doing so now," continues Rüsseler. R&M sees a wide-scale increase in FO cabling and the installation of decentralized micro data centers as being part and parcel of the necessary infrastructure. R&M describes micro data centers as an "autonomous, automatable and sturdy solution which has to be powerful enough to assume a leading role in the cloud." R&M presented a visionary, fully automated model at Data Centre World 2018 in London.
The entire highway becomes a data center
A striking application example for edge computing is future road traffic. The scenario is based on information from the German Fraunhofer research institute. For cars to be fully automated and safe, they would have to be able to react within 0.1 milliseconds. The exchange of information with the environment, with antennas, sensors and other vehicles, would effectively have to take place at the speed of light. Along with the future 5G services, that requires an FO network along the roadside. There would have to be servers or micro data centers on the roads or at base stations every 15 kilometers to guarantee virtually latency-free interaction and to process the most important data on site. Exchanging data with remote cloud data centers, with their typical 1 to 2 milliseconds of latency, would be too slow to control traffic and prevent accidents. "The cloud could compile, analyze and store all traffic data that is not critical in terms of time in the background. But outside at the edge, there is zero tolerance for latency and a need for unconditional availability," says Andreas Rüsseler.
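The 15-kilometer spacing can be checked against the 0.1-millisecond budget with a back-of-the-envelope calculation. A minimal sketch, assuming light propagates through glass fiber at roughly 200,000 km/s (about two thirds of the speed of light in vacuum) and ignoring processing and serialization overhead; neither figure comes from the article itself:

```python
# Propagation-delay check for the roadside edge scenario.
# Assumption (not from the article): ~200,000 km/s signal speed in fiber,
# processing overhead ignored.

FIBER_SPEED_KM_S = 200_000  # approximate propagation speed in glass fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip fiber delay in milliseconds for a one-way distance in km."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

# With micro data centers every 15 km, a vehicle is at most 7.5 km
# from the nearest one:
edge_rtt = round_trip_ms(7.5)     # ~0.075 ms, within the 0.1 ms budget

# A remote cloud data center, say 200 km away, blows the budget on
# propagation delay alone, before any queuing or processing:
cloud_rtt = round_trip_ms(200.0)  # ~2.0 ms
```

Propagation delay alone thus explains both numbers in the scenario: the 15 km grid keeps the round trip under 0.1 ms, while the 1 to 2 ms figure for remote clouds corresponds to distances of a few hundred kilometers.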
Latency, hyper-interactivity and decentralized intelligence all play a role in many other applications in the digitalized world: for example, industrial manufacturing, Industrial Ethernet and robotics, 5G and video communication, smart grids, the Internet of Things (IoT) with billions of end devices, as well as blockchain, AI and AR applications. Andreas Rüsseler is convinced that edge computing can support all these tasks. It shortens the path from the acquisition and collection of data to its analysis and the feedback of intelligence to the networks.
A further driving force for the creation of edge infrastructures is the sheer mass of data. Here too, road traffic is a good example. According to Data Economy Magazine, a "connected car" can already send 25 gigabytes of data to the cloud every hour. Tomorrow, autonomous cars will be generating ten times as much data. "To date such quantities simply have not been conceivable. And the data will initially have to be processed close to where it occurs, on the roadside. And here too we are going to need edge data centers. The entire highway will become a data center," says Andreas Rüsseler.
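The per-car figures quoted above can be turned into a rough sizing estimate. A minimal sketch; the 1,000-vehicle highway segment is a hypothetical assumption for illustration, not a number from the article:

```python
# Rough data-volume estimate from the figures quoted above.
# The 1,000-vehicle segment is a hypothetical assumption.

GB_PER_CAR_HOUR_TODAY = 25                            # connected car, per Data Economy Magazine
GB_PER_CAR_HOUR_FUTURE = 10 * GB_PER_CAR_HOUR_TODAY   # autonomous car, 10x as much

cars_per_segment = 1_000  # hypothetical busy highway segment

today_tb = cars_per_segment * GB_PER_CAR_HOUR_TODAY / 1_000    # 25 TB per hour
future_tb = cars_per_segment * GB_PER_CAR_HOUR_FUTURE / 1_000  # 250 TB per hour

# Sustained throughput needed just to ship the future volume upstream:
# 250 TB/hour is roughly 556 Gbit/s of continuous traffic.
future_gbit_s = future_tb * 8_000 / 3_600
```

At hundreds of gigabits per second of sustained traffic from a single stretch of road, hauling everything to a distant cloud becomes impractical, which is the article's case for processing most of it at the roadside.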
Just like road traffic, industry, tourism, trade, the health sector, financial services, consumers, 5G services and the IoT are increasingly generating and using more data. Andreas Rüsseler: "Virtually everything we do on a daily basis generates data. This data has to be transported, saved, called up and evaluated … Overall, an amount is being created which in a few years' time will only be able to be managed by spreading the load." The centralized cloud concept with relatively long and expensive transmission paths can be extended at the periphery with edge data centers. In this context, R&M refers to an analysis by Gartner and Maverick Research from 2017 which predicts that the success of digital business will be decided at the edge, not in the cloud.
Requirements at the edge and in micro data centers
The traffic example gives a first indication of the demands that will have to be made of the edge and of micro data centers and their locations. Andreas Rüsseler: "The conditions out there are pretty rough at times." To minimize risks, application sites will have to be chosen carefully. Edge solutions should be as robust and maintenance-free as possible. They should be able to run independently without specialist personnel. But there are still going to have to be safe rooms or containers which protect micro data centers from manipulation, environmental influences and electromagnetic loads.
"Installation and operation at the edge will have to be made as simple as possible. The plug & play principle applies to connectivity and IT. Anyone who wants to bring the cloud, and thus the business revolution, to remote locations today does not want to have to think about planning data centers in the old way," explains R&M Data Center Market Manager Dr. Thomas Wellinger.
Further criteria from R&M's point of view:
- Micro data centers will have to connect directly to FO or broadband networks everywhere. The connection technology should be self-explanatory.
- Cabling and IT should be delivered with the maximum density, standardized, complete, and configured to order.
- Climate-resistant, closed and shielded design.
- Integrated cooling, sound insulation, UPS, access control, remote monitoring.
- Automated Infrastructure Management (AIM) to remotely monitor and document cabling and IT assets.
- Cabinets and containers should be able to be linked and stacked so the infrastructure can be scaled as required.
Paradigm shift in network planning
"The edge trend is leading to a paradigm shift in the way we are going to have to design, provide and monitor networks," adds R&M Data Center Market Manager Dr. Thomas Wellinger. For example, specific security, connectivity and bandwidth requirements will have to be taken into consideration. Infrastructures will have to be designed to be able to spread computing power on a wide scale and support software defined WAN. The infrastructure providers will have to adapt their business models. Andreas Rüsseler explains: "There cannot be any bottlenecks between the edge and the hyperscale data center."
R&M predicts that the base stations of cellular network providers will be particularly suitable as sites for edge data centers: with the introduction of 5G technology, mobile communication antennas will become gateways for enormous amounts of data. Hubs or gateway exchanges of cable and telecommunication networks, public utility stations, wind power plants and solar parks, railway stations and highway service areas, factory premises and large-scale commercial buildings are also a possibility. "Lots of companies could replace older enterprise data centers with powerful edge data centers. They can be used for a range of functions – for private and hybrid cloud, as a resource for external users or even as heating," is one of R&M's suggestions.
R&M also sees further benefits in decentralized mini or micro data centers and secure connectivity at the edge of the cloud:
- They reliably connect IoT devices on short links and can easily be scaled when local IoT networks grow.
- They can be the backbone of a smart city infrastructure.
- They can replicate cloud services and business-critical processes on site and buffer bandwidth-intensive applications such as mobile HD video.
- If cloud connections falter or fail altogether, the networks, servers, storage systems and devices at the edge continue to work.
- Edge data centers can form geo-redundant groups if they are sufficiently networked and thus promote the security and availability of the cloud and even stave off attacks.
Andreas Rüsseler sums up by emphasizing: "The exponential growth of data from the various applications and devices which can be found everywhere is forcing us to rethink today's network structures. Weak subnetworks can slow down the entire communication chain." Bottlenecks in network interfaces, transmission and computing capacities must be avoided at all levels to guarantee a smooth flow of data traffic. "The separation between the subnetworks from the cloud data center through fixed and mobile communication services to structured cabling is gradually disappearing. The networks are ultimately going to merge," says Andreas Rüsseler. He refers to new kinds of FO cabling solutions which help solve bottlenecks and create network structures for the future. These solutions for data centers, WAN, backhaul, access networks, FTTH and structured cabling are extremely scalable. They are easy to install, maintain and extend, and also offer the opportunity to automate network monitoring.
Hyperscale and edge are complementary
Alongside the trend toward edge computing, the hyperscale segment will continue to develop dynamically in the current financial year according to R&M observations. Andreas Rüsseler once again stresses: "Edge computing and the central cloud infrastructures can only be developed together. They have to complement one another."
Digital business models such as virtualization and software-defined data centers require immense, adaptable computing resources in the cloud, storage capacities and high transmission speeds. These market requirements can be served long term, flexibly, innovatively and in a failsafe manner with the hyperscale data center concept. Hyperscale data centers are currently emerging mainly at strategically important locations and Internet nodes, or at places with an inexpensive energy supply.
The concept is based on hyperconvergence as well as on massively and freely scalable, high-density architectures. There has to be a mass of optical fibers and connectivity from the beginning to avoid bottlenecks. High-count fiber cables with more than 4,000 fibers are typically needed to cover the connectivity requirement. With the Building Entrance Facility BEF-60, launched in 2017, R&M has come up with an innovation for this task. In terms of capacity, the cabinet offers 23,040 splices over 60 drawers. The BEF-60 serves as an entry point for hyperscale data centers with scalable spine-and-leaf topologies.
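The BEF-60 figures quoted above can be cross-checked with simple arithmetic; a minimal sketch using only the numbers from the text:

```python
# Capacity check on the BEF-60 figures quoted above.
total_splices = 23_040
drawers = 60
splices_per_drawer = total_splices // drawers  # 384 splices per drawer

# A high-count trunk cable carries more than 4,000 fibers; one BEF-60
# can therefore splice through roughly five such cables.
fibers_per_cable = 4_000
cables_supported = total_splices // fibers_per_cable  # 5
```

So a single entrance cabinet absorbs several of the high-count trunk cables the text mentions, which is what makes it viable as the sole fiber entry point for a spine-and-leaf fabric.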
With the Ultra High Density platform Netscale, R&M is also setting a standard for the density of fiber optic ports within data centers. With up to 120 ports per height unit in the 19’’ rack, this patch panel achieves the highest density of all familiar products. Modular switches with a high port density are important components for creating cloud infrastructures in hyperscale data centers. R&M has developed the Netscale Blade Cabling Manager (BCM) for the efficient and correct cabling of vertical and horizontal modular switches. It routes cables directly from the switch ports to the patch panel ports below, above or beside the switch. This means separate cable management in the cabinet can be dispensed with.
Hyperscale data centers accommodate hundreds of thousands of fiber-optic connections in sensitive operating environments. These quantities can no longer be managed in a traditional way. They have to be monitored fully automatically to guarantee operational reliability. One solution for this is the intelligent infrastructure management system R&MinteliPhy. It consists of sensor strips for monitoring connections, an appliance for concentrating and transferring signals, and a server with administration, monitoring and planning functions. R&MinteliPhy can be used for control and asset management, for mapping a digital twin of the infrastructure, and for analyses, audits, 3D visualization, DCIM and the development of risk scenarios. It thus supports not only technical management, but also compliance and cost management.