By Andreas Sila
The next wave of technological innovation is already here, with new applications emerging daily that change the way we live, work and travel. Widespread adoption of these new services is driving exponential growth in the demand for data, leading data center operators to re-evaluate how networks and data centers are deployed. With 5G on the horizon enabling a host of new applications, greater network capacity and faster speeds are becoming ever more crucial. This is pushing networks closer to the edge, and deploying larger numbers of smaller, distributed data centers appears to be the most practical and logical solution.
According to Gartner, there will be more than 25 billion devices located at the edge by 2021—and this scale of adoption is growing at an unprecedented rate. Coupled with the emergence of next-generation technologies such as virtual reality (VR), augmented reality (AR), autonomous driving, machine-to-machine (M2M) communication, smart cities and smart factories, this is placing more demand on networks to deliver greater speed and bandwidth capabilities.
This, in turn, is making data center operators’ jobs more complex. It used to be simple: a single data center location could handle all data needs. The rise of new technology and the IoT era, however, has brought a fundamental change. The ever-increasing number of devices generates a vast amount of data from things, people and machines, and this data needs to be processed as quickly as possible. Faced with constraints such as bandwidth, latency, privacy and autonomy requirements, data center operators need to find ways to remove these boundaries and keep up with growing demand.
Bringing data closer to the edge
To keep pace with the exponential increase in connected devices, the deployment of networks and data centers must match the scale of growth. As a more connected society propels forward, it is critical for networks to deliver high capacity and speeds alongside real-time data transference. To achieve this, the network needs both centralized large-scale data centers and decentralized micro data centers located much closer to end-user devices.
These micro data centers, also known as edge data centers, address the limitations of centralized computing (such as latency and bandwidth) by moving processing closer to where data is generated: things and users. Edge computing augments and expands the possibilities of today’s primarily centralized hyperscale cloud model, supporting the systematic evolution and deployment of the IoT and the new applications it enables.
Edge data centers facilitate reliable connectivity across billions of connected devices using the same network. Moving data processing closer to the edge reduces the distance the data must travel, strengthening network performance and reducing latency. It also brings computing power to the edge, so data can be processed or stored closer to where it must be delivered, offloading the cloud data center and making the network more efficient overall. Network performance is vital for new applications that require rapid response times and low latency to operate.
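The physics behind this argument can be illustrated with a back-of-the-envelope calculation. The sketch below (not from the article; the distances are hypothetical) estimates one-way propagation delay over optical fiber, where signals travel at roughly two-thirds the speed of light in a vacuum, about 200,000 km/s:

```python
# Illustrative calculation: one-way propagation delay over optical fiber,
# assuming a signal speed of ~200,000 km/s (roughly 2/3 the speed of light).

FIBER_SPEED_KM_PER_S = 200_000  # approximate signal speed in fiber

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds for a given fiber run."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1000

# Hypothetical distances: a distant centralized site vs. a nearby edge site.
for label, km in [("centralized (1,000 km)", 1000), ("edge (10 km)", 10)]:
    print(f"{label}: {propagation_delay_ms(km):.2f} ms one-way")
# → centralized (1,000 km): 5.00 ms one-way
# → edge (10 km): 0.05 ms one-way
```

Even before queuing and processing delays are counted, shortening the physical path cuts the latency floor by orders of magnitude, which is why proximity matters for low-latency applications.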
For enterprises to capitalize on the data and business insights of IoT, analytics will need to move closer to the edge for near-real-time feedback, business process optimization and to avoid the costly impacts of latency and insufficient bandwidth. Deployed at scale, edge data centers create strong network connectivity and take many forms, from small clusters of edge cloud resources housed in street infrastructure to containerized plug-and-play data centers placed wherever needed.
Small but mighty
For edge data centers to live up to their potential, a number of specific requirements must be taken into consideration, as they directly affect design, size, cost and location. As a result, there are many different types of edge data centers on the market, each meeting individual network needs (for example, network slicing), operating under different requirements and bringing unique challenges that affect network performance. This makes case-by-case consideration essential.

Edge data center location is critical to delivering high-quality, ultra-low-latency network performance. To operate at maximum performance, edge data centers must be placed in close proximity to the end user; the latency requirements of the end user, application or service they support will determine the most suitable location.
With only limited space available, edge data centers must be versatile and designed with size and scalability in mind. Consisting of the same infrastructure as that found in traditional data centers but on a much smaller scale, edge data centers rely on an efficient cable management system to protect, route and manage cables to enable various density requirements and maximum network capacity and performance.
These cabling solutions must provide a comprehensive backbone within limited space while allowing for future upgrades.
Because edge data centers are operated remotely and deployed across a wide area, each one also requires additional sensors and monitoring systems, with the next closest edge data center able to provide redundancy. Automated systems and software-defined networking are crucial to operating a broad network of edge data centers, managing and monitoring connections, power and temperature levels for consistently reliable network performance.
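The monitoring-and-redundancy pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; all names, thresholds and the one-dimensional "corridor" model of site locations are hypothetical. An automated system polls each site's sensor readings and, when a site falls out of range, redirects its load to the nearest healthy neighbor:

```python
# Minimal sketch (all names and thresholds hypothetical): automated health
# checks for remote edge sites, with failover to the nearest healthy site.

from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    location_km: float      # position along a simplified 1-D service corridor
    temperature_c: float    # latest temperature sensor reading
    power_ok: bool          # power-feed status

    def healthy(self, max_temp_c: float = 35.0) -> bool:
        # A site is healthy if power is up and temperature is within range.
        return self.power_ok and self.temperature_c <= max_temp_c

def failover_target(sites, failed):
    """Pick the closest healthy site to take over for a failed one."""
    candidates = [s for s in sites if s is not failed and s.healthy()]
    if not candidates:
        return None
    return min(candidates, key=lambda s: abs(s.location_km - failed.location_km))

sites = [
    EdgeSite("edge-a", 0.0, 24.0, True),
    EdgeSite("edge-b", 5.0, 41.0, True),   # overheating
    EdgeSite("edge-c", 8.0, 26.0, True),
]
hot = sites[1]
if not hot.healthy():
    target = failover_target(sites, hot)
    print(f"{hot.name} unhealthy; redirecting to {target.name}")
# → edge-b unhealthy; redirecting to edge-c
```

In practice this logic would sit inside a software-defined networking controller and weigh link capacity and latency, not just distance, but the core loop of polling sensors and rerouting around an unhealthy site is the same.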
Setting the standard for edge
As technology advances and new applications emerge, more challenges will arise. Careful planning of edge data centers, from design and location to building types and natural hazards, is imperative, and developing standards so that this distributed infrastructure works together is critical.
According to Bell Labs, 60% of servers will be placed in edge data centers by 2025. With this in mind, standardization bodies such as the Telecommunications Industry Association (TIA) are working on defining the differences between traditional data centers and edge data centers.
The results of these efforts will be the creation of industry standards in relation to availability, power, cooling, physical security and critical cabling systems.
Preparing for the unknown
To deliver the consistent, reliable network performance needed to keep up with the ever-increasing number of connected devices, a shift in how data centers are deployed is required. Traditional data centers and edge data centers are complementary rather than competitive or mutually exclusive; enterprises that use them together will benefit from solutions that maximize the strengths of both centralized and decentralized models.
There will be a significant movement toward large hyperscale cloud data centers, and the number of traditional, self-owned and self-operated enterprise data centers will decrease. Alongside this, edge data centers are expected to see large-scale deployment in the near future to match the high-bandwidth, low-latency demands of new applications and the billions of new connected IoT devices to come.
Andreas Sila is data center market manager at Huber + Suhner.