Data center placement, FO network design key to more efficient cloud computing: Researcher

Feb. 14, 2017
An American researcher has developed a mathematical model that offers the potential to improve the flow of internet traffic generated by cloud computing by optimizing data center placement and utilizing distance-adaptive transmission technology. The scholarly work will be presented at the upcoming OFC 2017 tradeshow in Los Angeles.

Telecommunication experts estimate that the amount of data stored “in the cloud,” or in remote data centers around the world, will quintuple in the next five years. Whether it’s streaming video or businesses’ database content drawn from distant servers, all of this data is -- and for the foreseeable future will continue to be -- accessed and transmitted by lasers sending pulses of light along long bundles of flexible optical fiber.

Traditionally, the rate at which data is transmitted does not account for the distance it must travel, even though shorter distances can support higher rates. As traffic grows in volume and consumes more of the available bandwidth -- the capacity to transfer bits of data -- researchers have become increasingly aware of the limitations of this fixed-rate mode of transmission. New research from Nokia Bell Labs in Murray Hill, New Jersey, may offer a way to capitalize on this distance-rate relationship and improve data transfer rates for cloud computing traffic.

The results of this work will be presented at the Optical Fiber Communications Conference and Exhibition (OFC), held next month in Los Angeles, CA. “The challenge for legacy systems that rely on fixed-rate transmission is that they lack flexibility,” says Dr. Kyle Guan, a research scientist at Nokia Bell Labs. “At shorter distances, it is possible to transmit data at much higher rates, but fixed-rate systems lack the capability to take advantage of that opportunity.”

Guan says he worked with a newly emerged transmission technology called “distance-adaptive transmission,” in which the equipment that receives and transmits these light signals can change the rate of transmission depending on how far the data must travel. With this, he set about building a mathematical model to determine the optimal layout of network infrastructure for data transfer.
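The core idea can be sketched in a few lines of code. The following toy comparison is purely illustrative -- the reach table, link distances, and demands are invented, and this is not Guan's actual model -- but it shows why letting the rate adapt to distance saves equipment: a fixed-rate design must provision every transceiver at the rate the longest link can sustain, while a distance-adaptive design lets short links run faster.

```python
import math

# Hypothetical reach table (invented numbers): maximum per-wavelength
# rate in Gb/s that a link of a given distance can support. Shorter
# links tolerate denser modulation, hence higher rates.
def adaptive_rate_gbps(distance_km):
    if distance_km <= 600:
        return 400
    elif distance_km <= 1200:
        return 300
    elif distance_km <= 2500:
        return 200
    else:
        return 100  # long-haul links fall back to a robust, lower rate

# Example continental links as (distance_km, demand_gbps) -- invented.
links = [(300, 800), (900, 1200), (2000, 600), (4000, 400)]

# Fixed-rate design: every wavelength runs at the rate dictated by the
# longest (worst-case) link in the network.
fixed_rate = min(adaptive_rate_gbps(d) for d, _ in links)
fixed_wavelengths = sum(math.ceil(dem / fixed_rate) for _, dem in links)

# Distance-adaptive design: each link uses the highest rate its
# distance allows, so short links need far fewer wavelengths.
adaptive_wavelengths = sum(
    math.ceil(dem / adaptive_rate_gbps(d)) for d, dem in links
)

print(f"fixed-rate wavelengths: {fixed_wavelengths}")
print(f"distance-adaptive wavelengths: {adaptive_wavelengths}")
```

In this toy setup the adaptive design needs fewer than half the wavelengths of the fixed-rate design, in the same spirit as the roughly 50 percent savings Guan reports for his optimized continental-scale network.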

“The question that I wanted to answer was how to design a network that would allow for the most efficient flow of data traffic,” adds Guan. “Specifically, in a continent-wide system, what would be the most effective [set of] locations for data centers and how should bandwidth be apportioned? It quickly became apparent that my model would have to reflect not just the flow of traffic between data centers and end users, but also the flow of traffic between data centers.”

External industry research suggests that this second type of traffic, between the data centers, represents about one-third of total cloud traffic. It includes activities such as data backup and load balancing, whereby tasks are distributed across multiple servers to maximize application performance. After accounting for these factors, Guan ran simulations with his model of how data traffic would flow most effectively in a network.
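A heavily simplified sketch of the placement question looks like this. All node positions, demands, and cost weightings below are invented for illustration (Guan's actual model is far richer); the point is only that a good placement must weigh both user-to-data-center traffic and the inter-data-center traffic the article describes.

```python
from itertools import combinations

# Toy network: node positions along a line (km) and user demand at
# each node, in arbitrary traffic units -- all numbers invented.
nodes = {"A": 0, "B": 1000, "C": 2500, "D": 4000}
demand = {"A": 10, "B": 30, "C": 20, "D": 15}

def cost(centers):
    # User-to-data-center traffic: each node is served by the nearest
    # chosen data center, weighted by its demand.
    user_cost = sum(
        demand[n] * min(abs(nodes[n] - nodes[c]) for c in centers)
        for n in nodes
    )
    # Inter-data-center traffic (backup, load balancing): modeled here,
    # very crudely, as distance between every pair of chosen centers.
    dc_cost = sum(abs(nodes[a] - nodes[b]) for a, b in combinations(centers, 2))
    return user_cost + dc_cost

# Brute-force search over all two-data-center placements.
best = min(combinations(nodes, 2), key=cost)
print(f"best placement: {best}, cost: {cost(best)}")
```

Even in this tiny example, the winning placement is neither the two busiest nodes nor the two most central ones in isolation; it balances proximity to users against the cost of shuttling traffic between the centers themselves.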

“My preliminary results showed that in a continental-scale network with optimized data center placement and bandwidth allocation, distance-adaptive transmission can use 50 percent fewer wavelength resources, or light transmission and reception equipment, compared to fixed-rate transmission,” he explains. “On a functional level, this could allow cloud service providers to significantly increase the volume of traffic supported on the existing fiber-optic network with the same wavelength resources.”

Guan recognizes other important issues related to data center placement. “Other important factors that have to be considered include the proximity of data centers to renewable sources of energy that can power them, and latency -- the interval between when an end user or data center initiates an action and when a response is received,” he concludes. Guan adds that his future research will involve integrating these types of factors into his model so that he can run simulations that even more closely mirror the complexity of real-world conditions.

OFC 2017 will be held from March 19-23 at the Los Angeles Convention Center.
