Photo: Panduit's pre-configured IDF line
By Dr. RICK PIMPINELLA, Panduit Connections Blog -- 2018 was a spectacular year for change in the data center environment. While researching my new paper, ‘Light into Money – The Future of Fibre Optics in Data Centre Networks’, I saw several "bubbling under" technologies break through and provide the impetus for some radical new cloud environments. To wit:
- Edge Computing – less edgy, more mainstream – Leading businesses and organizations are investing heavily in technology that will drive both the growth of centralized cloud data center services and the requirement for a whole new breed of Edge data centers that place compute capability where it is needed. Placing analysis and response processing close to the source allows data users to optimize response times. The Edge drives efficient bandwidth utilization and minimizes the connections and physical reach (distance) that introduce latency into the infrastructure. Together with other data growth areas, Edge computing applications will generate petabytes of data daily by 2020. Systems that intelligently process data to create business advantage will be essential to our customers’ future prosperity.
- Hyperscale data center investment – Efficiency gained on the coat-tails of giants – Industry titans Google, Amazon, Microsoft, Facebook and Apple, along with Asian public cloud players Alibaba and Tencent, are investing heavily not only in new facilities but in the technology platforms that enable ever-faster data transport and processing. The global hyperscale data center market is expected to grow from $25.08 billion in 2017 to $80.65 billion by 2022. Established businesses competing with the web-scale firms cannot afford to be constricted by legacy technologies; to remain competitive, they must build new platforms and invest in next-generation Internet Protocol (IP) infrastructure.
- Solid State Storage – No Flash in the pan – Flash storage is replacing disk drives across the industry in high-performance compute environments. Flash technology aligns with the demand for the higher bandwidth and lower latency required by big data workloads. As our customers’ data volumes increase, new access and storage techniques such as Serial Storage Architecture (SSA) can help eliminate data bottlenecks in the data center and in Edge environments. Flash offers a more efficient cabinet and rack footprint and far greater power efficiency than disk drives. As the requirement for storage space multiplies, this is a significant advantage.
- Artificial Intelligence (AI) – Disruption driving growth – AI together with Machine Learning (ML) requires machine-to-machine communications at network speeds and data volumes that have serious implications for network topologies and connectivity. One example is the Ethernet switch market, which has seen incredible growth in shipments of 25 and 100 Gigabit Ethernet (GE) ports. These and newer, higher-speed Ethernet ports will be essential to the growth of AI and ML applications, as the volumes of data required are at the petabyte scale. We are working with partners on high-speed, high-quality infrastructure and the next-generation topologies needed to support this data volume growth. Read more on this subject in the aforementioned white paper, Light into Money.
- Converged technology – Simplify to clarify – To build more efficient data centers, there is broad agreement that simplified designs on flexible infrastructure platforms are required to achieve more agile organizations. We are witnessing increased automation, more integrated solutions, and software-defined capabilities that reduce the reliance on siloed systems. This allows users to take advantage of highly flexible infrastructures to drive more capacity, monitoring, and analysis, and to increase efficiency within the data center. Converged and hyper-converged infrastructure take advantage of many of the technologies discussed above to build the future cloud.
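The latency penalty of distance, noted in the Edge computing item above, can be put in rough numbers: light in silica fiber travels at about c divided by the fiber's refractive index, so every kilometer of reach adds roughly 5 microseconds of one-way propagation delay before any switching or processing overhead. A minimal sketch (the 1.468 refractive index and the 100 km example hop are illustrative assumptions, not figures from this article):

```python
# Rough one-way fiber propagation delay: delay = distance / (c / n)
C_VACUUM_M_PER_S = 299_792_458      # speed of light in vacuum
FIBER_REFRACTIVE_INDEX = 1.468      # typical silica fiber value (assumption)

def propagation_delay_us(distance_km: float) -> float:
    """One-way propagation delay in microseconds over silica fiber."""
    speed_m_per_s = C_VACUUM_M_PER_S / FIBER_REFRACTIVE_INDEX  # ~2.04e8 m/s
    return distance_km * 1_000 / speed_m_per_s * 1e6

per_km = propagation_delay_us(1)        # ~4.9 us per km of fiber
metro_hop = propagation_delay_us(100)   # ~490 us for a 100 km metro hop
print(f"{per_km:.2f} us/km, {metro_hop:.0f} us for 100 km")
```

This is exactly the delay that moving compute to the Edge avoids: it is physics, not equipment, so no faster switch can remove it.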
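The hyperscale market figures quoted above ($25.08 billion in 2017 to $80.65 billion by 2022) imply a compound annual growth rate of roughly 26 percent, which a few lines of arithmetic confirm:

```python
# Implied CAGR from the hyperscale market figures quoted above:
# $25.08B (2017) growing to $80.65B (2022), i.e. over 5 years.
start_billion, end_billion = 25.08, 80.65
years = 2022 - 2017

cagr = (end_billion / start_billion) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 26% per year
```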
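The petabyte-scale data volumes behind the AI and ML item above are easy to make concrete: even a fully utilized 100 GE link needs the better part of a day to move a single petabyte, which is why port-speed growth matters so much for machine-to-machine traffic. A back-of-envelope sketch (the petabyte and 100 GE figures come from the text above; full line-rate utilization with no protocol overhead is an idealizing assumption):

```python
# Time to move 1 PB over a single, fully utilized 100 Gigabit Ethernet link.
PETABYTE_BITS = 1e15 * 8   # 1 PB = 10^15 bytes = 8e15 bits
LINK_RATE_BPS = 100e9      # 100 GE line rate, overhead ignored (assumption)

transfer_seconds = PETABYTE_BITS / LINK_RATE_BPS
print(f"{transfer_seconds / 3600:.1f} hours per petabyte")
```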
Understanding how leaders in the market are moving forward provides stepping stones for all of us to develop our platforms and data centers to take advantage of new developments. However, we must not follow blindly; it is essential that our designs deliver the most effective and efficient solution for our needs, and we can only do this when we step out of the silo and view the wider opportunities.
Dr. RICK PIMPINELLA received his B.S., M.S., and Ph.D. degrees in Physics from New York University, Tandon School of Engineering in 1976, 1978, and 1981 respectively. From 1981 to 2001 he was employed at Bell Laboratories, where he pioneered the use of silicon processing technology for the fabrication of optical subassemblies for the passive alignment of optical fibers to photo-detectors and lasers. In 1994, as a Distinguished Member of Technical Staff, he designed and developed a micro-miniature expanded beam optical backplane connector, which was deployed in the U.S. Air Force F-22 Raptor and F-35 Lightning II. In 1998, as Technical Manager, he led the design and development of intelligent remote fiber test systems for monitoring and testing the outside optical fiber cable plant. He joined Panduit in 2002, managing the Fiber Business Unit and later creating and managing the Fiber Research Department. Dr. Pimpinella is a Panduit Fellow, a member of the IEEE, and is actively pursuing research interests in multimode and single-mode optical fiber. He has published over 50 technical papers and articles, and holds more than 60 U.S. Patents.