By JIM THEODORAS, ADVA Optical Networking -- Now that "DCI" has become a "thing," a well-known acronym with a wide range of connotations, "data center interconnect" has gone from being unknown and somewhat misunderstood to overused and overhyped. But what exactly the term DCI constitutes is changing all the time. What it meant last month might not be what it means in a few weeks' time. With that in mind, let's try to take a snapshot of this rapidly evolving beast. Let's attempt to pin down what DCI currently is and where it's headed...
...At the risk of oversimplification, DCI merely involves interconnecting the routers/switches housed in data centers over distances longer than client optics can reach and at spectral densities greater than a single grey client per fiber pair (see figure below). Traditionally this is achieved by connecting those clients to a WDM system that transponds, multiplexes, and amplifies their signals. However, at data rates of 100 Gbps and above, a performance/price gap has opened, and it continues to grow as data rates climb.
The 100-Gbps-and-above club is currently served by coherent optics that can achieve distances over 4,000 km with over 256 signals per fiber pair. The best a 100G client can achieve is 40 km, with a signal count of one. The huge performance gap between the two is forcing data center operators (DCOs) to use coherent optics for any and all connectivity.
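To put that gap in rough numbers, here is a back-of-the-envelope comparison using only the round figures quoted above (illustrative values, not vendor specifications):

```python
# Back-of-the-envelope comparison of per-fiber-pair capacity and reach,
# using the round numbers from the text (illustrative, not vendor specs).

def fiber_capacity_gbps(signals_per_pair: int, rate_gbps: int) -> int:
    """Aggregate capacity carried on one fiber pair."""
    return signals_per_pair * rate_gbps

coherent = {"reach_km": 4000, "signals": 256, "rate_gbps": 100}
grey_client = {"reach_km": 40, "signals": 1, "rate_gbps": 100}

coh_cap = fiber_capacity_gbps(coherent["signals"], coherent["rate_gbps"])
grey_cap = fiber_capacity_gbps(grey_client["signals"], grey_client["rate_gbps"])

print(f"Coherent:    {coh_cap:,} Gbps over {coherent['reach_km']} km")
print(f"Grey client: {grey_cap:,} Gbps over {grey_client['reach_km']} km")
print(f"Capacity gap: {coh_cap // grey_cap}x, reach gap: "
      f"{coherent['reach_km'] // grey_client['reach_km']}x")
```

By these figures the coherent system carries roughly 256 times the capacity at 100 times the reach of a single grey client, which is the gap driving operators toward coherent optics even where they do not need it.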
Using 4,000-km transport gear to interconnect adjacent data center properties is a serious case of overkill (like hunting squirrels with a bazooka). And when a need arises, the market responds, in this case with direct-detect alternatives to coherent options.
Read the full article at Lightwave Online