Finisar strategic marketing chief discusses state of HPC interconnect for data center

Aug. 10, 2009
In a recent Q&A session with CI&M, Finisar's strategic marketing chief Jan Meise held forth on the state of active optical / HPC interconnect technology for the data center.

At June's International Supercomputing Conference (ISC'09) in Hamburg, Germany, Finisar unveiled its C.wire, a 150-gigabit-per-second (Gbps) optical link for storage, data center, and high-performance computing connectivity. Based on the CXP form factor, the C.wire active optical cable (AOC) uses fiber-optic technology to transmit parallel high-speed data in 100+ Gbps applications. In a recent Q&A session with Cabling Installation & Maintenance senior editor Matt Vincent, Finisar director of strategic marketing Jan Meise held forth on the state of active optical / HPC interconnect technology for the data center.

Cabling Installation & Maintenance (CIM): What's the state of high speed interconnection technology within the data center?

Jan Meise (JM), director of strategic marketing, Finisar: If you look at high-speed interconnects, we're still in the very infancy of what we hope will be an exponential curve. 10-Gig is like a sliver compared to anything else that's happening in the data center in terms of connectivity. 100-Mbit/sec and 1-Gbit/sec are still predominant. But we do see a lot of momentum trying to break that barrier, because there are a lot of chips coming out that need more I/O.

CIM: Hence the trend toward active optical cabling.

JM: What the larger trade press sometimes forgets is that somehow you need to connect your servers to the switch. We're always looking at servers, we're always looking at switches, but we never really look at the interconnects between the two. We have a lot of data coming out of our server chips, and that's where copper is getting shorter and shorter in terms of distance, and where we believe active optical cable is going to have strength as the market moves to high-speed interconnects. That's how we came up with our C.wire logo copy -- "What good is a fire hydrant without a hose?"

CIM: How do you think the ratio will ultimately break down in terms of volume of copper vs. volume of fiber deployed in the data center?

JM: Some recent analysis from Ovum-RHK, I thought, sort of hit the nail on the head in expressing how we've looked at optics and copper in the past, as opposed to how we're looking at them today. The old-school attitude was that there was always this battle between RJ-45 and optics. At the very beginning, technology adoption started with optics, then copper basically stole the show. The ratio of 1-Gbit Ethernet optical vs. 1000BASE-T deployments now is an estimated 1 to 20.

With the new school of next-generation high-speed interconnects, we're seeing customers look at universal ports -- connectors that can support not just one but both physical media, so copper and optics can be used in the same form factor. That is going to drive much faster adoption of next-generation interconnects, because the server, switch, and NIC card vendors no longer need to choose at the beginning which medium they're going to use. They can make the cards and the switches much earlier than if they had bet on one particular form factor that supports only one medium or the other.

CIM: In other words, the copper vs. fiber debate won't be subsiding anytime soon?

JM: We are actually, now, best friends with the copper cable guys. Because we say everyone has their place in this synergistic play, where copper cables can go shorter distances and active optical cables can go longer distances, but they're going to the exact same physical ports on the host board. It's up to the end user to choose what type of cabling they want to use between the two ends. That choice is made not when we design the board, but at the point of installation in the data center. It gives everyone much more flexibility than previously.

RELATED CONTENT: View a video interview from CI&M's sister site, Connector Specifier, with Finisar's Jan Meise, where he describes the company's initial serial 10 Gbit/sec active optical cable, the Laserwire.

CIM: Hybrid active optical and copper deployments - are these already happening?

JM: Absolutely. What we've seen in the past -- especially in the InfiniBand world -- is that people have been using copper cable for short links at the lowest cost, and active optical cables for longer distances, for architectures with challenging cable routes, and for hosts where board power dissipation is a challenge because they're using signal conditioning on their end, or where thick copper cables might block the airflow. At that point, active optical cables can really enable better architectures. The customer can choose what physical medium they want to use. Some deployments are 100% copper; we've seen some emerging that are 100% active optical.

CIM: InfiniBand has been quite the hot industry news topic lately, as far as the data center is concerned.

JM: Quadwire, Finisar's first parallel optics product, has really been ramping; it's quite interesting from a market-momentum perspective. With the InfiniBand protocol, we see quite a bit of momentum shifting away from DDR [double data rate], or 20-Gig connectivity, toward QDR [quad data rate]. We hear that the conversion from DDR to QDR is happening faster than what we might have thought last year, according to a forecast from IDC.

It's been really great to observe how far we've come since about two years ago, when [Finisar] wanted to enter the parallel optics market and we didn't have any parallel optics products. We always had the VCSEL arrays for the merchant market as components, but we never really had our own transceiver, let alone an active optical cable full of parallel optics. So we thought this would be the right direction to go, and all indications now are that we bet on the right horse.

CIM: Can you address the concept of Finisar's C.wire, i.e. the 150-Gbps active optical cable?

JM: We see certain business drivers that push the need for active optical cable even beyond 40-Gbit/sec: more powerful processors; aggregation of 10-Gbit Ethernet or InfiniBand traffic, especially in supercomputing; a single connector and cable for data center connectivity; and the ability to drive backplane interconnects for meshed as well as stacked architectures.

As far as the C.wire brand is concerned, if you look at the logos, the Laserwire had one dot on top of the logo, to signify one full-duplex channel. Next to the Quadwire logo, we had four dots, for four full-duplex channels. For our C.wire launch, we added a few more dots. We're really touting the idea that it's the first product toward 12x QDR InfiniBand -- 12 x 10 Gbit/sec full duplex -- in a CXP configuration; that's why we dubbed it "C.wire." You now see 12 dots next to the C.wire logo, because there are 12 channels there. It's also a good name because 'C' is the Roman numeral for one hundred, and 'C' is also the hexadecimal digit for 12 -- so it's 12 channels for 100-Gbit Ethernet.
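To make the channel-count arithmetic behind the branding concrete, here is a minimal, purely illustrative Python sketch; the product names and channel counts come from the interview, and the script is not Finisar code.

```python
# Illustrative arithmetic only; product names and channel counts are from the interview.
family_channels = {
    "Laserwire": 1,   # one dot in the logo: one full-duplex channel
    "Quadwire": 4,    # four dots: four full-duplex channels
    "C.wire": 12,     # twelve dots: twelve full-duplex channels in the CXP form factor
}

for product, channels in family_channels.items():
    print(f"{product}: {channels} full-duplex channel(s)")

# The 'C' naming puns: hexadecimal C equals 12 (the channel count),
# and the Roman numeral C is 100 (as in 100-Gbit Ethernet).
assert int("C", 16) == 12
```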

We also have a different form factor called CFP that supports longer-reach applications -- kilometers -- rather than what we're trying to do here with CXP, where we're targeting short-reach 100-Gbit Ethernet interconnects inside the data center.

CIM: The C.wire press release also mentions proprietary interconnects. What would some examples of these be?

JM: We are under NDA with all of those customers, but what you see is that some people don't really use InfiniBand or 100-Gigabit Ethernet as a protocol; they have their own switch fabrics which they use for very high-speed connectivity. So it's all still in the same HPC / supercomputing arena, where people are trying to connect CPUs together for massively parallel computing. And that's where we see people trying to get not only 10-Gbit/sec out of the channel -- they want to clock it even faster, at 12.5 Gbit/sec per channel. So our C.wire product family now has two variants, one that targets 12 x 10-Gbit/sec, and one going toward 12 x 12.5-Gbit/sec, which makes an aggregate of 150-Gbit/sec.
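For reference, the aggregate rates of the two variants work out as below; this is a small illustrative Python calculation using the per-channel rates Meise cites, with the variant labels paraphrased from the interview.

```python
# Illustrative only: aggregate bandwidth of the two C.wire variants (per direction).
CHANNELS = 12  # a CXP cable carries 12 full-duplex channels

per_channel_gbps = {
    "12 x 10 Gbit/sec (100-Gbit Ethernet / 12x QDR InfiniBand)": 10.0,
    "12 x 12.5 Gbit/sec (faster proprietary switch fabrics)": 12.5,
}

for label, rate in per_channel_gbps.items():
    print(f"{label}: {CHANNELS * rate:g} Gbit/sec aggregate")
# Prints 120 Gbit/sec and 150 Gbit/sec respectively.
```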

The news we released at ISC is that we're now showing 150 Gbit/sec -- not only 10 Gbit/sec, but the full 12.5 Gbit/sec per channel -- in this very small form factor. We target production for the latter part of the calendar year, in Q4 '09. We're really excited about the device, because we've worked very hard in the standardization efforts to make sure we have a good implementation from a host perspective -- i.e., low power, below 3 W in certain implementations -- and from a low-cost perspective, obviously. Our ultimate goal is to help bolster connectivity in the data center via active optical cables, and to drive this message down to the data center end customer.
