April 3, 2008 -- Mellanox Technologies, Ltd., a supplier of semiconductor-based server and storage interconnect products, has announced the availability of its dual-port ConnectX IB 40 Gbit/sec (QDR) InfiniBand Host Channel Adapter (HCA) for server, storage, and embedded applications. The company contends that the adapter delivers the highest data throughput and lowest latency of any standard PCI Express adapter available, thereby accelerating applications and data transfers in High Performance Computing (HPC) and enterprise data center (EDC) environments.
The company maintains that, with the growing deployment of multiple multi-core processors in server and storage systems, overall platform efficiency and CPU and memory utilization depend increasingly on interconnect bandwidth and latency. For optimal performance, platforms with several multi-core processors can require interconnect bandwidth of more than 10 Gbit/sec or even 20 Gbit/sec. The company says its ConnectX adapters deliver 40 Gbit/sec of bandwidth with lower latency, helping to ensure that no CPU cycles are wasted on interconnect bottlenecks. As a result, the ConnectX adapters can help IT managers maximize their return on investment in CPU and memory for server and storage platforms.
The dual-port 40 Gbit/sec ConnectX IB InfiniBand adapters maximize server and storage I/O throughput to enable superior application performance. The products pair a PCI Express 2.0 5 GT/s (PCIe Gen2) host bus interface with the 40 Gbit/sec InfiniBand ports to deliver up to 6460 MB/sec of bi-directional MPI application bandwidth over a single port, with latencies of less than 1 microsecond. All ConnectX IB products support hardware-based virtualization, which, Mellanox notes, enables data centers to save power and cost by consolidating slower-speed I/O adapters and reducing cabling complexity.
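As a rough sanity check on those figures (not part of Mellanox's announcement), the sketch below compares the quoted MPI bandwidth against the raw payload rates of a QDR InfiniBand 4x link and the PCIe Gen2 host interface. The x8 lane width and 8b/10b line encoding are assumptions based on the technology of the day, not figures from the release.

```python
# Back-of-the-envelope check of the quoted bandwidth figures (a sketch;
# assumes a PCIe 2.0 x8 host interface and standard 8b/10b line encoding).

def gbit_to_mbyte_per_s(gbit_per_s):
    """Convert Gbit/s to MB/s (1 byte = 8 bits, 1 MB = 10^6 bytes)."""
    return gbit_per_s * 1000 / 8

# QDR InfiniBand 4x link: 40 Gbit/s signaling; 8b/10b encoding leaves
# 32 Gbit/s of payload per direction.
ib_data_rate_mb = gbit_to_mbyte_per_s(40 * 8 / 10)        # ~4000 MB/s each way

# PCIe 2.0 x8 (assumed): 5 GT/s per lane; 8b/10b leaves 4 Gbit/s per lane.
pcie_data_rate_mb = gbit_to_mbyte_per_s(8 * 5 * 8 / 10)   # ~4000 MB/s each way

print(f"InfiniBand QDR 4x payload rate: {ib_data_rate_mb:.0f} MB/s per direction")
print(f"PCIe 2.0 x8 payload rate:       {pcie_data_rate_mb:.0f} MB/s per direction")
print(f"Theoretical bidirectional cap:  {2 * min(ib_data_rate_mb, pcie_data_rate_mb):.0f} MB/s")
# The quoted 6460 MB/s of bidirectional MPI bandwidth sits below this
# ~8000 MB/s raw cap once transaction-layer and MPI protocol overheads
# are accounted for.
```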
"Mellanox continues to lead the HPC and enterprise data center industry with advanced I/O products that deliver unparalleled performance for the most demanding applications," comments Thad Omura, vice president of product marketing at Mellanox Technologies. "We are excited to see efforts to deploy 40 Gbit/sec InfiniBand networks later this year which can leverage the mature InfiniBand software ecosystem established over the last several years at 10 and 20 Gbit/sec speeds."
The ConnectX IB device and adapter cards are available today. The device's compact design and low power requirements make it well suited for blade server and landed-on-motherboard designs. Adapter cards are available with the established microGiGaCN connector (MHJH29-XTC) as well as the newly adopted QSFP connector (MHQH29-XTC).
Switches supporting 40 Gbit/sec InfiniBand are expected from major OEMs later this year.