SUNNYVALE, CA -- The HyperTransport Consortium has unveiled a new design platform based on HyperTransport technology that the industry group claims has the potential to dramatically lower cost and energy consumption in the data center.
Called HyperShare, the new platform provides a framework for system and chip designers to combine and leverage the best of interconnect and switch fabric technologies -- including HyperTransport, InfiniBand, Ethernet and PCI Express -- in ways that make all network resources hardware-virtualized and sharable among nodes.
As a result, the consortium claims data centers and clusters can approach 100 percent resource efficiency, avoiding the hardware redundancy and memory over-provisioning that are dramatically driving up data center cost and power consumption.
The consortium also announced that member companies are developing new chips, systems and IP cores based on HyperShare. Details of the platform were unveiled at the Server Design Summit held this week at the Santa Clara Convention Center in Santa Clara, Calif.
"Because conventional networks do not allow the sharing of resources -- such as memory, accelerators, storage and remote PCI Express interfaces -- among nodes, each network node must be resource-provisioned for peak performance, resulting in underutilization, as well as cost and power inefficiency," says Mario Cavalli, general manager of the HyperTransport Consortium.
Cavalli continues, "The non-coherent global shared memory addressing at the core of the HyperShare platform enables nodes to share compute functions and resources, and also allows switchless network fabrics to be built using open-standard, off-the-shelf technology for the first time. In addition, new packet encapsulation specifications bring global memory addressing capabilities to existing Ethernet and InfiniBand networks, creating an opportunity to streamline and function-optimize computing nodes in data center and high performance computing cluster infrastructures."
The HyperTransport Consortium estimates that migration from Gigabit Ethernet (GbE) to 10 GbE will increase network infrastructure cost by 8x and power consumption by as much as 5x, with just a 22 percent latency performance improvement. In addition, carbon footprint taxation -- already enacted in countries like the U.K. and China, and proposed in the U.S. -- has the potential to double the operating costs of large-scale data centers, according to energy experts. By significantly streamlining data center hardware infrastructures, HyperShare gives data center managers and CIOs a powerful platform for overcoming escalating power and cost hurdles.
Non-Coherency Combines Unlimited Scalability with Software Transparency
HyperShare is the first open-standard, non-coherent global shared memory addressing platform featuring operating system (OS) and software application transparency that rivals cache-coherent architectures, while overcoming the scalability limitations of coherent solutions.
The HyperShare platform consists of the following:
-- High Node Count (HNC) Specification: HNC defines HyperTransport protocol extensions that deliver non-coherent global shared memory addressing and boot-level memory mapping set-up. HNC can be applied in both message-passing interface (MPI) and non-MPI platforms. In a HyperShare environment, the OS resides in each network node, but the cluster operates as it would with a single, shared OS, so the cluster continues to operate regardless of individual node failures. Cache-coherent architectures, by contrast, host the OS on a single master node, creating a single point of cluster failure, and face scalability limits that non-coherent architectures avoid, giving the latter virtually unlimited scalability.
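To make the idea of non-coherent global shared memory addressing concrete, the sketch below shows one common way such schemes work: a global address packs a node identifier together with a node-local address, so any node can name memory anywhere in the cluster without cache coherence traffic. The field widths and packing are purely illustrative assumptions, not taken from the HNC specification.

```python
# Illustrative sketch only: the HNC specification's actual address format
# is not public here, so the field widths below are hypothetical.
LOCAL_BITS = 40              # assumed bits for the node-local physical address
NODE_MASK = (1 << 16) - 1    # assumed 16-bit node identifier

def to_global(node_id: int, local_addr: int) -> int:
    """Pack a node id and a node-local address into one global address."""
    assert 0 <= local_addr < (1 << LOCAL_BITS), "local address out of range"
    return (node_id << LOCAL_BITS) | local_addr

def from_global(global_addr: int) -> tuple[int, int]:
    """Recover (node_id, local_addr) from a packed global address."""
    node_id = (global_addr >> LOCAL_BITS) & NODE_MASK
    local_addr = global_addr & ((1 << LOCAL_BITS) - 1)
    return node_id, local_addr
```

Because the mapping is set up once at boot time and no coherence state is tracked per cache line, adding nodes only widens the address space rather than growing a coherence directory, which is why non-coherent schemes scale where coherent ones struggle.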
-- Ethernet and InfiniBand HyperTransport Encapsulation Specifications: Two new specifications define encapsulation methods for transmitting the HyperTransport protocol over Ethernet and over InfiniBand switch fabrics. When encapsulated, the 10GbE or InfiniBand packet acts as the transport shell at the physical layer, while HyperShare takes over as the de facto network protocol. This brings the cost and power efficiency benefits of global shared memory addressing and cluster resource sharing to cluster infrastructures that could not previously support them at the hardware level.
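The general pattern of such encapsulation can be sketched as follows: the raw HyperTransport packet rides as the payload of an ordinary Ethernet II frame, identified by a dedicated EtherType. The framing and the EtherType value here are assumptions for illustration; the actual encapsulation format is defined by the consortium's specification.

```python
import struct

# Hypothetical sketch of HT-over-Ethernet encapsulation. The real
# specification's framing and EtherType are not reproduced here;
# 0x88B5 is an IEEE "local experimental" EtherType used as a stand-in.
HT_ETHERTYPE = 0x88B5

def encapsulate(dst_mac: bytes, src_mac: bytes, ht_packet: bytes) -> bytes:
    """Wrap a raw HyperTransport packet in an Ethernet II frame:
    14-byte header (dst MAC, src MAC, EtherType) followed by the payload."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, HT_ETHERTYPE)
    return header + ht_packet

def decapsulate(frame: bytes) -> bytes:
    """Strip the Ethernet header and return the HyperTransport payload."""
    _dst, _src, etype = struct.unpack("!6s6sH", frame[:14])
    assert etype == HT_ETHERTYPE, "not an encapsulated HyperTransport frame"
    return frame[14:]
```

The key point the sketch illustrates is that the outer network only sees ordinary frames, so existing Ethernet (or InfiniBand) switches forward them unmodified while the endpoints interpret the payload as HyperTransport traffic.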
-- HyperTransport Connector Specifications: These specifications add a physical layer complement to the HNC specification, and define compact, high-performance connectors for use with specially designed high-performance twinax cables. The specifications define a right-angle female cable connector, male cable connector and mezzanine connector, allowing the implementation of HyperShare-native switchless network fabrics based on Torus topologies. The solution delivers a combination of node scalability, minimized implementation costs, power efficiency and low latency performance not possible with conventional switched networks.
Standards-based Solutions for Cost-effective Torus Networks
When deployed natively with a Torus network topology, HyperShare can dramatically reduce network hardware equipment and management costs. While popular in high performance computing (HPC) applications, Torus topologies have not enjoyed traction in the commercial data center space because they have traditionally been implemented with proprietary, costly application-specific integrated circuit (ASIC) technology. By defining hardware interconnection and protocol processing, the HyperShare platform provides a comprehensive, standards-based, off-the-shelf approach to deploying scalable and efficient Torus networks. Torus architectures also allow network switch functionality to be embedded directly into the network interface controller, eliminating costly and power-hungry external spine and leaf switches, as well as their accompanying rack chassis and cooling systems.
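The structural property that lets a Torus network dispense with external switches is that every node connects directly to a fixed set of neighbors, with links wrapping around at each edge. A minimal sketch of that neighbor computation (my illustration, not part of any HyperShare specification):

```python
def torus_neighbors(coord, dims):
    """Return the 2 * len(dims) nearest neighbors of `coord` in a torus
    whose per-axis sizes are given by `dims`. Coordinates wrap around
    at each edge, so every node has the same degree and no external
    spine/leaf switches are needed to close the topology."""
    neighbors = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % size  # wraparound link
            neighbors.append(tuple(n))
    return neighbors
```

For example, in a 4x4x4 3D torus, node (0, 0, 0) has six neighbors, including (3, 0, 0) via the wraparound link; each node's integrated controller only needs enough switching capacity for those few local links, which is where the cost and power savings over a switched fabric come from.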
The HyperTransport Consortium estimates that native HyperShare Torus networks can be deployed at one-quarter the cost and power consumption of 10GbE networks, and one-half the cost and one-third the power of InfiniBand networks. In addition, HyperShare-native Torus clusters deliver superior latency performance, providing one-third the average node-to-node latency of today's best-in-class 10GbE networks.
HyperShare Solutions in Development
A variety of solutions leveraging the HyperShare platform are already in development. EB Engineering, a design services company based in Italy and a member of the HyperTransport Consortium, has developed the VirtualShare Cluster Resource Engine, which implements the full resource addressing and sharing of HyperShare in a highly compact, PCI Express- and FPGA-based network interface card for 2D and 3D Torus clusters. The solution features integrated network switching and special embedded HyperShare and Torus latency performance acceleration engines. EB Engineering also offers a variety of HyperShare IP cores that give semiconductor, system and subsystem manufacturers a way to bring HyperShare capability to market quickly and cost-effectively, without the need for in-house HyperShare expertise.
"Data center owners need solutions that allow them to expand their infrastructure and meet rising computational needs, without exceeding strict power and cost requirements," said Emilio Billi, CEO, EB Engineering. "HyperTransport and the new HyperShare platform give us the industry's first open, standards-based approach that can scale with data center requirements, while at the same time ensuring 100 percent capacity utilization and lowering total cost of ownership."
The HNC, Ethernet encapsulation and connector specifications are all available now for licensing from the HyperTransport Consortium. The InfiniBand encapsulation specification is expected to be finalized and available in the first quarter of 2011.
To learn more, go to www.hypertransport.org.