Mellanox Scalable HPC Solutions with NVIDIA GPUDirect Technology Enhance GPU-Based HPC Performance and Efficiency

Mellanox ConnectX-2 and NVIDIA Tesla Products with NVIDIA GPUDirect Have Been Field Proven to Accelerate System Communication by 30 Percent

SUNNYVALE, Calif. and YOKNEAM, Israel – May 27, 2010 – Mellanox® Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of high-performance, end-to-end connectivity solutions for data center servers and storage systems, today announced the immediate availability of NVIDIA GPUDirect™ technology with Mellanox ConnectX®-2 40Gb/s InfiniBand adapters, a combination that boosts GPU-based cluster efficiency and increases performance by an order of magnitude over today’s fastest high-performance computing clusters.

Today’s architecture requires the CPU to handle memory copies between the GPU and the InfiniBand network. Mellanox was the lead partner in the development of NVIDIA GPUDirect, a technology that reduces CPU involvement in this data path, cutting latency for GPU-to-InfiniBand communication by up to 30 percent. This faster communication can potentially translate into a gain of over 40 percent in application productivity when a large number of jobs are run on a server cluster. NVIDIA GPUDirect technology with Mellanox scalable HPC solutions is in use today in multiple HPC centers around the world, accelerating leading engineering and scientific applications.
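To illustrate the data path in question, the following minimal sketch shows the common send-side pattern in a GPU cluster: the application stages GPU results into a page-locked host buffer and then hands that buffer to the InfiniBand stack. This is an illustrative example only (assuming the CUDA runtime and an MPI library over InfiniBand; buffer sizes and ranks are placeholders), not NVIDIA or Mellanox sample code. Without GPUDirect, the CUDA and InfiniBand drivers each pin separate host buffers and the CPU must copy data between them; GPUDirect lets both drivers share the same pinned buffer, removing that extra CPU-driven copy beneath unchanged application code.

    /*
     * Hypothetical sketch of the GPU-to-InfiniBand send path that
     * NVIDIA GPUDirect streamlines. Assumes CUDA and an MPI library
     * over Mellanox InfiniBand; sizes and ranks are illustrative.
     */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;                          /* 1M floats, illustrative */
        float *d_buf, *h_buf;
        cudaMalloc((void **)&d_buf, n * sizeof(float)); /* GPU result buffer       */
        cudaHostAlloc((void **)&h_buf, n * sizeof(float),
                      cudaHostAllocDefault);            /* page-locked host buffer */

        if (rank == 0) {
            /* Stage GPU data into the pinned host buffer ...                      */
            cudaMemcpy(h_buf, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
            /* ... and hand the SAME buffer to the InfiniBand stack. Without
             * GPUDirect the CUDA and InfiniBand drivers each pin their own
             * host memory, so the CPU must copy between the two regions;
             * with GPUDirect they share this pinned buffer and that extra
             * CPU-driven copy disappears.                                         */
            MPI_Send(h_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(h_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            cudaMemcpy(d_buf, h_buf, n * sizeof(float), cudaMemcpyHostToDevice);
        }

        cudaFreeHost(h_buf);
        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }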

“As the popularity of GPU-based computing continues to increase, NVIDIA GPUDirect together with Mellanox’s offloading-based InfiniBand technology is critical to our world-leading HPC systems,” said Dr. HUO Zhigang of the National Research Center for Intelligent Computing Systems (NCIC). “We have implemented NVIDIA GPUDirect technology with Mellanox ConnectX-2 InfiniBand adapters and Tesla GPUs and have seen the immediate performance advantages that it brings to our high-performance applications. Mellanox offloading technology is an essential component in this overall solution, as it makes it possible to bypass the CPU for GPU-to-GPU communications.”

“The rapid increase in the performance of GPUs has made them a compelling platform for computationally-demanding tasks in a wide variety of application domains,” said Michael Kagan, CTO at Mellanox Technologies. “To ensure high levels of performance, efficiency and scalability, data communication must be performed as fast as possible, and without creating extra load on the CPUs. NVIDIA GPUDirect technology enables NVIDIA GPUs, coupled with Mellanox ConnectX-2 40Gb/s InfiniBand adapters, to communicate faster, increasing overall system performance and efficiency.”

GPU-based clusters are being used to perform compute-intensive tasks such as finite element computations, computational fluid dynamics and Monte Carlo simulations. Supercomputing centers are beginning to deploy GPUs in order to achieve new levels of performance. Because GPUs provide high core counts and strong floating-point capabilities, high-speed InfiniBand networking is required to connect the platforms and deliver high throughput and the lowest latency for GPU-to-GPU communications. Mellanox ConnectX-2 adapters are the world’s only InfiniBand solutions that provide the full offloading capabilities critical to avoiding CPU interrupts, data copies and system noise, while maintaining high efficiency for GPU-based clusters. Combined with the availability of NVIDIA GPUDirect and CORE-Direct™ technologies, Mellanox InfiniBand solutions are driving HPC to new performance levels.
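As a point of reference, the short sketch below shows the kind of standard MPI collective operation that offload technologies such as CORE-Direct aim to progress on the adapter itself rather than on the host CPU. It is an illustrative, self-contained example under that assumption, not Mellanox sample code; the values reduced are placeholders.

    /* Illustrative only: a standard MPI collective of the kind that
     * offload-capable adapters aim to progress in hardware, leaving
     * the host CPU free to continue computing instead of driving each
     * message of the reduction tree. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = (double)rank;   /* e.g. a partial result produced on the GPU */
        double total = 0.0;

        /* In a purely CPU-driven implementation, every step of this reduction
         * interrupts the host; with collective offload the adapter chains the
         * sends and receives itself. */
        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks = %f\n", total);

        MPI_Finalize();
        return 0;
    }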

“The combination of Mellanox 40Gb/s InfiniBand interconnects and GPU computing opens up a new world of possibilities to accelerate science and engineering research,” said Andy Keane, general manager, Tesla business at NVIDIA. “Products coming to market employing the technologies of both NVIDIA and Mellanox help set the stage for next-generation, GPU-based clusters that are both high performance and highly efficient.”


About Mellanox
Mellanox Technologies is a leading supplier of end-to-end connectivity solutions for servers and storage that optimize data center performance. Mellanox products deliver market-leading bandwidth, performance, scalability, power conservation and cost-effectiveness while converging multiple legacy network technologies into one future-proof solution. For the best in performance and scalability, Mellanox is the choice for Fortune 500 data centers and the world’s most powerful supercomputers. Founded in 1999, Mellanox Technologies is headquartered in Sunnyvale, California and Yokneam, Israel. For more information, visit Mellanox at www.mellanox.com.

Mellanox, BridgeX, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, InfiniPCI, PhyX and Virtual Protocol Interconnect are registered trademarks of Mellanox Technologies, Ltd. CORE-Direct and FabricIT are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

###
