Mellanox InfiniBand Interconnect Drives Clustered Supercomputers to Desk Environments at Supercomputing 2005

Affordable InfiniBand Delivers Breakthrough Personal Supercomputer Performance

SUPERCOMPUTING 2005, SEATTLE, WASHINGTON – NOVEMBER 15, 2005 – Mellanox™ Technologies Ltd, the leader in business and technical computing interconnects, announced that InfiniBand provides engineers, scientists, financial analysts, and movie editors with a cost-effective interconnect solution for supercomputing performance in a small form factor. The Personal Supercomputer (PSC), defined as a turnkey cluster of four to eight servers certified for computational applications and targeted at an individual or small user group, is now available with 10Gb/s or 20Gb/s low-latency InfiniBand interconnect.

“On standard Linpack benchmarks, InfiniBand enables 30 percent higher gigaflop performance compared to Gigabit Ethernet when used as the interconnect for a four-node PSC cluster, while adding only six percent to the total system price,” said Thad Omura, vice president of product marketing for Mellanox Technologies. “Even more impressive, on actual application benchmarks such as Fluent, a widely used fluid dynamics supercomputing application, up to 50 percent additional performance can be achieved using InfiniBand instead of Gigabit Ethernet.”

Supercomputing Performance in a Small Package

InfiniBand’s strong momentum in large-node-count, high-performance compute clusters has expanded into enterprise grids and data centers. Due to its unparalleled price/performance, InfiniBand is now migrating down to mainstream personal use. New 10Gb/s eight-port and 24-port InfiniBand switch solutions drive the per-port switch price below the $100 barrier. These affordable InfiniBand switches, combined with Mellanox’s low-priced InfiniBand adapters, provide a cost-effective interconnect for personal supercomputers. To complement the affordability of InfiniBand, server vendors are driving down the size of clustered computing form factors to that of a desktop machine.

“When we designed our PSC solution, we needed to pay attention to the personal environmental limitations,” said Patrick Scateni, vice president of sales and business development at Ciara Technologies. “Personal supercomputers offered by Ciara address the needs of low noise, low power consumption, and small dimensions while providing the world-class supercomputing performance that InfiniBand is proven to deliver.”

"Intel processor-based platforms continue to be the HPC platform of choice for the majority of users, and when clustered with InfiniBand, they deliver industry leading efficiency and performance per dollar," said Jim Pappas, director of technology initiatives, Server Platforms Group, Intel Corporation.

The Simplicity of the Personal Supercomputer

Engineers, scientists, financial analysts, and movie editors require supercomputers that are easy to use. Microsoft®’s answer to supercomputing ease-of-use is the introduction of Windows® Compute Cluster Server 2003. Compatible with InfiniBand interconnect, Windows Compute Cluster Server 2003 is designed to accelerate time-to-insight by providing an HPC platform that is simple to deploy, operate, and integrate with existing infrastructure, tools, and applications.

“As the world’s major provider of Computational Fluid Dynamics software used for simulation, visualization, and analysis of fluid and heat flow, Fluent is excited to see PSC platforms that drive our technology to the desktops of scientists and engineers,” said Barbara Hutchings, Director of Strategic Partnerships at Fluent. “The ease-of-use of Windows Compute Cluster Server 2003 and the performance of InfiniBand interconnect make PSC a perfect match for our software.”

“Our finite element analysis (FEA) software, LS-DYNA, widely used in the automotive industry in the areas of crashworthiness and metal forming, is most suitable for PSC InfiniBand clusters,” said Dr. Wayne Mindle, Sr. Engineer at Livermore Software Technology. “The use of simple, manageable, high-performance personal supercomputers will reduce both time and expense, resulting in getting improved solutions to market faster.”

From Personal to Departmental

Compute-intensive PSC clusters can be interconnected to create a unified departmental supercomputing cluster using already available personal space, without the need for dedicated real estate, cooling, power, or noise isolation. Servers interconnected in a departmental InfiniBand cluster deliver 50 percent higher processing efficiency than Gigabit Ethernet, and the gap widens as the cluster size increases. Compared to Gigabit Ethernet, the exceptional scalability of InfiniBand means each gigaflop of processing performance costs 20 percent less in PSC environments and 40 percent less in departmental clustered environments.

Come Visit Mellanox at Supercomputing 2005, Booth #902

Personal Supercomputer solutions will be on display in Mellanox’s booth at SC|05 through November 17. Three different vendors will demonstrate “Mellanox InfiniBand Accelerated” Personal Supercomputers that utilize the price/performance benefits of InfiniBand and the ease-of-use advantages of Windows Compute Cluster Server 2003. In addition, the cost-effective eight-port InfiniBand switch, available from Flextronics, will be on display.

About InfiniBand®

InfiniBand is an industry-standard interconnect technology defined by the InfiniBand Trade Association to deliver the exceptional I/O fabric performance demanded by data centers, high-performance computing, and embedded environments. Today, InfiniBand solutions provide high-bandwidth, low-latency 10Gb/s and 20Gb/s server-to-server and server-to-storage connections, and 30Gb/s and 60Gb/s switch-to-switch connections, with a defined roadmap to 120Gb/s performance. The InfiniBand architecture standardizes remote direct memory access (RDMA) and 100% reliable transport, hallmark capabilities of the industry’s lowest-latency, highest-bandwidth network fabric available today.

With the InfiniBand industry delivering affordable server, storage, and network platforms; open-source, interoperable software stacks; and cluster, grid, storage, and virtualization solutions, InfiniBand is the optimal fabric technology for world-class computing.

About Mellanox

Mellanox’s field-proven offering of interconnect solutions for communications, storage, and compute clustering is changing the shape of both business and technical computing. As the leader in industry-standard, InfiniBand-based silicon and card-based solutions, Mellanox is the driving force behind the most cost-effective, highest-performance interconnect solutions available. For more information, visit

Product and company names herein may be trademarks of their respective owners.

For more information:
Mellanox Technologies, Inc.
Thad Omura, Vice President of Product Marketing
