Mellanox Announces Availability of ScalableSHMEM 2.0 and ScalableUPC 2.0 for High Performance Computing Applications

Solutions Provide Unprecedented Scalability of SHMEM and PGAS/UPC Applications over InfiniBand

SC11, Seattle, WA. – November 14, 2011 – Mellanox Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of high-performance, end-to-end connectivity solutions for high performance computing, today announced the release of ScalableSHMEM 2.0 and ScalableUPC 2.0 for high performance computing applications. These parallel programming interfaces extend Mellanox's I/O capabilities of low latency, high throughput, low CPU overhead, Remote Direct Memory Access (RDMA) and advanced collective offloads into areas of the high performance computing market that require the one-sided communication and shared memory semantics of the SHMEM and PGAS/UPC programming models. Used in conjunction with Mellanox CORE-Direct® collective offloads and Mellanox Accelerated Messaging (MXM), ScalableSHMEM and ScalableUPC provide users with the highest performance, efficiency and scalability for their parallel applications.

“We are pleased to provide the highest performance capabilities for SHMEM and PGAS-based applications, utilizing our leading I/O solutions and our network offloading technology,” said Gilad Shainer, Senior Director, HPC and Technical Computing at Mellanox Technologies. “The new release is a result of a tight collaboration with Oak Ridge National Laboratory and our OEM partners as part of our mutual plans to provide solutions on the path for Exascale computing.”

“We have been strong proponents of getting PGAS environments such as OpenSHMEM, UPC and Chapel optimized for high-performance InfiniBand networks,” said Stephen Poole, Sr. Technical Director, ESSC and Chief Scientist of CSMD at Oak Ridge National Laboratory. “We are pleased to partner and work with Mellanox to increase InfiniBand usage through support of new communication libraries and optimized performance capabilities.”

Mellanox ScalableSHMEM 2.0 is developed in conjunction with the OpenSHMEM API definition, and Mellanox ScalableUPC 2.0 is developed in conjunction with the Berkeley UPC project.

Mellanox ScalableSHMEM 2.0 and ScalableUPC 2.0 are available as individual packages running over the industry-standard OpenFabrics Communication stack.

About Mellanox
Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet connectivity solutions and services for servers and storage. Mellanox products optimize data center performance and deliver industry-leading bandwidth, scalability, power conservation and cost-effectiveness while converging multiple legacy network technologies into one future-proof architecture. The company offers innovative solutions that address a wide range of markets including HPC, enterprise, mega warehouse data centers, cloud computing, Internet and Web 2.0.
Founded in 1999, Mellanox Technologies is headquartered in Sunnyvale, California, and Yokneam, Israel.

Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. FabricIT, MLNX-OS, and Unbreakable-Link are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
