Mellanox 20Gb/s InfiniBand Delivers World-Class Network File System Performance

New MTD2000 product development kit delivers 1400 megabytes per second of networked file system I/O performance over a single link

STORAGE NETWORKING WORLD – ORLANDO, FL. – October 31, 2006 –– Mellanox™ Technologies, Ltd., a leading supplier of semiconductor-based, high-performance interconnect products, today announced that 20Gb/s InfiniBand with open-source NFS-RDMA solutions delivers more than ten times the maximum theoretical throughput of NFS over a Gigabit Ethernet connection. Mellanox demonstrated this performance with client machines connected over 20Gb/s InfiniBand to the new Mellanox MTD2000 NFS-RDMA Server Product Development Kit (PDK), which features a RAID-5 back-end storage subsystem with SAS (Serial Attached SCSI) hard disk drives.

“Mellanox 20Gb/s InfiniBand and remote direct memory access (RDMA) opens up the performance bottleneck of networked file system I/O for applications such as high-performance computing, EDA, financial, backup, disaster recovery, and clustered database solutions,” said Sujal Das, director of software product management for Mellanox Technologies. “We believe the MTD2000 PDK will reduce development costs, accelerate time to market and enable OEMs to provide cost-effective NFS-RDMA storage systems based on software available from the open source community.”

Outstanding Performance
With two NFS-RDMA clients connected to the MTD2000 over 20Gb/s InfiniBand and reading from the file server cache, 1400MB/s (megabytes per second) of throughput was measured using IOzone. When actual write performance to disk was measured, nearly 400MB/s of throughput was demonstrated with RAID-5; this can be increased by cascading additional SAS hard disks. The measured read performance significantly exceeds the 125MB/s maximum theoretical throughput of a Gigabit Ethernet link, which is commonly used for networked file storage connectivity today.
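The "ten times" comparison is straightforward arithmetic; a quick sketch using only the figures stated in this release:

```python
# Theoretical maximum payload rate of a Gigabit Ethernet link:
# 1 Gb/s = 1000 Mb/s, divided by 8 bits per byte = 125 MB/s.
gige_max_mb_s = 1000 / 8

# Cached-read throughput measured with IOzone over 20Gb/s InfiniBand,
# as reported in this release.
measured_read_mb_s = 1400

speedup = measured_read_mb_s / gige_max_mb_s
print(f"{speedup:.1f}x the Gigabit Ethernet theoretical maximum")  # 11.2x
```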

MTD2000 PDK Contents
The PDK includes the MTD2000 Filer Unit, the MTD2000E Extender Storage Chassis (optional) for higher read/write disk I/O performance, associated documentation, and a comprehensive software development kit (SDK).

The MTD2000 Filer Unit is a 3U, 16-drive, 576GB high-performance storage unit based on industry-standard components, including a dual-core Intel® Woodcrest-based motherboard, a Mellanox 20Gb/s InfiniBand HCA adapter card, an LSI Logic MegaRAID SAS 8480E adapter, and SAS disk drives. The MTD2000E Extender Storage Chassis and a SAS cascading cable can be used to increase the number of disk drives to as many as 126. The SDK includes open-source NFS-RDMA server and client implementations, with RPMs tested and hardened for the MTD2000 platform. It also includes a user guide, benchmark charts, a QA test report, and software support terms and options.

The NFS-RDMA client software is interoperable and supported with OpenFabrics Enterprise Distribution (OFED) version 1.1 and supports major Linux operating system distributions. The NFS-RDMA server software, which is ready for installation on the MTD2000, is supported on Novell SLES 10 and is compatible and interoperable with OFED 1.1.
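For readers unfamiliar with NFS-RDMA on the client side, a mount typically looks like the following sketch. The module and option names are those used by the in-kernel Linux NFS/RDMA client and may differ in the OFED 1.1 packages described here; the server name and export path are hypothetical placeholders.

```
# Load the RDMA transport module for the NFS client
# (name per the in-kernel NFS/RDMA implementation)
modprobe xprtrdma

# Mount the export over RDMA; 20049 is the IANA-registered NFS/RDMA port.
# "filer1" and "/export" are hypothetical placeholders.
mount -t nfs -o rdma,port=20049 filer1:/export /mnt/nfs
```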

The Mellanox MTD2000 and MTD2000E components of the PDK are available today and can be used with open-source NFSoRDMA software. The SDK and associated support are scheduled to be available in December 2006.

Come Visit Mellanox at Storage Networking World (SNW) October 31 through November 3 in Orlando, Florida
Mellanox will be exhibiting at SNW in Booth #PP26 during the following Exhibit Hall hours: Wednesday, November 1 from 5:30pm to 8:30pm, and Thursday, November 2 from 12:00pm to 2:00pm, and 4:00pm until 7:15pm.

About Mellanox

Mellanox Technologies is a leading supplier of semiconductor-based, high-performance interconnect products that facilitate data transmission between servers and storage systems through communications infrastructure equipment. Our products are an integral part of a total solution focused on computing, storage and communication applications used in enterprise data centers, high-performance computing and embedded systems. Based on InfiniBand technology, our field-proven adapter and switch integrated circuits deliver industry-leading performance and capabilities, and serve as the building blocks for creating reliable and scalable interconnect solutions.

Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information, please visit

Mellanox is a registered trademark of Mellanox Technologies, Inc. and InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are trademarks of Mellanox Technologies, Inc. All other trademarks are property of their respective owners.

For more information:
Mellanox Technologies, Inc.
Brian Sparks
