Mellanox Delivers First 40Gb/s InfiniBand Adapters

Products usher in the next generation of I/O connectivity, enhancing data center productivity and efficiency

INTEL DEVELOPER FORUM, SHANGHAI, CHINA – April 1, 2008 – Mellanox® Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of semiconductor-based server and storage interconnect products, today announced the availability of the dual-port ConnectX IB 40Gb/s (QDR) InfiniBand Host Channel Adapter (HCA), the industry’s highest-performing adapter for server, storage, and embedded applications. The adapter products deliver the highest data throughput and lowest latency of any standard PCI Express adapter available today, accelerating applications and data transfers in High Performance Computing (HPC) and enterprise data center (EDC) environments. According to IDC, the market for InfiniBand HCAs is expected to grow at a compound annual growth rate (CAGR) of 51.5% to 991,878 ports in 2011, with a strong ramp for 40Gb/s adapters from 19,182 ports in 2008 to 781,104 ports in 2011.¹

With the growing deployment of multiple, multi-core processors in server and storage systems, overall platform efficiency and CPU and memory utilization depend increasingly on interconnect bandwidth and latency. For optimal performance, platforms with several multi-core processors can require interconnect bandwidth of more than 10Gb/s or even 20Gb/s. The new ConnectX adapter products deliver 40Gb/s bandwidth with lower latency, helping to ensure that no CPU cycles are wasted due to interconnect bottlenecks. As a result, ConnectX adapters help IT managers maximize their return on investment in CPU and memory for server and storage platforms.

“Mellanox continues to lead the HPC and enterprise data center industry with advanced I/O products that deliver unparalleled performance for the most demanding applications,” said Thad Omura, vice president of product marketing at Mellanox Technologies. “We are excited to see efforts to deploy 40Gb/s InfiniBand networks later this year which can leverage the mature InfiniBand software ecosystem established over the last several years at 10 and 20Gb/s speeds.”

Enterprise vertical applications, such as customer relationship management, database, financial services, insurance services, retail, virtualization, and web services, demand the leading I/O performance offered by ConnectX adapters to optimize data center productivity. High performance applications such as bioscience and drug research, data mining, digital rendering, electronic design automation, fluid dynamics, and weather analysis are also ideal candidates for ConnectX adapters: they demand the highest throughput to satisfy the I/O requirements of multiple processes, each of which needs access to large datasets to compute and store results.

“Our strategic partnerships with leading edge companies such as Mellanox enable Amphenol to be on the forefront of this exciting new technology,” said John Majernik, Product Marketing Manager of Amphenol Interconnect Products. “We will be among the first companies to bring QDR technology to a large scale HPC infrastructure where our high speed QDR copper cables will connect with Mellanox ConnectX adapters.”

“Gore’s copper cables meet the stringent demands of 40Gb/s data rates and will satisfy the majority of QDR interconnect requirements in clustered environments,” said Eric Gaver, global business leader for Gore’s high data rate cabling products. “We continue to work with Mellanox to bring to market both passive and low-power active copper technologies which will be essential for cost effective cluster scalability at QDR data-rates.”

“Companies are using servers with multi-core Intel® Xeon® processors to solve very complex problems,” said Jim Pappas, Director of Server Technology Initiatives for Intel’s Digital Enterprise Group. “Intel Connects Cables and high-bandwidth I/O delivered by solutions such as ConnectX via our PCI Express 2.0 servers are key for applications to deliver peak performance in clustered deployments. We also continue to work closely with Mellanox and the industry on development and testing of our new 40Gb/s optical fiber cable products.”

“Luxtera’s 40Gb/s Optical Active Cable cost effectively extends the reach of QDR InfiniBand, enabling large clusters to be implemented in data center environments,” said Marek Tlalka, vice president of marketing for Luxtera. “We are proud to be working with Mellanox to ensure interoperability.”

“The new Mellanox 40Gb/s InfiniBand adapters address a critical need for faster, low latency bandwidth in rapidly growing cluster-based data center interconnects,” said Tony Stelliga, CEO of Quellan Inc. “Quellan is pleased to be working with Mellanox and the InfiniBand industry on Active Copper Cabling that will enable this 40Gb/s throughput to run over thinner, lighter, lower power interconnects.”

“The demand for semiconductor and optical connectivity solutions is rapidly growing, especially for modules that can operate under the most intensive conditions at an aggregate bandwidth of 40Gb/s,” said Gary Moskovitz, president and CEO, Reflex Photonics. “Reflex Photonics supports the efforts of companies like Mellanox and we are addressing the market needs for cable solutions that are longer, lighter and less expensive through our InterBOARD™ line of products.”

“QDR InfiniBand solutions further exemplify the need for innovative optical interconnect solutions in HPC and enterprise data centers,” said Dr. Stan Swirhun, senior vice president and general manager of Zarlink's Optical Communications group. “With Zarlink's industry-leading DDR active optical cables ramping in HPC solutions, Zarlink is looking forward to working with industry leaders such as Mellanox to enable 40 Gb/s optical interconnects.”

The dual-port 40Gb/s ConnectX IB InfiniBand adapters maximize server and storage I/O throughput to enable the highest application performance. These products pair a PCI Express 2.0 5GT/s (PCIe Gen2) host bus interface with the 40Gb/s InfiniBand ports to deliver up to 6460 MB/s of bi-directional MPI application bandwidth² over a single port, with latencies of less than 1 microsecond. This and all ConnectX IB products support hardware-based virtualization, which enables data centers to save power and cost by consolidating slower-speed I/O adapters and reducing the associated cabling complexity.
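A back-of-the-envelope check puts the quoted figures in context. The sketch below assumes a 4X InfiniBand link (4 lanes at 10Gb/s QDR signaling), 8b/10b line coding (10 signal bits carry 8 data bits), and a PCIe 2.0 x8 host interface; these encoding and lane-width assumptions are standard for this generation but are not stated in the release itself.

```python
# Rough arithmetic relating raw signaling rate to usable bandwidth.
# Assumptions (not from the release): 4X link width, 8b/10b coding,
# PCIe 2.0 x8 host interface.

def effective_gbps(signal_rate_gbps, lanes, coding_efficiency=8 / 10):
    """Usable data rate of a serial link after line-coding overhead."""
    return signal_rate_gbps * lanes * coding_efficiency

# QDR InfiniBand: 10 Gb/s signaling per lane, 4 lanes -> 40 Gb/s raw,
# 32 Gb/s of payload after 8b/10b overhead.
qdr_data_gbps = effective_gbps(10, 4)

# Both directions at once, converted to MB/s (1 Gb/s = 1000/8 MB/s).
qdr_bidir_mbps = 2 * qdr_data_gbps * 1000 / 8

# PCIe 2.0 x8: 5 GT/s per lane, also 8b/10b encoded.
pcie_data_gbps = effective_gbps(5, 8)

print(qdr_data_gbps)    # 32.0 Gb/s per direction
print(qdr_bidir_mbps)   # 8000.0 MB/s theoretical ceiling
print(pcie_data_gbps)   # 32.0 Gb/s -- the host bus matches the link rate
```

On these assumptions the theoretical bi-directional ceiling is 8000 MB/s, so the 6460 MB/s measured MPI bandwidth represents roughly 80% of the wire rate after protocol and software overheads, and the PCIe 2.0 x8 host interface is fast enough not to bottleneck a single QDR port's payload rate.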

The ConnectX IB device and adapter cards are available today. The device’s compact design and low power requirements make it well suited for blade server and Landed-on-Motherboard (LOM) designs (order number MT25408A0-FCC-QI). Adapter cards are available with the established microGiGaCN connector (MHJH29-XTC) as well as the newly adopted QSFP connector (MHQH29-XTC). Switches from major OEMs supporting 40Gb/s InfiniBand are expected later this year.

Visit Mellanox Technologies at the Intel Developer Forum, Shanghai – April 2-3, 2008
Visit the Mellanox booth (#CE006) at IDF to learn more about the benefits of the QDR ConnectX IB adapters in addition to Mellanox’s full line of leading InfiniBand and Ethernet connectivity products.

About Mellanox
Mellanox Technologies is a leading supplier of semiconductor-based, high-performance InfiniBand and Ethernet connectivity products that facilitate data transmission between servers, communications infrastructure equipment and storage systems. The company’s products are an integral part of a total solution focused on computing, storage and communication applications used in enterprise data centers, high-performance computing and embedded systems.

Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information, visit Mellanox at www.mellanox.com.

1. Source: IDC, "Worldwide InfiniBand 2007-2011 Forecast Update," Doc #211375, March 2008
2. Performance data measured with MVAPICH 1.0.0 on the Intel quad-core PCI Express Gen2 platform

Safe Harbor Statement under the Private Securities Litigation Reform Act of 1995:
All statements included or incorporated by reference in this release, other than statements or characterizations of historical fact, are forward-looking statements. These forward-looking statements are based on our current expectations, estimates and projections about our industry and business, management's beliefs and certain assumptions made by us, all of which are subject to change.

Forward-looking statements can often be identified by words such as "anticipates," "expects," "intends," "plans," "predicts," "believes," "seeks," "estimates," "may," "will," "should," "would," "could," "potential," "continue," "ongoing," similar expressions and variations or negatives of these words. These forward-looking statements are not guarantees of future results and are subject to risks, uncertainties and assumptions that could cause our actual results to differ materially and adversely from those expressed in any forward-looking statement.

The risks and uncertainties that could cause our results to differ materially from those expressed or implied by such forward-looking statements include the continued growth rate of the market for InfiniBand HCA ports and the projected growth in demand for 40Gb/s adapters; the continued growth in demand for HPC products; the continued, increased demand for industry standards-based technology; our ability to react to trends and challenges in our business and the markets in which we operate; our ability to anticipate market needs or develop new or enhanced products to meet those needs; the adoption rate of our products; our ability to establish and maintain successful relationships with our OEM partners; our ability to compete in our industry; fluctuations in demand, sales cycles and prices for our products and services; our ability to protect our intellectual property rights; general political, economic and market conditions and events; and other risks and uncertainties described more fully in our documents filed with or furnished to the Securities and Exchange Commission.

More information about the risks, uncertainties and assumptions that may impact our business is set forth in our Form 10-Q filed with the SEC on November 8, 2007, and our Form 10-K filed with the SEC on March 26, 2007, including “Risk Factors”. All forward-looking statements in this press release are based on information available to us as of the date hereof, and we assume no obligation to update these forward-looking statements.

Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

###



For more information:
Mellanox Technologies
Brian Sparks
408-970-3400
media@mellanox.com

