REFERENCE GUIDE
ConnectX FDR InfiniBand and 10/40GbE Adapter Cards
Why Mellanox?
Mellanox delivers the industry’s most robust end-to-end
InfiniBand and Ethernet portfolios. Our mature, field-proven
product offerings include solutions for I/O, switching, and
advanced management software, making us the only partner
you’ll need for high-performance computing and data center
connectivity. Mellanox’s scale-out FDR 56Gb/s InfiniBand and
10/40GbE products enable users to benefit from a far more
scalable, lower-latency, virtualized fabric with lower overall
fabric costs and power consumption, greater efficiencies,
and simplified management, delivering the best return on
investment.
Why FDR 56Gb/s InfiniBand?
Enables the highest performance and lowest latency
– Proven scalability to tens-of-thousands of nodes
– Maximum return on investment
Highest efficiency / maintains a balanced system, ensuring
highest productivity
– Provides full bandwidth for PCIe 3.0 servers
– Proven in multi-process networking requirements
– Low CPU overhead and high server utilization
Performance-driven architecture
– MPI latency of 0.7us, >12GB/s with FDR 56Gb/s InfiniBand (bi-directional)
– MPI message rate of >90 million/sec
Superior application performance
– From 30% to over 100% HPC application performance increase
– Doubles storage throughput, cutting backup time in half
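The ">12GB/s bi-directional" figure above can be sanity-checked with simple link arithmetic — a sketch assuming FDR's nominal 56Gb/s signaling rate and 64/66b line encoding (the encoding detail is an assumption on our part, not stated in this guide):

```python
# Back-of-the-envelope check of the ">12GB/s bi-directional" FDR claim.
# Assumption (not from this guide): FDR links use 64/66b line encoding
# on top of a nominal 56Gb/s signaling rate.

def fdr_effective_gbytes_per_sec(signaling_gbits=56.0, encoding=64.0 / 66.0):
    """Effective one-direction data rate in GB/s after line encoding."""
    return signaling_gbits * encoding / 8.0

one_way = fdr_effective_gbytes_per_sec()   # ~6.8 GB/s per direction
bidirectional = 2 * one_way                # ~13.6 GB/s both directions

print(f"one-way: {one_way:.2f} GB/s, bi-directional: {bidirectional:.2f} GB/s")
```

With those assumptions the bidirectional data rate works out to roughly 13.6 GB/s, comfortably above the >12GB/s quoted.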
InfiniBand Market Applications
InfiniBand is increasingly becoming the interconnect of choice not just in
high-performance computing environments, but also in mainstream enterprise
grids, data center virtualization solutions, storage, and embedded
environments. The low latency and high performance of InfiniBand, coupled
with the economic benefits of its consolidation and virtualization
capabilities, give end-customers the ideal combination as they build out
their applications.
Why Mellanox 10/40GbE?
Mellanox’s scale-out 10/40GbE products enable users to benefit
from a far more scalable, lower latency, and virtualized fabric
with lower overall fabric costs and power consumption, greater
efficiencies, and simplified management than traditional Ethernet
fabrics. Utilizing 10 and 40GbE NICs, core and top-of-rack switches,
and fabric optimization software, a broader array of end-users can
benefit from a more scalable and high-performance Ethernet fabric.
Mellanox adapter cards are designed to drive the full performance of PCIe 2.0 and 3.0 I/O over
high-speed FDR 56Gb/s InfiniBand and 10/40GbE fabrics. ConnectX InfiniBand and Ethernet adapters
lead the market in performance, throughput, and power efficiency, with the lowest latency. ConnectX
adapter cards provide the highest-performing and most flexible interconnect solution for data centers,
high-performance computing, Web 2.0, cloud computing, financial services, and embedded environments.
Key Features
– 0.7us application to application latency
– 40 or 56Gb/s Infi niBand ports
– 10 or 40Gb/s Ethernet Ports
– PCI Express 3.0 (up to 8GT/s)
– CPU offload of transport operations
– End-to-end QoS & congestion control
– Hardware-based I/O virtualization
– TCP/UDP/IP stateless offload
Key Advantages
– World-class cluster performance
– High-performance networking and storage access
– Guaranteed bandwidth & low-latency services
– Reliable transport
– End-to-end storage integrity
– I/O consolidation
– Virtualization acceleration
– Scales to tens-of-thousands of nodes
Mellanox 40 and 56Gb/s InfiniBand switches deliver the highest
performance and density with a complete fabric management solution to
enable compute clusters and converged data centers to operate at any scale
while reducing operational costs and infrastructure complexity. Scalable switch
building blocks from 36 to 648 ports in a single enclosure give IT managers the
flexibility to build networks up to tens-of-thousands of nodes.
Key Features
– 72.5Tb/s switching capacity
– 100ns to 510ns switching latency
– Hardware-based routing
– Congestion control
– Quality of Service enforcement
– Up to 6 separate subnets
– Temperature sensors and voltage monitors
Key Advantages
– High-performance fabric for parallel computation or I/O convergence
– Wirespeed InfiniBand switch platform up to 56Gb/s per port
– High-bandwidth, low-latency fabric for compute-intensive applications
InfiniBand and Ethernet Switches
3828RG Rev 1.0
Mellanox’s scale-out 10 and 40 Gigabit Ethernet switch products offer the industry’s highest-density
Ethernet switching. The full portfolio of top-of-rack 1U Ethernet switches delivers 10 or 40Gb/s
Ethernet ports to the server or to the next level of switching. These switches enable users to
benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric
costs and power consumption, greater efficiencies, and simplified management than traditional
Ethernet fabrics.
Key Features
– Up to 36 ports of 40Gb/s non-blocking Ethernet
switching in 1U
– Up to 64 ports of 10Gb/s non-blocking Ethernet
switching in 1U
– 230ns to 250ns port-to-port switching latency
– Low power
Key Advantages
– Optimal for handling data center east-west traffic, computation, or I/O convergence
– Highest switching bandwidth in 1U
– Low OpEx and CapEx and highest ROI
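The 1U port counts above imply the following aggregate switching capacities — derived here with simple port-count × speed × full-duplex arithmetic; the totals are our computation, not figures quoted in this guide:

```python
# Derive aggregate 1U switching capacity from the port configurations above.
# Capacity = ports * speed per port * 2 (full duplex), reported in Tb/s.

def switching_capacity_tbps(ports, gbps_per_port):
    """Aggregate full-duplex switching capacity in Tb/s."""
    return ports * gbps_per_port * 2 / 1000.0

print(switching_capacity_tbps(36, 40))  # 2.88 Tb/s for 36x 40GbE in 1U
print(switching_capacity_tbps(64, 10))  # 1.28 Tb/s for 64x 10GbE in 1U
```

The same arithmetic applied to the 648-port, 56Gb/s InfiniBand director gives about 72.6 Tb/s, consistent with the 72.5Tb/s switching capacity listed earlier.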