Exported onOct/22/2020 07:16 PM
https://docs.mellanox.com/x/NCQuAQ
NVIDIA Mellanox ConnectX-6 InfiniBand/VPI
Adapter Cards User Manual
Table of Contents
Introduction
  Product Overview
    ConnectX-6 PCIe x8 Card
    ConnectX-6 PCIe x16 Card
    ConnectX-6 Socket Direct™ Cards
      ConnectX-6 Dual-slot Socket Direct Cards (2x PCIe x16)
      ConnectX-6 Single-slot Socket Direct Cards (2x PCIe x8 in a row)
  Package Contents
    ConnectX-6 PCIe x8/x16 Adapter Cards
    ConnectX-6 Socket Direct Cards (2x PCIe x16)
  Features and Benefits
    Operating Systems/Distributions
    Connectivity
    Manageability
Interfaces
  InfiniBand Interface
  Ethernet QSFP56 Interfaces
  PCI Express Interface
  LED Interface
  Heat Sink Interface
  SMBus Interface
  Voltage Regulators
Hardware Installation
  Safety Warnings
  Installation Procedure Overview
  System Requirements
    Hardware Requirements
    Airflow Requirements
    Software Requirements
  Safety Precautions
  Pre-Installation Checklist
  Bracket Replacement Instructions
  Installation Instructions
    Cables and Modules
  Identifying the Card in Your System
  ConnectX-6 PCIe x8/x16 Installation Instructions
    Installing the Card
    Uninstalling the Card
  ConnectX-6 Socket Direct (2x PCIe x16) Installation Instructions
    Installing the Card
    Uninstalling the Card
Driver Installation
  Linux Driver Installation
    Prerequisites
    Downloading Mellanox OFED
    Installing Mellanox OFED
      Installation Script
      Installation Procedure
      Installation Results
      Installation Logs
      openibd Script
      Driver Load Upon System Boot
      mlnxofedinstall Return Codes
      Uninstalling MLNX_OFED
    Installing MLNX_OFED Using YUM
      Setting up MLNX_OFED YUM Repository
      Installing MLNX_OFED Using the YUM Tool
      Uninstalling MLNX_OFED Using the YUM Tool
    Installing MLNX_OFED Using apt-get Tool
      Setting up MLNX_OFED apt-get Repository
      Installing MLNX_OFED Using the apt-get Tool
      Uninstalling MLNX_OFED Using the apt-get Tool
    Updating Firmware After Installation
      Updating the Device Online
      Updating the Device Manually
      Updating the Device Firmware Automatically upon System Boot
    UEFI Secure Boot
      Enrolling Mellanox's x.509 Public Key on Your Systems
      Removing Signature from Kernel Modules
    Performance Tuning
  Windows Driver Installation
    Software Requirements
    Downloading Mellanox WinOF-2 Driver
      Attended Installation
      Unattended Installation
      Installation Results
    Uninstalling Mellanox WinOF-2 Driver
      Attended Uninstallation
      Unattended Uninstallation
    Extracting Files Without Running Installation
    Firmware Upgrade
  VMware Driver Installation
    Hardware and Software Requirements
    Installing Mellanox NATIVE ESXi Driver for VMware vSphere
    Removing Earlier Mellanox Drivers
    Firmware Programming
Updating Adapter Firmware
Troubleshooting
  General Troubleshooting
  Linux Troubleshooting
  Windows Troubleshooting
Specifications
  MCX651105A-EDAT Specifications
  MCX653105A-HDAT Specifications
  MCX653106A-HDAT Specifications
  MCX653105A-ECAT Specifications
  MCX653106A-ECAT Specifications
  MCX654105A-HCAT Specifications
  MCX654106A-HCAT Specifications
  MCX654106A-ECAT Specifications
  MCX653105A-EFAT Specifications
  MCX653106A-EFAT Specifications
  Adapter Card and Bracket Mechanical Drawings and Dimensions
    ConnectX-6 PCIe x16 Adapter Card
    ConnectX-6 PCIe x8 Adapter Card
    Auxiliary PCIe Connection Card
    Tall Bracket
    Short Bracket
PCIe Express Pinouts Description for Single-Slot Socket Direct Card
Finding the GUID/MAC on the Adapter Card
Document Revision History
About This Manual
This User Manual describes NVIDIA® Mellanox® ConnectX®-6 InfiniBand/VPI adapter cards. It
provides details about the board's interfaces and specifications, the software and firmware required
to operate the board, and relevant documentation.
Ordering Part Numbers
The table below provides the ordering part numbers (OPN) for the available ConnectX-6 InfiniBand/VPI
adapter cards.
OPN                Marketing Description
MCX654106A-ECAT    ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR InfiniBand and 100GbE), dual-port QSFP56, Socket Direct 2x PCIe 3.0/4.0 x16, tall bracket
MCX653105A-EFAT    ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), single-port QSFP56, PCIe 3.0/4.0 Socket Direct 2x8 in a row, tall bracket
MCX653106A-EFAT    ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), dual-port QSFP56, PCIe 3.0/4.0 Socket Direct 2x8 in a row, tall bracket
MCX651105A-EDAT    ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), single-port QSFP56, PCIe 4.0 x8, tall bracket
MCX653105A-ECAT    ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), single-port QSFP56, PCIe 3.0/4.0 x16, tall bracket
MCX653106A-ECAT    ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), dual-port QSFP56, PCIe 3.0/4.0 x16, tall bracket
MCX653105A-HDAT    ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, single-port QSFP56, PCIe 3.0/4.0 x16, tall bracket
MCX653106A-HDAT    ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, dual-port QSFP56, PCIe 3.0/4.0 x16, tall bracket
MCX654105A-HCAT    ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, single-port QSFP56, Socket Direct 2x PCIe 3.0/4.0 x16, tall bracket
MCX654106A-HCAT    ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, dual-port QSFP56, Socket Direct 2x PCIe 3.0/4.0 x16, tall bracket
Intended Audience
This manual is intended for the installer and user of these cards.
The manual assumes basic familiarity with InfiniBand and Ethernet network and architecture
specifications.
Technical Support
Customers who purchased Mellanox products directly from Mellanox are invited to contact us through
the following methods:
URL: http://www.mellanox.com > Support
Tel: +1.408.916.0055
Customers who purchased Mellanox M-1 Global Support Services, please see your contract for details
regarding Technical Support.
Customers who purchased Mellanox products through a Mellanox-approved reseller should first seek
assistance through their reseller.
Related Documentation
Mellanox OFED for Linux User Manual and Release Notes: User Manual describing OFED features, performance, band diagnostic, tools content and configuration. See Mellanox OFED for Linux Documentation.
WinOF-2 for Windows User Manual and Release Notes: User Manual describing WinOF-2 features, performance, Ethernet diagnostic, tools content and configuration. See WinOF-2 for Windows Documentation.
Mellanox VMware for Ethernet User Manual: User Manual describing the various components of the Mellanox ConnectX® NATIVE ESXi stack. See http://www.mellanox.com > Products > Software > Ethernet Drivers > VMware Driver > User Manual.
Mellanox VMware for Ethernet Release Notes: Release notes for the Mellanox ConnectX® NATIVE ESXi stack. See http://www.mellanox.com > Software > Ethernet Drivers > VMware Driver > Release Notes.
Mellanox Firmware Utility (mlxup) User Manual and Release Notes: Mellanox firmware update and query utility used to update the firmware. See http://www.mellanox.com > Products > Software > Firmware Tools > mlxup Firmware Utility.
Mellanox Firmware Tools (MFT) User Manual: User Manual describing the set of MFT firmware management tools for a single node. See MFT User Manual.
IEEE Std 802.3 Specification: IEEE Ethernet specification at http://standards.ieee.org
PCI Express 3.0/4.0 Specifications: Industry-standard PCI Express Base and Card Electromechanical Specifications at https://pcisig.com/specifications
Mellanox LinkX Interconnect Solutions: Mellanox LinkX InfiniBand cables and transceivers are designed to maximize the performance of High-Performance Computing networks, requiring high-bandwidth, low-latency connections between compute nodes and switch nodes. Mellanox offers one of the industry's broadest portfolios of QDR/FDR10 (40Gb/s), FDR (56Gb/s), EDR/HDR100 (100Gb/s) and HDR (200Gb/s) cables, including Direct Attach Copper cables (DACs), copper splitter cables, Active Optical Cables (AOCs) and transceivers, in a wide range of lengths from 0.5m to 10km. In addition to meeting IBTA standards, Mellanox tests every product in an end-to-end environment, ensuring a Bit Error Rate of less than 1E-15. Read more at https://www.mellanox.com/products/interconnect/infiniband-overview.php
Document Conventions
When discussing memory sizes, MB and MBytes are used in this document to mean size in megabytes.
The use of Mb or Mbits (lowercase b) indicates size in megabits. IB is used in this document to mean
InfiniBand. In this document, PCIe is used to mean PCI Express.
Revision History
A list of the changes made to this document is provided in Document Revision History.
Introduction
Product Overview
This is the user guide for Mellanox Technologies VPI adapter cards based on the ConnectX®-6
integrated circuit device. ConnectX-6 connectivity provides the highest-performing, lowest-latency and
most flexible interconnect solution for PCI Express Gen 3.0/4.0 servers used in enterprise datacenters
and high-performance computing environments.
ConnectX-6 Virtual Protocol Interconnect® adapter cards provide up to two ports of 200Gb/s for
InfiniBand and Ethernet connectivity, sub-600ns latency and 200 million messages per second,
enabling the highest performance and most flexible solution for the most demanding High-
Performance Computing (HPC), storage, and datacenter applications.
ConnectX-6 is a groundbreaking addition to the Mellanox ConnectX series of industry-leading adapter
cards. In addition to all the existing innovative features of past ConnectX versions, ConnectX-6 offers a
number of enhancements that further improve the performance and scalability of datacenter
applications.
ConnectX-6 adapter cards are offered in a variety of PCIe configurations, as described in the below
table.
ConnectX-6 PCIe x8 Card
  MCX651105A-EDAT: ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), single-port QSFP56, PCIe 4.0 x8, tall bracket
ConnectX-6 PCIe x16 Card
  MCX653105A-HDAT: ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, single-port QSFP56, PCIe 4.0 x16, tall bracket
  MCX653106A-HDAT: ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, dual-port QSFP56, PCIe 3.0/4.0 x16, tall bracket
  MCX653105A-ECAT: ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), single-port QSFP56, PCIe 3.0/4.0 x16, tall bracket
  MCX653106A-ECAT: ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), dual-port QSFP56, PCIe 3.0/4.0 x16, tall bracket
ConnectX-6 Dual-slot Socket Direct Cards (2x PCIe x16)
  MCX654105A-HCAT: ConnectX®-6 VPI adapter card kit, HDR IB (200Gb/s) and 200GbE, single-port QSFP56, Socket Direct 2x PCIe 3.0 x16, tall brackets
  MCX654106A-HCAT: ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, dual-port QSFP56, Socket Direct 2x PCIe 3.0/4.0 x16, tall bracket
  MCX654106A-ECAT: ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR InfiniBand and 100GbE), dual-port QSFP56, Socket Direct 2x PCIe 3.0/4.0 x16, tall bracket
ConnectX-6 Single-slot Socket Direct Cards (2x PCIe x8 in a row)
  MCX653105A-EFAT: ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), single-port QSFP56, PCIe 3.0/4.0 Socket Direct 2x8 in a row, tall bracket
  MCX653106A-EFAT: ConnectX®-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), dual-port QSFP56, PCIe 3.0/4.0 Socket Direct 2x8 in a row, tall bracket
Make sure to use a PCIe slot that is capable of supplying the required power and airflow to the
ConnectX-6 as stated in Specifications.
ConnectX-6 PCIe x8 Card
ConnectX-6 with a single PCIe x8 slot can support a bandwidth of up to 100Gb/s in a PCIe Gen 4.0 slot.
Part Number: MCX651105A-EDAT
Form Factor/Dimensions: PCIe Half Height, Half Length / 167.65mm x 68.90mm
Data Transmission Rate: Ethernet: 10/25/40/50/100 Gb/s; InfiniBand: SDR, DDR, QDR, FDR, EDR, HDR100
Network Connector Type: Single-port QSFP56
PCIe x8 through Edge Connector: PCIe Gen 3.0 / 4.0 SERDES @ 8.0GT/s / 16.0GT/s
RoHS: RoHS Compliant
Adapter IC Part Number: MT28908A0-XCCF-HVM
ConnectX-6 PCIe x16 Card
ConnectX-6 with a single PCIe x16 slot can support a bandwidth of up to 100Gb/s in a PCIe Gen 3.0 slot,
or up to 200Gb/s in a PCIe Gen 4.0 slot.
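As a rough sanity check of these figures (a sketch of the arithmetic only, assuming the standard 128b/130b PCIe Gen 3.0/4.0 line encoding and ignoring packet/protocol overhead), the raw data rate of an x16 link is:

16 \times 8\,\mathrm{GT/s} \times \tfrac{128}{130} \approx 126\,\mathrm{Gb/s} \quad (\text{Gen 3.0 x16})

16 \times 16\,\mathrm{GT/s} \times \tfrac{128}{130} \approx 252\,\mathrm{Gb/s} \quad (\text{Gen 4.0 x16})

After protocol overhead, a Gen 3.0 x16 link therefore sustains on the order of 100Gb/s of network traffic, while a Gen 4.0 x16 link has headroom for a 200Gb/s port. This is also why the dual-slot Socket Direct cards pair two Gen 3.0 x16 buses to reach 200Gb/s.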
Part Number: MCX653105A-ECAT, MCX653106A-ECAT, MCX653105A-HDAT, MCX653106A-HDAT
Form Factor/Dimensions: PCIe Half Height, Half Length / 167.65mm x 68.90mm
Data Transmission Rate:
  MCX653105A-ECAT / MCX653106A-ECAT: Ethernet: 10/25/40/50/100 Gb/s; InfiniBand: SDR, DDR, QDR, FDR, EDR, HDR100
  MCX653105A-HDAT / MCX653106A-HDAT: Ethernet: 10/25/40/50/100/200 Gb/s; InfiniBand: SDR, DDR, QDR, FDR, EDR, HDR100, HDR
Network Connector Type: Single-port QSFP56 (MCX653105A-ECAT, MCX653105A-HDAT); Dual-port QSFP56 (MCX653106A-ECAT, MCX653106A-HDAT)
PCIe x16 through Edge Connector: PCIe Gen 3.0 / 4.0 SERDES @ 8.0GT/s / 16.0GT/s
RoHS: RoHS Compliant
Adapter IC Part Number: MT28908A0-XCCF-HVM
ConnectX-6 Socket Direct™ Cards
The Socket Direct technology offers improved performance to dual-socket servers by enabling
direct access from each CPU in a dual-socket server to the network through its dedicated PCIe
interface.
Please note thatConnectX-6 Socket Direct cards do not support Multi-Host functionality (i.e.
connectivity to two independent CPUs). For ConnectX-6 Socket Direct card with Multi-Host
functionality, please contact Mellanox.
12
ConnectX-6 Socket Direct cards are available in two configurations: Dual-slot Configuration (2x PCIe
x16) and Single-slot Configuration (2x PCIe x8).
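As an illustration of the Socket Direct topology, the minimal sketch below (an example, not part of the Mellanox tooling) lists the Mellanox PCIe functions visible to a Linux host and the NUMA node each one attaches to; on a correctly cabled Socket Direct card, the two halves of the connection should report different NUMA nodes. The sysfs paths and the 0x15b3 vendor ID are standard Linux/PCI-SIG values; device addresses will differ per system.

import glob, os

# Walk all PCI functions and keep only Mellanox devices (vendor ID 0x15b3),
# then print the NUMA node each function is attached to.
for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    with open(os.path.join(dev, "vendor")) as f:
        if f.read().strip() != "0x15b3":
            continue
    with open(os.path.join(dev, "numa_node")) as f:
        numa = f.read().strip()
    print(os.path.basename(dev), "-> NUMA node", numa)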
ConnectX-6 Dual-slot Socket Direct Cards (2x PCIe x16)
In order to obtain 200Gb/s speed, Mellanox offers ConnectX-6 Socket Direct cards that enable 200Gb/s
connectivity also for servers with PCIe Gen 3.0 capability. The adapter's 32-lane PCIe bus is split into
two 16-lane buses, with one bus accessible through a PCIe x16 edge connector and the other bus
through an x16 Auxiliary PCIe Connection card. The two cards should be installed into two PCIe x16
slots and connected using two Cabline CA-II Plus harnesses, as shown in the below figure.
Part Number: MCX654105A-HCAT, MCX654106A-HCAT, MCX654106A-ECAT
Form Factor/Dimensions: Adapter Card: PCIe Half Height, Half Length / 167.65mm x 68.90mm; Auxiliary PCIe Connection Card: 5.09 in. x 2.32 in. (129.30mm x 59.00mm); Two 35cm Cabline CA-II Plus harnesses
Data Transmission Rate:
  MCX654105A-HCAT / MCX654106A-HCAT: Ethernet: 10/25/40/50/100/200 Gb/s; InfiniBand: SDR, DDR, QDR, FDR, EDR, HDR100, HDR
  MCX654106A-ECAT: Ethernet: 10/25/40/50/100 Gb/s; InfiniBand: SDR, DDR, QDR, FDR, EDR, HDR100
Network Connector Type: Single-port QSFP56 (MCX654105A-HCAT); Dual-port QSFP56 (MCX654106A-HCAT, MCX654106A-ECAT)
PCIe x16 through Edge Connector: PCIe Gen 3.0 / 4.0 SERDES @ 8.0GT/s / 16.0GT/s
PCIe x16 through Auxiliary Card: PCIe Gen 3.0 SERDES @ 8.0GT/s
RoHS: RoHS Compliant
Adapter IC Part Number: MT28908A0-XCCF-HVM
ConnectX-6 Single-slot Socket Direct Cards (2x PCIe x8 in a row)
The PCIe x16 interface comprises two PCIe x8 in a row, such that each of the PCIe x8 buses can be
connected to a dedicated CPU in a dual-socket server. In such a configuration, Socket Direct brings
lower latency and lower CPU utilization, as the direct connection from each CPU to the network means
the interconnect can bypass a QPI (UPI) and the other CPU, optimizing performance and improving
latency. CPU utilization is improved as each CPU handles only its own traffic and not traffic from the
other CPU.
A system with a custom PCI Express x16 slot that includes special signals is required for installing the
card. Please refer to PCIe Express Pinouts Description for Single-Slot Socket Direct Card for pinout
definitions.
Part Number: MCX653105A-EFAT, MCX653106A-EFAT
Form Factor/Dimensions: PCIe Half Height, Half Length / 167.65mm x 68.90mm
Data Transmission Rate: Ethernet: 10/25/40/50/100 Gb/s; InfiniBand: SDR, DDR, QDR, FDR, EDR, HDR100
Network Connector Type: Single-port QSFP56 (MCX653105A-EFAT); Dual-port QSFP56 (MCX653106A-EFAT)
PCIe x16 through Edge Connector: PCIe Gen 3.0 / 4.0 SERDES @ 8.0GT/s / 16.0GT/s, Socket Direct 2x8 in a row
RoHS: RoHS Compliant
Adapter IC Part Number: MT28908A0-XCCF-HVM
Package Contents
ConnectX-6 PCIe x8/x16 Adapter Cards
Applies to MCX651105A-EDAT, MCX653105A-ECAT, MCX653106A-ECAT, MCX653105A-HDAT, MCX653106A-HDAT, MCX653105A-EFAT and MCX653106A-EFAT.
Cards: 1x ConnectX-6 adapter card
Accessories: 1x Adapter card short bracket; 1x Adapter card tall bracket (shipped assembled on the card)
ConnectX-6 Socket Direct Cards (2x PCIe x16)
Applies to MCX654105A-HCAT, MCX654106A-HCAT and MCX654106A-ECAT.
Cards: 1x ConnectX-6 adapter card; 1x PCIe Auxiliary Card
Harnesses: 1x 35cm Cabline CA-II Plus harness (white); 1x 35cm Cabline CA-II Plus harness (black)
Accessories: 2x Retention Clips for Cabline harness (optional accessory); 1x Adapter card short bracket; 1x Adapter card tall bracket (shipped assembled on the card); 1x PCIe Auxiliary card short bracket; 1x PCIe Auxiliary card tall bracket (shipped assembled on the card)
Features and Benefits
PCI Express (PCIe): Uses the following PCIe interfaces:
  PCIe x8/x16 configurations: PCIe Gen 3.0 (8GT/s) and Gen 4.0 (16GT/s) through an x8/x16 edge connector.
  2x PCIe x16 configurations: PCIe Gen 3.0/4.0 SERDES @ 8.0/16.0 GT/s through the edge connector; PCIe Gen 3.0 SERDES @ 8.0GT/s through the PCIe Auxiliary Connection Card.
200Gb/s Virtual Protocol Interconnect (VPI) Adapter: ConnectX-6 offers the highest throughput VPI adapter, supporting HDR 200Gb/s InfiniBand and 200Gb/s Ethernet and enabling any standard networking, clustering, or storage to operate seamlessly over any converged network leveraging a consolidated software stack.
InfiniBand Architecture Specification v1.3 compliant: ConnectX-6 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. ConnectX-6 is InfiniBand Architecture Specification v1.3 compliant.
Up to 200 Gigabit Ethernet: Mellanox adapters comply with the following IEEE 802.3 standards:
  200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
  - IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
  - IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
  - IEEE 802.3ba 40 Gigabit Ethernet
  - IEEE 802.3by 25 Gigabit Ethernet
  - IEEE 802.3ae 10 Gigabit Ethernet
  - IEEE 802.3ap based auto-negotiation and KR startup
  - IEEE 802.3ad, 802.1AX Link Aggregation
  - IEEE 802.1Q, 802.1P VLAN tags and priority
  - IEEE 802.1Qau (QCN) Congestion Notification
  - IEEE 802.1Qaz (ETS)
  - IEEE 802.1Qbb (PFC)
  - IEEE 802.1Qbg
  - IEEE 1588v2
  - Jumbo frame support (9.6KB)
InfiniBand HDR100: A standard InfiniBand data rate, where each lane of a 2X port runs a bit rate of 53.125Gb/s with a 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s (see the worked example following this table).
InfiniBand HDR: A standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 53.125Gb/s with a 64b/66b encoding, resulting in an effective bandwidth of 200Gb/s.
Memory Components: SPI Quad - includes a 256Mbit SPI Quad Flash device (MX25L25645GXDI-08G device by Macronix). FRU EEPROM - stores the parameters and personality of the card. The EEPROM capacity is 128Kbit. The FRU I2C address is 0x50 and is accessible through the PCIe SMBus.
Make sure to use a PCIe slot that is capable of supplying the required power and airflow to the ConnectX-6 cards as stated in Specifications.
Overlay Networks: In order to better scale their networks, datacenter operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-6 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
RDMA and RDMA over Converged Ethernet (RoCE): ConnectX-6, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance over InfiniBand and Ethernet networks. Leveraging datacenter bridging (DCB) capabilities as well as ConnectX-6 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
Mellanox PeerDirect: PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-6 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
CPU Offload: Adapter functionality enabling reduced CPU overhead, leaving more CPU available for computation tasks.
  Flexible match-action flow tables
  Open vSwitch (OVS) offload using ASAP2™
  Tunneling encapsulation/decapsulation
Quality of Service (QoS): Support for port-based Quality of Service enabling various application requirements for latency and SLA.
Hardware-based I/O Virtualization: ConnectX-6 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.
Storage Acceleration: A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage:
  RDMA for high-performance storage access
  NVMe over Fabrics offloads for target machine
  Erasure Coding
  T10-DIF Signature Handover
SR-IOV: ConnectX-6 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
High-Performance Accelerations:
  Tag Matching and Rendezvous Offloads
  Adaptive Routing on Reliable Transport
  Burst Buffer Offloads for Background Checkpointing
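As a worked example of the HDR100 and HDR figures above (a sketch of the arithmetic only, using the 64b/66b encoding stated in the table):

\text{HDR100 (2X port):}\quad 2 \times 53.125\,\mathrm{Gb/s} \times \tfrac{64}{66} \approx 103\,\mathrm{Gb/s} \approx 100\,\mathrm{Gb/s}

\text{HDR (4X port):}\quad 4 \times 53.125\,\mathrm{Gb/s} \times \tfrac{64}{66} \approx 206\,\mathrm{Gb/s} \approx 200\,\mathrm{Gb/s}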
Operating Systems/Distributions
RHEL/CentOS
Windows
FreeBSD
VMware
OpenFabrics Enterprise Distribution (OFED)
OpenFabrics Windows Distribution (WinOF-2)
Connectivity
Interoperable with 1/10/25/40/50/100/200 Gb/s InfiniBand/VPI and Ethernet switches
Passive copper cable with ESD protection
Powered connectors for optical and active cable support
Manageability
ConnectX-6 technology maintains support for manageability through a BMC. ConnectX-6 PCIe stand-up
adapter can be connected to a BMC using MCTP over SMBus or MCTP over PCIe protocols as if it is a
standard Mellanox PCIe stand-up adapter. For configuring the adapter for the specific manageability
solution in use by the server, please contact Mellanox Support.
ConnectX-6 Socket Direct cards 2x PCIe x16 (OPNs: MCX654105A-HCAT, MCX654106A-HCAT
and MCX654106A-ECAT) are not supported in Windows and WinOF-2.
Interfaces
InfiniBand Interface
The network ports of the ConnectX®-6 adapter cards are compliant with the
InfiniBand Architecture
Specification, Release 1.3.
InfiniBand traffic is transmitted through the cards' QSFP56 connectors.
Ethernet QSFP56 Interfaces
The network ports of the ConnectX-6 adapter card are compliant with the IEEE 802.3 Ethernet
standards listed in Features and Benefits. Ethernet traffic is transmitted through the QSFP56
connectors on the adapter card.
PCI Express Interface
ConnectX®-6 adapter cards support PCI Express Gen 3.0/4.0 (1.1 and 2.0 compatible) through x8/x16
edge connectors. The device can be either a master initiating the PCI Express bus operations, or a
slave responding to PCI bus operations.
The following lists PCIe interface features:
PCIe Gen 3.0 and 4.0 compliant, 2.0 and 1.1 compatible
2.5, 5.0, 8.0, or 16.0 GT/s link rate x16/x32
Auto-negotiates to x32, x16, x8, x4, x2, or x1
Support for MSI/MSI-X mechanisms
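To confirm which link the card actually negotiated, a minimal sketch such as the following can be used on Linux. The sysfs attributes read here are standard Linux PCI attributes; the bus address 0000:3b:00.0 is a placeholder assumption and should be replaced with the address reported by lspci for your adapter.

# Print the negotiated and maximum PCIe link speed/width for one PCI function.
BDF = "0000:3b:00.0"   # placeholder PCI address - replace with your adapter's address
base = "/sys/bus/pci/devices/" + BDF + "/"
for attr in ("current_link_speed", "max_link_speed",
             "current_link_width", "max_link_width"):
    with open(base + attr) as f:
        print(attr + ":", f.read().strip())

A Gen 4.0 x16 slot should report 16.0 GT/s and a width of 16; a reading of 8.0 GT/s indicates the link trained at Gen 3.0.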
LED Interface
There are two I/O LEDs per port:
LED 1 and 2: Bi-color I/O LED which indicates link status. LED behavior is described below for
Ethernet and InfiniBand port configurations.
LED 3 and 4: Reserved for future use.
The adapter card includes special circuits to protect the card/server from ESD shocks when
plugging copper cables.
LED1 and LED2 Link Status Indications (Physical and Logical) - Ethernet Protocol:
  Off: A link has not been established.
  Blinking amber: 1 Hz blinking amber occurs due to running a beacon command for locating the adapter card; 4 Hz blinking amber indicates a problem with the physical link.
  Solid green: Indicates a valid link with no active traffic.
  Blinking green: Indicates a valid logical link with active traffic.
LED1 and LED2 Link Status Indications (Physical and Logical) - InfiniBand Protocol:
  Off: A physical link has not been established.
  Solid amber: Indicates an active physical link.
  Blinking amber: 1 Hz blinking amber occurs due to running a beacon command for locating the adapter card; 4 Hz blinking amber indicates a problem with the physical link.
  Solid green: Indicates a valid logical (data activity) link with no active traffic.
  Blinking green: Indicates a valid logical link with active traffic.
Heat Sink Interface
The heatsink is attached to the ConnectX-6 IC in order to dissipate the heat from the ConnectX-6 IC. It
is attached either by using four spring-loaded push pins that insert into four mounting holes, or by
screws.
The ConnectX-6 IC has a thermal shutdown safety mechanism that automatically shuts down the
ConnectX-6 card in case of a high-temperature event, improper thermal coupling, or heatsink removal.
For the required airflow (LFM) per OPN, please refer to Specifications.
SMBus Interface
ConnectX-6 technology maintains support for manageability through a BMC. ConnectX-6 PCIe stand-up
adapter can be connected to a BMC using MCTP over SMBus or MCTP over PCIe protocols as if it is a
standard Mellanox PCIe stand-up adapter. For configuring the adapter for the specific manageability
solution in use by the server, please contact Mellanox Support.
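For illustration only, the sketch below shows how the FRU EEPROM noted under Memory Components (I2C address 0x50) could be probed from a Linux entity that has visibility of the card's SMBus segment, typically the BMC rather than the host. The I2C bus number is a placeholder assumption, and the smbus2 package is an assumed third-party dependency; actual platform management uses the MCTP-based protocols described above rather than raw EEPROM reads.

from smbus2 import SMBus   # pip install smbus2 (assumed dependency)

FRU_ADDR = 0x50   # FRU EEPROM I2C address, per this manual
BUS_NUM = 0       # placeholder I2C bus number - platform-specific assumption

with SMBus(BUS_NUM) as bus:
    # Read the first 8 bytes (the IPMI FRU common header) one byte at a time.
    header = [bus.read_byte_data(FRU_ADDR, offset) for offset in range(8)]
print("FRU common header bytes:", [hex(b) for b in header])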
Voltage Regulators
The voltage regulator power is derived from the PCI Express edge connector 12V supply pins. These
voltage supply pins feed on-board regulators that provide the necessary power to the various
components on the card.