HPE IMC Orchestrator 6.3 Underlay Network Configuration Guide (Aruba JL849AAE, JL850AAE, JL851AAE, JL852AAE, JL853AAE)

HPE IMC Orchestrator 6.3 Underlay Network
Configuration Guide
The information in this document is subject to change without notice.
© Copyright 2023 Hewlett Packard Enterprise Development LP
Contents
Overview ························································································1
Restrictions and guidelines ·································································2
Management network configuration ······················································2
Network configuration ··················································································································· 2
Configuration example ·················································································································· 3
Manual underlay network deployment ···················································5
Network plan ······························································································································· 5
Network plan for spine-border integration scenario ······································································ 5
Network plan for spine-border separation scenario ······································································ 7
Deployment workflow ·················································································································· 10
Deployment workflow for spine-border integration scenario ························································· 10
Deployment workflow for spine-border separation scenario ························································· 13
Procedure (spine-border integration scenario) ················································································· 15
Configure Spine Border 1 ······································································································ 15
Configure Spine Border 2 ······································································································ 26
Configure Server Leaf 1 ········································································································ 37
Configure Server Leaf 2 ········································································································ 48
Configure Service Leaf 1 ······································································································ 59
Configure Service Leaf 2 ······································································································ 69
Procedure (spine-border separation scenario) ················································································· 79
Configure Spine 1················································································································ 79
Configure Spine 2················································································································ 86
Configure Server Leaf 1 ········································································································ 92
Configure Server Leaf 2 ······································································································ 103
Configure Service Leaf 1 ···································································································· 114
Configure Service Leaf 2 ···································································································· 124
Configure Border 1 ············································································································ 134
Configure Border 2 ············································································································ 144
Automatic underlay network deployment ··········································· 157
Network plan ··························································································································· 157
Networking diagram and IP address plan ··············································································· 157
IP address pool plan ·········································································································· 160
Deployment workflow ················································································································ 160
Traditional automatic deployment workflow ············································································ 160
Wizard-based automatic deployment workflow ········································································ 161
Traditional automatic deployment procedure ················································································· 161
Configure IMC Orchestrator basic settings ············································································· 162
Configure automatic deployment ·························································································· 163
Configure automatic device deployment ················································································ 173
Verify the configuration ······································································································· 176
Wizard-based automatic deployment procedure ············································································· 176
Configure IMC Orchestrator basic settings ············································································· 177
Configure automatic deployment ·························································································· 177
Configure device information ······························································································· 183
Check device deployment ··································································································· 186
Configure automatic device deployment ················································································ 187
Verify the configuration ······································································································· 190
Scale up the network ········································································································· 191
Configure link expansion ····································································································· 195
Underlay network deployment in four-in-one scenario ·························· 198
Automatic underlay configuration ································································································ 198
Manual underlay configuration ···································································································· 198
Network plan ···················································································································· 198
Deployment workflow ········································································································· 200
Configure four-in-one device 1 ····························································································· 202
Configure four-in-one device 2 ····························································································· 210
Configure Server Leaf 1 ······································································································ 219
Configure Server Leaf 2 ······································································································ 228
Manual underlay network deployment in five-in-one scenario ················· 237
Network plan ··························································································································· 237
Deployment workflow ················································································································ 238
Configure five-in-one device 1 ···································································································· 239
Reboot the device without a startup configuration file ······························································· 239
Configure hardware resource parameters ·············································································· 239
Configure the management network······················································································ 240
Configure basic settings for L2VPN and VXLAN ······································································ 241
Configure the underlay routing protocol ················································································· 241
Configure the VTEP address ······························································································· 241
Configure the overlay BGP ·································································································· 241
Enable the VTEP service ···································································································· 241
Configure the global MAC address ······················································································· 241
Configure the DRNI physical address ···················································································· 242
Configure the DRNI virtual address ······················································································· 242
Configure DR system parameters ························································································· 242
Configure the DRNI IPP aggregate interface ·········································································· 242
Configure DRNI MAD ········································································································· 243
Configure the IPL escape channel ························································································ 243
Configure the IPL bypass channel ························································································ 244
Configure default VXLAN decapsulation ················································································ 244
Configure automatic recovery time of DRNI device after startup ················································· 244
Configure the DR interface connected to the external LACP aggregate link ·································· 244
Configure the DR interface connected to Server 1 LACP aggregate link ······································ 244
Configure the physical interface connected to Server 2 primary and secondary links ······················ 245
Save the configuration ········································································································ 245
Incorporate devices on the controller ····················································································· 246
Configure five-in-one device 2 ···································································································· 248
Reboot the device without a startup configuration file ······························································· 248
Configure hardware resource parameters ·············································································· 249
Configure the management network······················································································ 249
Configure basic settings for L2VPN and VXLAN ······································································ 250
Configure the underlay routing protocol ················································································· 250
Configure the VTEP address ······························································································· 250
Configure the overlay BGP ·································································································· 250
Enable the VTEP service ···································································································· 251
Configure the global MAC address ······················································································· 251
Configure the DRNI physical address ···················································································· 251
Configure the DRNI virtual address ······················································································· 251
Configure DRNI system parameters ······················································································ 251
Configure the DRNI IPP aggregate interface ·········································································· 252
Configure DRNI MAD ········································································································· 252
Configure the IPL escape channel ························································································ 253
Configure the IPL bypass channel ························································································ 253
Configure default VXLAN decapsulation ················································································ 253
Configure automatic recovery time of DRNI device after startup ················································· 253
Configure the DR interface connected to the external LACP aggregate link ·································· 253
Configure the DR interface connected to Server 1 LACP aggregate link ······································ 254
Configure the physical interface connected to Server 2 primary and secondary links ······················ 254
Save the configuration ········································································································ 255
Incorporate devices on the controller ····················································································· 255
(Optional) Harden network security ·················································· 260
Configure packet suppression ···································································································· 260
Configure ARP attack protection ································································································· 260
Configure BPDU guard ············································································································· 260
O&M monitoring ··········································································· 261
Overview
The underlay network of the data center is a physical network that carries overlay services and
contains switches such as spine and leaf devices. To deploy overlay services, incorporate underlay
network devices on the data center controller first. This document describes how to incorporate
underlay network devices on the controller.
There are two methods to incorporate underlay network devices on the controller:
• Manual underlay network deployment—This method requires manual pre-configuration on
the devices. After pre-configuration, the underlay network devices can be incorporated by the
controller.
• Automatic underlay network deployment—This method does not require pre-configuration
of underlay devices. You only need to configure templates on the controller and start the
devices without a startup configuration file. The devices can be automatically incorporated by
the controller.
As a best practice, use DRNI to deploy the underlay network for the IMC Orchestrator solution.
Restrictions and guidelines
• The addressing plans in this document are for reference only. Before you deploy an underlay
network, create an addressing plan for that network.
• For ease of fabric expansion, use a routed Layer 3 management network.
• When you use DRNI to deploy the underlay network, follow these guidelines:
◦ The DR member devices in a DR system must use the same DR system MAC address.
Make sure the DR system MAC address is unique across the whole network.
As a best practice, use the bridge MAC address of one DR member device in the DR
system as the DR system MAC address.
◦ The DR member devices in a DR system must use different DR system numbers.
For example, set the system number to 1 for DR member device 1 and to 2 for DR member
device 2.
◦ The DR system priority for the DR member devices in a DR system must be the same.
• You must enable the allowlist function and configure the device list for automatic device deployment in a DRNI scenario. Only the devices added to the device list can be automatically deployed and incorporated by the controller.
• To avoid DRNI dual-active issues and ensure high availability when both the DRNI keepalive
link and the IPL are down, configure a DRNI MAD DOWN action as follows:
◦ In an aggregation scenario, enable DRNI standalone mode for the DR member devices:
<Sysname> system-view
[Sysname] drni standalone enable
◦ In a primary/secondary scenario, enable DRNI MAD DOWN state persistence for the DR member devices:
member devices:
<Sysname> system-view
[Sysname] drni mad persistent
• After you enable DRNI MAD DOWN state persistence, a DRNI dual-active issue might still
occur if the keepalive link goes down earlier than the IPL does. In this situation, contact
Technical Support.
• NOTE: The default VXLAN ID range for a switch is 0 to 16777215. If you use the l2vpn drni peer-link ac-match-rule vxlan-mapping command on a switch in the DRNI scenario, the IDs of the VXLANs created on the switch cannot exceed 16000000.
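The DR system settings described above can be sketched with the following Comware-style commands (hypothetical values; verify the exact syntax against your device's command reference):
On DR member device 1:
<Sysname> system-view
[Sysname] drni system-mac 0002-0003-0001
[Sysname] drni system-number 1
[Sysname] drni system-priority 123
On DR member device 2, configure the same DR system MAC address and DR system priority, but set the DR system number to 2.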
Management network configuration
Network configuration
In the data center network, a separate switch is usually used to connect devices and the
management network of the IMC Orchestrator controller. Such a switch is called a management
switch. The management switch requires manual configuration and is not incorporated by the IMC
Orchestrator controller.
The management network of the data center can adopt either Layer 2 networking or Layer 3
networking. For Layer 2 networking, the management network of physical devices and the IMC
Orchestrator management network are located in the same network segment. For Layer 3
networking, the management networks are located in different segments.
Layer 2 networking applies to single-fabric networks. The management network of multi-fabric
networks must use the Layer 3 networking mode. For Layer 3 networking, you need to configure
VLANs for the fabrics on the management switch, and manually configure the gateway and DHCP
relay agent commands.
As a best practice to facilitate future fabric expansion, use Layer 3 networking even in a single-fabric network. This section takes the deployment of a management network with Layer 3 networking in a multi-fabric scenario as an example. The typical network topology is shown in the management network diagram below.
Note: To perform automatic underlay network deployment, make sure the IP version of the management network is the same as that of the underlay network (IPv4 or IPv6).
Management network diagram
Configuration example
In a multi-fabric network, the interfaces that connect the management switch to the devices in
different fabrics must belong to different VLANs. As shown in the management network diagram, the interface connected to the
controller management network belongs to VLAN 10, the one connected to Fabric 1 device
management network belongs to VLAN 20, and the one connected to Fabric 2 device management
network belongs to VLAN 30. In addition, you must configure the gateway address of the physical
management network corresponding to the fabric under the VLAN interfaces.
Perform the following tasks on the management switch:
1. Create the VLANs for the controller management network, Fabric 1 device management
network, and Fabric 2 device management network. In this example, the VLAN IDs are 10, 20,
and 30, respectively.
[device] vlan 10
[device-vlan10] quit
[device] vlan 20
[device-vlan20] quit
[device] vlan 30
[device-vlan30] quit
2. Assign the interfaces that connect the management switch to the devices in Fabric 1 to VLAN
20. This section uses the interface Ten-GigabitEthernet 1/0/33 as an example.
(Management network diagram: the management switch acts as the DHCP relay and connects, over a Layer 3 network, the controller management network (VLAN 10), the Fabric 1 device management network (VLAN 20), and the Fabric 2 device management network (VLAN 30). Each fabric contains a spine pair, a border pair, and leaf pairs joined by peer links, with servers attached to the leaf devices.)
[device] interface Ten-GigabitEthernet1/0/33
[device-Ten-GigabitEthernet1/0/33] port link-mode bridge
[device-Ten-GigabitEthernet1/0/33] port access vlan 20
[device-Ten-GigabitEthernet1/0/33] quit
3. Assign the interfaces that connect the management switch to the devices in Fabric 2 to VLAN
30. This section uses the interface Ten-GigabitEthernet1/0/26 as an example.
[device] interface Ten-GigabitEthernet1/0/26
[device-Ten-GigabitEthernet1/0/26] port link-mode bridge
[device-Ten-GigabitEthernet1/0/26] port access vlan 30
[device-Ten-GigabitEthernet1/0/26] quit
4. Configure the VLAN interface of the controller management network.
[device] interface Vlan-interface10
[device-Vlan-interface10] ip address 192.168.10.1 255.255.255.0
[device-Vlan-interface10] ip address 192.168.12.1 255.255.255.0 sub
[device-Vlan-interface10] quit
5. Configure the VLAN interface of the Fabric 1 management network.
[device] interface Vlan-interface20
[device-Vlan-interface20] ip address 192.168.11.1 255.255.255.0
[device-Vlan-interface20] quit
6. Enable DHCP.
Perform this task only when automated deployment is used.
[device] dhcp enable
7. Configure the DHCP relay agent, and specify the controller cluster IP address as the relay
server IP address.
Perform this task only when automated deployment is used.
[device] interface Vlan-interface20
[device-Vlan-interface20] dhcp select relay
[device-Vlan-interface20] dhcp relay server-address 192.168.12.101
[device-Vlan-interface20] quit
8. Configure the VLAN interface of the Fabric 2 management network.
[device] interface Vlan-interface30
[device-Vlan-interface30] ip address 192.168.21.1 255.255.255.0
[device-Vlan-interface30] quit
9. Enable DHCP, if it is not already enabled.
Perform this task only when automated deployment is used.
[device] dhcp enable
10. Configure the DHCP relay agent for the Fabric 2 management network, and specify the
controller cluster IP address as the relay server IP address.
Perform this task only when automated deployment is used.
[device] interface Vlan-interface30
[device-Vlan-interface30] dhcp select relay
[device-Vlan-interface30] dhcp relay server-address 192.168.12.101
[device-Vlan-interface30] quit
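To verify the management switch configuration, you can display the VLAN interface addresses and DHCP relay settings (command availability and output format vary by software version):
[device] display ip interface brief
[device] display dhcp relay server-address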
Manual underlay network deployment
Network plan
Network plan for spine-border integration scenario
In this scenario, the spine and border are integrated as one device, and the two spine-border
devices form a DR system, as shown in the network diagram below.
Network diagram for underlay spine-border integration scenario
(Figure: Spine Border 1 and Spine Border 2 form a DR system connected by a peer link and connect to the Internet through a third-party firewall. Server Leaf 1 and Server Leaf 2 form a DR system that connects Server 1 and Server 2. Service Leaf 1 and Service Leaf 2 form a DR system that connects the controller and related components over primary and backup links.)
IP address and interface description for spine-border integration scenario
Spine Border 1
• IP address plan:
  ◦ Management IP address: 192.168.11.2/24 (gateway: 192.168.11.1)
  ◦ VTEP address: 10.1.1.2/32
  ◦ DRNI virtual address: 10.20.1.2/32
  ◦ DRNI system MAC address: 0002-0003-0001 (or use the bridge MAC of the device)
  ◦ DRNI MAD address: 10.10.1.1/30
  ◦ DRNI IPL escape address: 10.30.1.1/30
• Interfaces:
  ◦ HGE 4/0/1 (connecting to HGE 4/0/1 on Spine Border 2)
  ◦ HGE 4/0/2 (connecting to HGE 4/0/2 on Spine Border 2)
  ◦ XGE 6/0/48 (connecting to XGE 6/0/48 on Spine Border 2)
  ◦ XGE 6/0/5 (connecting to external network device)
  ◦ HGE 1/0/5 (connecting to HGE 1/0/25 on Server Leaf 1)
  ◦ HGE 1/0/6 (connecting to HGE 1/0/25 on Server Leaf 2)
  ◦ HGE 1/0/7 (connecting to HGE 1/0/27 on Service Leaf 1)
  ◦ HGE 1/0/8 (connecting to HGE 1/0/27 on Service Leaf 2)
Spine Border 2
• IP address plan:
  ◦ Management IP address: 192.168.11.3/24 (gateway: 192.168.11.1)
  ◦ VTEP address: 10.1.1.3/32
  ◦ DRNI virtual address: 10.20.1.2/32
  ◦ DRNI system MAC address: 0002-0003-0001 (or use the bridge MAC of the device)
  ◦ DRNI MAD address: 10.10.1.2/30
  ◦ DRNI IPL escape address: 10.30.1.2/30
• Interfaces:
  ◦ HGE 4/0/1 (connecting to HGE 4/0/1 on Spine Border 1)
  ◦ HGE 4/0/2 (connecting to HGE 4/0/2 on Spine Border 1)
  ◦ XGE 6/0/48 (connecting to XGE 6/0/48 on Spine Border 1)
  ◦ XGE 6/0/5 (connecting to external network device)
  ◦ HGE 1/0/5 (connecting to HGE 1/0/27 on Server Leaf 1)
  ◦ HGE 1/0/6 (connecting to HGE 1/0/27 on Server Leaf 2)
  ◦ HGE 1/0/7 (connecting to HGE 1/0/25 on Service Leaf 1)
  ◦ HGE 1/0/8 (connecting to HGE 1/0/25 on Service Leaf 2)
Server Leaf 1
• IP address plan:
  ◦ Management IP address: 192.168.11.4/24 (gateway: 192.168.11.1)
  ◦ VTEP address: 10.1.1.4/32
  ◦ DRNI virtual address: 10.20.1.4/32
  ◦ DRNI system MAC address: 0002-0003-0002 (or use the bridge MAC of the device)
  ◦ DRNI MAD address: 10.10.1.5/30
  ◦ DRNI IPL escape address: 10.30.1.5/30
• Interfaces:
  ◦ XGE 1/0/9 (connecting to XGE 1/0/9 on Server Leaf 2)
  ◦ XGE 1/0/10 (connecting to XGE 1/0/10 on Server Leaf 2)
  ◦ XGE 1/0/11 (connecting to Server 1)
  ◦ XGE 1/0/12 (connecting to Server 2)
  ◦ HGE 1/0/30 (connecting to HGE 1/0/30 on Server Leaf 2)
  ◦ HGE 1/0/25 (connecting to HGE 1/0/5 on Spine Border 1)
  ◦ HGE 1/0/27 (connecting to HGE 1/0/5 on Spine Border 2)
Server Leaf 2
• IP address plan:
  ◦ Management IP address: 192.168.11.5/24 (gateway: 192.168.11.1)
  ◦ VTEP address: 10.1.1.5/32
  ◦ DRNI virtual address: 10.20.1.4/32
  ◦ DRNI system MAC address: 0002-0003-0002 (or use the bridge MAC of the device)
  ◦ DRNI MAD address: 10.10.1.6/30
  ◦ DRNI IPL escape address: 10.30.1.6/30
• Interfaces:
  ◦ XGE 1/0/9 (connecting to XGE 1/0/9 on Server Leaf 1)
  ◦ XGE 1/0/10 (connecting to XGE 1/0/10 on Server Leaf 1)
  ◦ XGE 1/0/11 (connecting to Server 1)
  ◦ XGE 1/0/12 (connecting to Server 2)
  ◦ HGE 1/0/30 (connecting to HGE 1/0/30 on Server Leaf 1)
  ◦ HGE 1/0/25 (connecting to HGE 1/0/6 on Spine Border 1)
  ◦ HGE 1/0/27 (connecting to HGE 1/0/6 on Spine Border 2)
Service Leaf 1
• IP address plan:
  ◦ Management IP address: 192.168.11.6/24 (gateway: 192.168.11.1)
  ◦ VTEP address: 10.1.1.6/32
  ◦ DRNI virtual address: 10.20.1.6/32
  ◦ DRNI system MAC address: 0002-0003-0003 (or use the bridge MAC of the device)
  ◦ DRNI MAD address: 10.10.1.9/30
  ◦ DRNI IPL escape address: 10.30.1.9/30
• Interfaces:
  ◦ XGE 1/0/9 (connecting to XGE 1/0/9 on Service Leaf 2)
  ◦ XGE 1/0/10 (connecting to XGE 1/0/10 on Service Leaf 2)
  ◦ HGE 1/0/30 (connecting to HGE 1/0/30 on Service Leaf 2)
  ◦ HGE 1/0/25 (connecting to HGE 1/0/7 on Spine Border 1)
  ◦ HGE 1/0/27 (connecting to HGE 1/0/7 on Spine Border 2)
Service Leaf 2
• IP address plan:
  ◦ Management IP address: 192.168.11.7/24 (gateway: 192.168.11.1)
  ◦ VTEP address: 10.1.1.7/32
  ◦ DRNI virtual address: 10.20.1.6/32
  ◦ DRNI system MAC address: 0002-0003-0003 (or use the bridge MAC of the device)
  ◦ DRNI MAD address: 10.10.1.10/30
  ◦ DRNI IPL escape address: 10.30.1.10/30
• Interfaces:
  ◦ XGE 1/0/9 (connecting to XGE 1/0/9 on Service Leaf 1)
  ◦ XGE 1/0/10 (connecting to XGE 1/0/10 on Service Leaf 1)
  ◦ HGE 1/0/30 (connecting to HGE 1/0/30 on Service Leaf 1)
  ◦ HGE 1/0/25 (connecting to HGE 1/0/8 on Spine Border 1)
  ◦ HGE 1/0/27 (connecting to HGE 1/0/8 on Spine Border 2)
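Before deployment, the addressing plan above can be sanity-checked with a short script. The following Python sketch is illustrative only (the data is transcribed from the plan above); it verifies that management and VTEP addresses are unique and that both members of each DR system share the same DRNI virtual address:

```python
from ipaddress import ip_interface

# Addressing plan from the spine-border integration scenario above.
plan = {
    "Spine Border 1": {"mgmt": "192.168.11.2/24", "vtep": "10.1.1.2/32", "drni_virtual": "10.20.1.2/32"},
    "Spine Border 2": {"mgmt": "192.168.11.3/24", "vtep": "10.1.1.3/32", "drni_virtual": "10.20.1.2/32"},
    "Server Leaf 1":  {"mgmt": "192.168.11.4/24", "vtep": "10.1.1.4/32", "drni_virtual": "10.20.1.4/32"},
    "Server Leaf 2":  {"mgmt": "192.168.11.5/24", "vtep": "10.1.1.5/32", "drni_virtual": "10.20.1.4/32"},
    "Service Leaf 1": {"mgmt": "192.168.11.6/24", "vtep": "10.1.1.6/32", "drni_virtual": "10.20.1.6/32"},
    "Service Leaf 2": {"mgmt": "192.168.11.7/24", "vtep": "10.1.1.7/32", "drni_virtual": "10.20.1.6/32"},
}

def check_plan(plan):
    # Management and VTEP addresses must be unique per device.
    mgmt = [p["mgmt"] for p in plan.values()]
    vteps = [p["vtep"] for p in plan.values()]
    assert len(set(mgmt)) == len(mgmt), "duplicate management IP"
    assert len(set(vteps)) == len(vteps), "duplicate VTEP address"
    # Both members of a DR system share one DRNI virtual address.
    pairs = [("Spine Border 1", "Spine Border 2"),
             ("Server Leaf 1", "Server Leaf 2"),
             ("Service Leaf 1", "Service Leaf 2")]
    for a, b in pairs:
        assert plan[a]["drni_virtual"] == plan[b]["drni_virtual"], (a, b)
    # All management addresses sit in the fabric management subnet.
    mgmt_net = ip_interface("192.168.11.1/24").network
    assert all(ip_interface(m).ip in mgmt_net for m in mgmt), "management IP outside subnet"
    return True
```

Running `check_plan(plan)` on the values above completes without raising, confirming the plan is internally consistent.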
Network plan for spine-border separation scenario
Network diagram for underlay spine-border separation scenario
(Figure: Border 1 and Border 2 form a DR system connected by a peer link and connect to the Internet through a third-party firewall. Spine 1 and Spine 2 connect the borders to Server Leaf 1 and Server Leaf 2 (which connect Server 1 and Server 2) and to Service Leaf 1 and Service Leaf 2 (which connect the controller and related components over primary and backup links).)
IP address and interface description for spine-border separation scenario
Border 1
• Interfaces:
  ◦ HGE 4/0/1 (connecting to HGE 4/0/1 on Border 2)
  ◦ HGE 4/0/2 (connecting to HGE 4/0/2 on Border 2)
  ◦ XGE 6/0/48 (connecting to XGE 6/0/48 on Border 2)
  ◦ HGE 4/0/3 (connecting to HGE 1/0/3 on Spine 1)
  ◦ HGE 4/0/4 (connecting to HGE 1/0/3 on Spine 2)
  ◦ XGE 6/0/5 (connecting to external network device)
Border 2
• Interfaces:
  ◦ HGE 4/0/1 (connecting to HGE 4/0/1 on Border 1)
  ◦ HGE 4/0/2 (connecting to HGE 4/0/2 on Border 1)
  ◦ XGE 6/0/48 (connecting to XGE 6/0/48 on Border 1)
  ◦ HGE 4/0/3 (connecting to HGE 1/0/4 on Spine 1)
  ◦ HGE 4/0/4 (connecting to HGE 1/0/4 on Spine 2)
  ◦ XGE 6/0/5 (connecting to external network device)
Spine 1
• Interfaces:
  ◦ HGE 1/0/3 (connecting to HGE 4/0/3 on Border 1)
  ◦ HGE 1/0/4 (connecting to HGE 4/0/3 on Border 2)
  ◦ HGE 1/0/5 (connecting to HGE 1/0/25 on Leaf 1)
  ◦ HGE 1/0/6 (connecting to HGE 1/0/25 on Leaf 2)
  ◦ HGE 1/0/7 (connecting to HGE 1/0/27 on Leaf 3)
  ◦ HGE 1/0/8 (connecting to HGE 1/0/27 on Leaf 4)
Spine 2
• Interfaces:
  ◦ HGE 1/0/3 (connecting to HGE 4/0/4 on Border 1)
  ◦ HGE 1/0/4 (connecting to HGE 4/0/4 on Border 2)
  ◦ HGE 1/0/5 (connecting to HGE 1/0/27 on Leaf 1)
  ◦ HGE 1/0/6 (connecting to HGE 1/0/27 on Leaf 2)
  ◦ HGE 1/0/7 (connecting to HGE 1/0/25 on Leaf 3)
  ◦ HGE 1/0/8 (connecting to HGE 1/0/25 on Leaf 4)
Server Leaf 1
• Interfaces:
  ◦ XGE 1/0/9 (connecting to XGE 1/0/9 on Server Leaf 2)
  ◦ XGE 1/0/10 (connecting to XGE 1/0/10 on Server Leaf 2)
  ◦ XGE 1/0/11 (connecting to Server 1)
  ◦ XGE 1/0/12 (connecting to Server 2)
  ◦ HGE 1/0/30 (connecting to HGE 1/0/30 on Server Leaf 2)
  ◦ HGE 1/0/25 (connecting to HGE 1/0/5 on Spine 1)
  ◦ HGE 1/0/27 (connecting to HGE 1/0/5 on Spine 2)
Server Leaf 2
• Interfaces:
  ◦ XGE 1/0/9 (connecting to XGE 1/0/9 on Server Leaf 1)
  ◦ XGE 1/0/10 (connecting to XGE 1/0/10 on Server Leaf 1)
  ◦ XGE 1/0/11 (connecting to Server 1)
  ◦ XGE 1/0/12 (connecting to Server 2)
  ◦ HGE 1/0/30 (connecting to HGE 1/0/30 on Server Leaf 1)
  ◦ HGE 1/0/25 (connecting to HGE 1/0/6 on Spine 1)
  ◦ HGE 1/0/27 (connecting to HGE 1/0/6 on Spine 2)
Service Leaf 1
• Interfaces:
  ◦ XGE 1/0/9 (connecting to XGE 1/0/9 on Service Leaf 2)
  ◦ XGE 1/0/10 (connecting to XGE 1/0/10 on Service Leaf 2)
  ◦ HGE 1/0/30 (connecting to HGE 1/0/30 on Service Leaf 2)
  ◦ HGE 1/0/25 (connecting to HGE 1/0/7 on Spine 1)
  ◦ HGE 1/0/27 (connecting to HGE 1/0/7 on Spine 2)
Service Leaf 2
• Interfaces:
  ◦ XGE 1/0/9 (connecting to XGE 1/0/9 on Service Leaf 1)
  ◦ XGE 1/0/10 (connecting to XGE 1/0/10 on Service Leaf 1)
  ◦ HGE 1/0/30 (connecting to HGE 1/0/30 on Service Leaf 1)
  ◦ HGE 1/0/25 (connecting to HGE 1/0/8 on Spine 1)
  ◦ HGE 1/0/27 (connecting to HGE 1/0/8 on Spine 2)
Deployment workflow
Deployment workflow for spine-border integration scenario
Spine border deployment workflow

The spine border workflow runs from device bring-up through controller incorporation. (The original figure marks some processes and subprocesses as optional; the rest are required.)

Underlay basic settings:
1. Reboot the device without a startup configuration file.
2. Configure hardware resource parameters.
3. Configure the management network.
4. Configure basic settings for L2VPN and VXLAN.
5. Configure interfaces connected to the leaf.
6. Configure the underlay routing protocol.
7. Configure the VTEP address.
8. Enable the OVSDB VTEP service.
9. Configure the overlay BGP.
10. Configure the global MAC address.

M-LAG configuration:
1. Configure the M-LAG physical address.
2. Configure the M-LAG virtual address.
3. Configure M-LAG system parameters.
4. Configure the M-LAG peer link aggregate interface.
5. Configure M-LAG MAD.
6. Configure the peer link failover channel.
7. Configure the peer link bypass channel.
8. Configure VXLAN default decapsulation.

M-LAG interface configuration:
1. Configure the M-LAG interface connected to LB.
2. Configure the M-LAG interface connected to FW.
3. Configure the M-LAG interface connected to the external LACP aggregate link.

After these phases are complete, the spine border devices are incorporated by the controller.

Server leaf deployment workflow

The server leaf workflow uses the same underlay basic settings and M-LAG configuration phases as the spine border, except that the underlay phase configures the interfaces connected to the spine instead of the leaf. The M-LAG interface configuration phase differs:
1. Configure the M-LAG interface connected to the Server 1 LACP aggregate link.
2. Configure the physical interface connected to the Server 2 primary and secondary links.

After these phases are complete, the server leaf devices are incorporated by the controller.

Service leaf deployment workflow

The service leaf workflow also uses the same underlay basic settings (with interfaces connected to the spine) and M-LAG configuration phases. The M-LAG interface configuration phase differs:
1. Configure the M-LAG interface connected to the FW device 3 LACP aggregate link.
2. Configure the M-LAG interface connected to the FW device 4 LACP aggregate link.

After these phases are complete, the service leaf devices are incorporated by the controller.
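For scripted pre-deployment checklists, the phases of a workflow can be encoded as ordered data. The sketch below uses the spine border workflow; the step names come from the figure, but the data structure and helper function are illustrative assumptions, not an HPE tool.

```python
# Illustrative encoding of the spine border deployment workflow as
# ordered phases (dicts preserve insertion order in Python 3.7+).
SPINE_BORDER_WORKFLOW = {
    "Underlay basic settings": [
        "Reboot the device without a startup configuration file",
        "Configure hardware resource parameters",
        "Configure the management network",
        "Configure basic settings for L2VPN and VXLAN",
        "Configure interfaces connected to the leaf",
        "Configure the underlay routing protocol",
        "Configure the VTEP address",
        "Enable the OVSDB VTEP service",
        "Configure the overlay BGP",
        "Configure the global MAC address",
    ],
    "M-LAG configuration": [
        "Configure the M-LAG physical address",
        "Configure the M-LAG virtual address",
        "Configure M-LAG system parameters",
        "Configure the M-LAG peer link aggregate interface",
        "Configure M-LAG MAD",
        "Configure the peer link failover channel",
        "Configure the peer link bypass channel",
        "Configure VXLAN default decapsulation",
    ],
    "M-LAG interface configuration": [
        "Configure the M-LAG interface connected to LB",
        "Configure the M-LAG interface connected to FW",
        "Configure the M-LAG interface connected to the external LACP aggregate link",
    ],
}

def checklist(workflow: dict[str, list[str]]) -> list[str]:
    """Flatten the phases into one ordered checklist of steps."""
    return [step for steps in workflow.values() for step in steps]
```

The same structure could hold the server leaf and service leaf workflows by swapping the M-LAG interface configuration steps.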
Deployment workflow for spine-border separation scenario
Spine deployment workflow

In the spine-border separation scenario, the spine runs no M-LAG. Its workflow contains only the underlay basic settings:
1. Reboot the device without a startup configuration file.
2. Configure hardware resource parameters.
3. Configure the management network.
4. Configure basic settings for L2VPN and VXLAN.
5. Configure interfaces connected to the leaf.
6. Configure the underlay routing protocol.
7. Configure the VTEP address.
8. Enable the OVSDB VTEP service.
9. Configure the overlay BGP.

After these settings are complete, the spine devices are incorporated by the controller.

Server leaf deployment workflow

The server leaf workflow adds the global MAC address step to the underlay basic settings, configures the interfaces connected to the spine (instead of the leaf), and then runs the M-LAG phases:

M-LAG configuration:
1. Configure the M-LAG physical address.
2. Configure the M-LAG virtual address.
3. Configure M-LAG system parameters.
4. Configure the M-LAG peer link aggregate interface.
5. Configure M-LAG MAD.
6. Configure the peer link failover channel.
7. Configure the peer link bypass channel.
8. Configure VXLAN default decapsulation.

M-LAG interface configuration:
1. Configure the M-LAG interface connected to the Server 1 LACP aggregate link.
2. Configure the physical interface connected to the Server 2 primary and secondary links.

After these phases are complete, the server leaf devices are incorporated by the controller.

Service leaf deployment workflow

The service leaf workflow matches the server leaf workflow except for the M-LAG interface configuration phase:
1. Configure the M-LAG interface connected to the FW device 3 LACP aggregate link.
2. Configure the M-LAG interface connected to the FW device 4 LACP aggregate link.

After these phases are complete, the service leaf devices are incorporated by the controller.
Border deployment workflow

The border workflow mirrors the spine border workflow of the integration scenario: the underlay basic settings (with interfaces connected to the spine and the global MAC address), the M-LAG configuration phase, and the M-LAG interface configuration phase (interfaces connected to LB, to FW, and to the external LACP aggregate link), followed by incorporation by the controller.
Procedure (spine-border integration scenario)
Note: As a best practice, use OSPF or IS-IS as the underlay routing protocol. If you need to configure EBGP, contact Technical Support.
Configure Spine Border 1
Reboot the device without a startup configuration file
<spine-border1> reset saved-configuration
The saved configuration file will be erased. Are you sure? [Y/N]: y
Configuration file in flash: is being cleared.
Please wait…
Mainboard:
Configuration file is cleared.
<spine-border1> reboot force
Configure hardware resource parameters
CAUTION:
After configuring the hardware resource parameters, reboot the device to make the configuration
take effect.
The hardware resource configuration commands vary by switch. Details are as follows:
12900E Type X Fabric Module:
[spine-border1] hardware-resource tcam normal
[spine-border1] hardware-resource routing-mode ipv6-128
[spine-border1] hardware-resource vxlan l3gw
5944/5945:
[spine-border1] hardware-resource switch-mode DUAL-STACK
[spine-border1] hardware-resource routing-mode ipv6-128
[spine-border1] hardware-resource vxlan l3gw
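When preparing devices in bulk, the per-model command sets above can be kept in a small lookup table. This is a sketch: the model names and command strings come from this guide, but the table and helper function are illustrative assumptions, not an HPE tool.

```python
# Hypothetical lookup of hardware-resource commands by switch family.
HARDWARE_RESOURCE_COMMANDS = {
    "12900E Type X Fabric Module": [
        "hardware-resource tcam normal",
        "hardware-resource routing-mode ipv6-128",
        "hardware-resource vxlan l3gw",
    ],
    "5944/5945": [
        "hardware-resource switch-mode DUAL-STACK",
        "hardware-resource routing-mode ipv6-128",
        "hardware-resource vxlan l3gw",
    ],
}

def commands_for(model: str) -> list[str]:
    """Return the hardware-resource commands for a switch family."""
    try:
        return HARDWARE_RESOURCE_COMMANDS[model]
    except KeyError:
        raise ValueError(f"no hardware-resource profile for {model!r}")
```

Remember that the device must still be rebooted after these commands take effect, as noted in the caution above.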
Configure the management network
1. Configure the management VPN. In this example, the VPN instance name is mgmt.
[spine-border1] ip vpn-instance mgmt
2. Configure the default route of the management network. The next hop of the default route is
the gateway IP address on the management switch.
[spine-border1] ip route-static vpn-instance mgmt 0.0.0.0 0 192.168.11.1
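Before deploying, it can help to confirm that the planned next hop actually sits on the management port's subnet. A minimal sketch using Python's ipaddress module, with the addresses from this example (the helper function is illustrative, not part of the device configuration):

```python
# Plan sanity check: the default route's next hop (the gateway on the
# management switch) should belong to the management port's subnet.
from ipaddress import ip_address, ip_network

def next_hop_on_subnet(next_hop: str, subnet: str) -> bool:
    """Return True if the next-hop address belongs to the subnet."""
    return ip_address(next_hop) in ip_network(subnet)
```

For this example, the next hop 192.168.11.1 lies on the management subnet 192.168.11.0/24 used by interface M-GigabitEthernet0/0/0.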
3. Configure the management port.
[spine-border1] interface M-GigabitEthernet0/0/0
[spine-border1-M-GigabitEthernet0/0/0] ip binding vpn-instance mgmt
[spine-border1-M-GigabitEthernet0/0/0] ip address 192.168.11.2 255.255.255.0
[spine-border1-M-GigabitEthernet0/0/0] quit
4. Configure a management user. In this example, the username is admin and the password is
Qwert@1234. The password must contain at least two of the following character types: digits,
upper-case letters, lower-case letters, and special characters.
[spine-border1] local-user admin class manage
[spine-border1-luser-manage-admin] password simple Qwert@1234
[spine-border1-luser-manage-admin] service-type https ssh
[spine-border1-luser-manage-admin] authorization-attribute user-role network-admin
[spine-border1-luser-manage-admin] authorization-attribute user-role network-operator
[spine-border1-luser-manage-admin] quit
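The character-type rule stated above can be checked before a password is pushed to the device. A sketch of such a check (an illustrative helper, not an HPE tool; length and other device-side rules are not covered here):

```python
def meets_password_policy(password: str) -> bool:
    """Check the rule from this guide: the password must contain at
    least two of these character types: digits, upper-case letters,
    lower-case letters, and special characters."""
    classes = [
        any(c.isdigit() for c in password),   # digits
        any(c.isupper() for c in password),   # upper-case letters
        any(c.islower() for c in password),   # lower-case letters
        any(not c.isalnum() for c in password),  # special characters
    ]
    return sum(classes) >= 2
```

The example password Qwert@1234 satisfies the rule (it contains all four character types).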
5. Configure VTY.
[spine-border1] line vty 0 63
[spine-border1-line-vty0-63] authentication-mode scheme
[spine-border1-line-vty0-63] user-role network-admin
[spine-border1-line-vty0-63] user-role network-operator
[spine-border1-line-vty0-63] quit
6. Configure NETCONF.
[spine-border1] netconf soap https enable
[spine-border1] netconf ssh server enable
7. Enable SSH.
[spine-border1] ssh server enable
8. Configure NTP. This step is required only if the network has an NTP server. This example uses NTP server IP 192.168.10.101.
[spine-border1] ntp-service enable
[spine-border1] ntp-service unicast-server 192.168.10.101 vpn-instance mgmt
9. Configure SNMP.
[spine-border1] snmp-agent
[spine-border1] snmp-agent community write private
[spine-border1] snmp-agent community read public