JD247A

Aruba JD247A Configuration Guide

Contents

Example: Deploying DRNI and centralized EVPN gateway
    Network configuration
    Overlay connectivity models
    Applicable product matrix
    Configuring HPE FlexFabric 5940 switches as leaf nodes
        Procedure summary
        Configuring resource modes
        Configuring OSPF
        Configuring the links towards the spine tier
        Configuring L2VPN
        Configuring DRNI
        Configuring the links towards the bare metal servers
        Configuring the spanning tree feature
        Configuring an underlay BGP instance
        Configuring an EVPN BGP instance
        Configuring VSIs and ACs
    Configuring HPE FlexFabric 5945 switches as border nodes
        Procedure summary
        Configuring basic settings
        Configuring OSPF
        Configuring the spanning tree feature
        Configuring the interfaces connected to the spine nodes
        Configuring L2VPN
        Configuring DRNI
        Configuring the routed interfaces connected to the external network
        Configuring an underlay BGP instance
        Configuring an EVPN BGP instance
        Configuring the overlay network
    Configuring HPE FlexFabric 12900E switches as spine nodes
        Procedure summary
        Configuring OSPF
        Configuring the downlinks towards the leaf tier
        Configuring the uplinks towards the border nodes
        Configuring an underlay BGP instance
        Configuring an EVPN BGP instance
    Traffic model
        About the traffic model
        Overlay traffic
    Convergence performance test results
        Failure test results
    Verifying the configuration
        Verification commands
        Procedure
    Upgrading the devices
        Upgrading a leaf device
        Upgrading a spine device
        Upgrading a border device
    Expanding the network
        Adding a leaf device
    Replacing hardware
        Replacing an interface module
        Replacing a switching fabric module
Example: Deploying DRNI and centralized EVPN gateway
Network configuration
As shown in Figure 1, deploy two border devices as a DR system and use it as a centralized
gateway. The DR system provides intra-DC connectivity, external connectivity, and DCI.
The network configuration is as follows:
• Use DRNI to build the two border devices into a DR system. Configure the DR system as a centralized EVPN gateway to provide Layer 3 forwarding for VXLANs and as a border node to provide connectivity to the external network.
• Deploy two HPE FlexFabric 12900E switches at the spine tier. Configure them as route reflectors (RRs) to reflect BGP EVPN routes among the border and leaf devices.
• Use DRNI to deploy two pairs of top-of-rack (ToR) access switches as DR systems at the leaf tier. They provide EVPN access services to connect servers (for example, bare metal servers) to their overlay networks.
Figure 1 Network diagram
[Figure: A two-tier fabric. Spine 1 and Spine 2 form the spine tier. Leaf 1 and Leaf 2 form one leaf DR system and Leaf 3 and Leaf 4 form another, each pair connected by an IPL and a keepalive link. Border 1 and Border 2 form a border DR system connected to an external L3 switch. Servers A, B, and C (bare metal) attach to the leaf DR systems. The interface and address assignments are listed in the following table.]

Device | Interface | IP address | Remarks

Leaf 1:
XGE 1/0/7 | N/A | Member port of a DR interface, on which Ethernet service instances are configured to act as attachment circuits (ACs). Connected to Server A (bare metal).
XGE 1/0/8 | N/A | Member port of a DR interface, on which Ethernet service instances are configured to act as ACs. Connected to Server C (bare metal).
HGE 1/0/49 | N/A | Member port of the IPP for IPL establishment between DR member devices. Connected to HGE 2/0/49 on Leaf 2.
HGE 1/0/50 | N/A | Member port of the IPP. Connected to HGE 2/0/50 on Leaf 2.
XGE 1/0/47 | 1.0.0.1/30 | Keepalive link between DR member devices. Connected to XGE 2/0/47 on Leaf 2.
HGE 1/0/53 | N/A | IP address borrowed from Loopback 0. Connected to HGE 1/0/25 on Spine 1.
XGE 1/0/17 | N/A | IP address borrowed from Loopback 0. Connected to XGE 1/0/1 on Spine 2.
Loopback 0 | 10.182.224.111/32 | VTEP IP address for establishing BGP EVPN peering.
Loopback 1 | 10.182.226.111/32 | Virtual VTEP address for the DR system to establish VXLAN tunnels to remote devices.
Vlan-interface 1999 | 192.168.220.1/30 | IP address for establishing Layer 3 connectivity with the peer DR member device.

Leaf 2:
XGE 2/0/7 | N/A | Member port of a DR interface, on which Ethernet service instances are configured to act as ACs. Connected to Server A (bare metal).
XGE 2/0/8 | N/A | Member port of a DR interface, on which Ethernet service instances are configured to act as ACs. Connected to Server C (bare metal).
HGE 2/0/49 | N/A | Member port of the IPP for IPL establishment between DR member devices. Connected to HGE 1/0/49 on Leaf 1.
HGE 2/0/50 | N/A | Member port of the IPP. Connected to HGE 1/0/50 on Leaf 1.
XGE 2/0/47 | 1.0.0.2/30 | Keepalive link between DR member devices. Connected to XGE 1/0/47 on Leaf 1.
HGE 2/0/54 | N/A | IP address borrowed from Loopback 0. Connected to HGE 1/0/28 on Spine 1.
XGE 2/0/17 | N/A | IP address borrowed from Loopback 0. Connected to XGE 1/0/2 on Spine 2.
Loopback 0 | 10.182.224.246/32 | VTEP IP address for establishing BGP EVPN peering.
Loopback 1 | 10.182.226.111/32 | Virtual VTEP address for the DR system to establish VXLAN tunnels to remote devices.
Vlan-interface 1999 | 192.168.220.2/30 | IP address for establishing Layer 3 connectivity with the peer DR member device.

Leaf 3:
XGE 1/0/7 | N/A | Member port of a DR interface, on which Ethernet service instances are configured to act as ACs. Connected to Server B (bare metal).
XGE 1/0/21 | N/A | Member port of the IPP for IPL establishment between DR member devices. Connected to XGE 2/0/21 on Leaf 4.
XGE 1/0/22 | N/A | Member port of the IPP for IPL establishment between DR member devices. Connected to XGE 2/0/22 on Leaf 4.
XGE 1/0/17 | 1.1.0.1/30 | Keepalive link between DR member devices. Connected to XGE 2/0/17 on Leaf 4.
XGE 1/0/3 | N/A | IP address borrowed from Loopback 0. Connected to XGE 2/0/5 on Spine 1.
XGE 1/0/1 | N/A | IP address borrowed from Loopback 0. Connected to XGE 1/0/3 on Spine 2.
Loopback 0 | 10.182.224.121/32 | VTEP IP address for establishing BGP EVPN peering.
Loopback 1 | 10.182.226.121/32 | Virtual VTEP address for the DR system to establish VXLAN tunnels to remote devices.
Vlan-interface 1999 | 192.168.220.9/30 | IP address for establishing Layer 3 connectivity with the peer DR member device.

Leaf 4:
XGE 2/0/7 | N/A | Member port of a DR interface, on which Ethernet service instances are configured to act as ACs. Connected to Server B (bare metal).
XGE 2/0/21 | N/A | Member port of the IPP for IPL establishment between DR member devices. Connected to XGE 1/0/21 on Leaf 3.
XGE 2/0/22 | N/A | Member port of the IPP for IPL establishment between DR member devices. Connected to XGE 1/0/22 on Leaf 3.
XGE 2/0/17 | 1.1.0.2/30 | Keepalive link between DR member devices. Connected to XGE 1/0/17 on Leaf 3.
XGE 2/0/3 | N/A | IP address borrowed from Loopback 0. Connected to XGE 2/0/7 on Spine 1.
XGE 2/0/1 | N/A | IP address borrowed from Loopback 0. Connected to XGE 1/0/4 on Spine 2.
Loopback 0 | 10.182.224.122/32 | VTEP IP address for establishing BGP EVPN peering.
Loopback 1 | 10.182.226.121/32 | Virtual VTEP address for the DR system to establish VXLAN tunnels to remote devices.
Vlan-interface 1999 | 192.168.220.10/30 | IP address for establishing Layer 3 connectivity with the peer DR member device.

Spine 1:
HGE 1/0/25 | N/A | IP address borrowed from Loopback 0. Connected to HGE 1/0/53 on Leaf 1.
HGE 1/0/28 | N/A | IP address borrowed from Loopback 0. Connected to HGE 2/0/54 on Leaf 2.
XGE 2/0/5 | N/A | IP address borrowed from Loopback 0. Connected to XGE 1/0/3 on Leaf 3.
XGE 2/0/7 | N/A | IP address borrowed from Loopback 0. Connected to XGE 2/0/3 on Leaf 4.
XGE 2/0/1 | 10.182.221.0/31 | Connected to WGE 1/0/53 on Border 1.
XGE 2/0/2 | 10.182.221.10/31 | Connected to WGE 1/0/53 on Border 2.
Loopback 0 | 10.182.224.90/32 | IP address for underlay routing.
Loopback 1 | 10.182.226.90/32 | IP address for overlay routing.

Spine 2:
XGE 1/0/1 | N/A | IP address borrowed from Loopback 0. Connected to XGE 1/0/17 on Leaf 1.
XGE 1/0/2 | N/A | IP address borrowed from Loopback 0. Connected to XGE 2/0/17 on Leaf 2.
XGE 1/0/3 | N/A | IP address borrowed from Loopback 0. Connected to XGE 1/0/1 on Leaf 3.
XGE 1/0/4 | N/A | IP address borrowed from Loopback 0. Connected to XGE 2/0/1 on Leaf 4.
XGE 1/0/21 | 10.182.221.4/31 | Connected to WGE 1/0/55 on Border 1.
XGE 1/0/22 | 10.182.221.14/31 | Connected to WGE 1/0/55 on Border 2.
Loopback 0 | 10.182.224.89/32 | IP address for underlay routing.
Loopback 1 | 10.182.226.89/32 | IP address for overlay routing.

Border 1:
WGE 1/0/53 | 10.182.221.1/31 | Connected to XGE 2/0/1 on Spine 1.
WGE 1/0/55 | 10.182.221.5/31 | Connected to XGE 1/0/21 on Spine 2.
HGE 1/0/25 | N/A | Member port of the IPP for IPL establishment between DR member devices. Connected to HGE 1/0/25 on Border 2.
HGE 1/0/26 | N/A | Member port of the IPP for IPL establishment between DR member devices. Connected to HGE 1/0/26 on Border 2.
WGE 1/0/1 | 2.0.0.1/31 | Keepalive link between DR member devices. Connected to WGE 1/0/1 on Border 2.
WGE 1/0/33 | 192.101.1.1/31 | Connected to the L3 switch.
Loopback 0 | 10.182.234.1/32 | IP address for the device to establish IGP and BGP peering as an edge device (ED).
Loopback 1 | 10.182.236.1/32 | Virtual IP address for the DR system to establish IGP and BGP peering as an ED.
Vlan-interface 1001 | 192.101.1.101/31 | IP address for establishing Layer 3 connectivity with the peer DR member device.

Border 2:
WGE 1/0/53 | 10.182.221.11/31 | Connected to XGE 2/0/2 on Spine 1.
WGE 1/0/55 | 10.182.221.15/31 | Connected to XGE 1/0/22 on Spine 2.
HGE 1/0/25 | N/A | Member port of the IPP for IPL establishment between DR member devices. Connected to HGE 1/0/25 on Border 1.
HGE 1/0/26 | N/A | Member port of the IPP for IPL establishment between DR member devices. Connected to HGE 1/0/26 on Border 1.
WGE 1/0/1 | 2.0.0.2/31 | Keepalive link between DR member devices. Connected to WGE 1/0/1 on Border 1.
WGE 1/0/33 | 192.101.1.3/31 | Connected to the L3 switch.
Loopback 0 | 10.182.234.2/32 | IP address for the device to establish IGP and BGP peering as an ED.
Loopback 1 | 10.182.236.1/32 | Virtual IP address for the DR system to establish IGP and BGP peering as an ED.
Vlan-interface 1001 | 192.101.1.100/31 | IP address for establishing Layer 3 connectivity with the peer DR member device.
Overlay connectivity models
The following types of connectivity exist between bare metal servers and between a bare metal server and the external network:
• Layer 2 connectivity between bare metal servers attached to the same DR system at the leaf tier.
• Layer 3 connectivity between bare metal servers attached to the same DR system at the leaf tier.
• Layer 2 connectivity between bare metal servers attached to different DR systems at the leaf tier.
• Layer 3 connectivity between bare metal servers attached to different DR systems at the leaf tier.
• Layer 3 connectivity between bare metal servers and the external network.
Applicable product matrix
IMPORTANT:
In addition to running an applicable software version, you must also install the most recent patch, if any.

Role | Product | Software version
Spine | HPE FlexFabric 12900E Switch Series | (Type K) R5210; (Type X) R7624P08
Leaf or border | HPE FlexFabric 5940 & 5710 Switch Series (this example uses 5940 switches as leaf nodes) | R6710
Leaf or border | HPE FlexFabric 5945 Switch Series (this example uses 5945 switches as border nodes) | R6710
SDN controller | Contact Hewlett Packard Enterprise Support for version compatibility.
Configuring HPE FlexFabric 5940 switches as leaf nodes
This example describes the procedure to deploy nodes Leaf 1 and Leaf 2. The same procedure
applies to nodes Leaf 3 and Leaf 4.
Procedure summary
Configuring resource modes
Configuring OSPF
Configuring the links towards the spine tier
Configuring L2VPN
Configuring DRNI
Configuring the links towards the bare metal servers
Configuring the spanning tree feature
Configuring an underlay BGP instance
Configuring an EVPN BGP instance
Configuring VSIs and ACs
Configuring resource modes
Leaf 1 and Leaf 2: hardware-resource routing-mode IPv6-128
Description: Enable support for IPv6 routes with prefixes longer than 64 bits.
Remarks: Reboot the device for this setting to take effect. The HPE FlexFabric 12900E (Type K) switches do not support this command.

Leaf 1 and Leaf 2: hardware-resource vxlan l2gw
Description: Set the VXLAN hardware resource mode to Layer 2 gateway mode.
Remarks: Reboot the device for this setting to take effect. The HPE FlexFabric 12900E (Type K) switches do not support this command.
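For convenience, the commands in this step can be entered as one block on each leaf device. The following is a sketch assembled from the table above; both settings require a reboot to take effect:

```
hardware-resource routing-mode IPv6-128
hardware-resource vxlan l2gw
```

Enter the same commands on Leaf 2, and then reboot both devices.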
Configuring OSPF
Leaf 1: ospf 1 router-id 10.182.224.111
Leaf 2: ospf 1 router-id 10.182.224.246
Description: Enable an OSPF process and enter its view.

Leaf 1 and Leaf 2: spf-schedule-interval 1 10 10
Description: Set the maximum OSPF SPF calculation interval to 1 second, the minimum interval to 10 milliseconds, and the incremental interval to 10 milliseconds.
Purpose: Shorten the SPF calculation interval to accelerate route convergence.

Leaf 1 and Leaf 2: lsa-generation-interval 1 10 10
Description: Set the maximum interval for LSA generation to 1 second, the minimum interval to 10 milliseconds, and the incremental interval to 10 milliseconds.
Purpose: Enable quicker LSA regeneration upon network topology changes to accelerate route convergence.

Leaf 1 and Leaf 2: area 0.0.0.0
Description: Create OSPF area 0.

Leaf 1 and Leaf 2: fast-reroute lfa
Description: Enable OSPF FRR and use the LFA algorithm for calculation of the backup next hop.
Purpose: This feature minimizes service interruption by fast rerouting traffic to the backup path when a link or node fails.

Leaf 1 and Leaf 2: quit
Description: Return to system view.

Leaf 1 and Leaf 2: interface LoopBack0
Description: Create interface Loopback 0 and enter its view.

Leaf 1: ip address 10.182.224.111 255.255.255.255
Leaf 2: ip address 10.182.224.246 255.255.255.255
Description: Assign an IP address to the interface.
Purpose: VTEP IP address for establishing BGP EVPN peering.

Leaf 1 and Leaf 2: ospf 1 area 0.0.0.0
Description: Enable OSPF on the interface.

Leaf 1 and Leaf 2: quit
Description: Return to system view.

Leaf 1 and Leaf 2: interface LoopBack1
Description: Create interface Loopback 1 and enter its view.

Leaf 1 and Leaf 2: ip address 10.182.226.111 255.255.255.255
Description: Assign an IP address to the interface.
Purpose: Virtual VTEP address for the DR system to establish VXLAN tunnels to remote devices.

Leaf 1 and Leaf 2: ospf 1 area 0.0.0.0
Description: Enable OSPF on the interface.

Leaf 1 and Leaf 2: quit
Description: Return to system view.

Leaf 1 and Leaf 2: vlan 1999
Description: Create the VLAN whose VLAN interface is used for establishing Layer 3 connectivity between the peer DR member devices.

Leaf 1 and Leaf 2: interface Vlan-interface1999
Description: Create VLAN-interface 1999 and enter its view.
Purpose: Specify the IP addresses for establishing Layer 3 connectivity between the peer DR member devices. When the uplink on one DR member device fails, the uplink traffic that arrives on that member device can traverse the established Layer 3 connectivity to the other DR member device and go outside.

Leaf 1: ip address 192.168.220.1 255.255.255.252
Leaf 2: ip address 192.168.220.2 255.255.255.252
Description: Assign an IP address to the interface.

Leaf 1 and Leaf 2: ospf network-type broadcast
Description: Set the OSPF network type of the interface to broadcast.

Leaf 1 and Leaf 2: ospf 1 area 0.0.0.0
Description: Enable OSPF on the interface.

Leaf 1 and Leaf 2: quit
Description: Return to system view.
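Assembled from the table above, the Leaf 1 side of this step looks like the following. This is a sketch and follows the command order given in the table; verify it against your software version:

```
ospf 1 router-id 10.182.224.111
 spf-schedule-interval 1 10 10
 lsa-generation-interval 1 10 10
 area 0.0.0.0
  fast-reroute lfa
 quit
interface LoopBack0
 ip address 10.182.224.111 255.255.255.255
 ospf 1 area 0.0.0.0
 quit
interface LoopBack1
 ip address 10.182.226.111 255.255.255.255
 ospf 1 area 0.0.0.0
 quit
vlan 1999
 quit
interface Vlan-interface1999
 ip address 192.168.220.1 255.255.255.252
 ospf network-type broadcast
 ospf 1 area 0.0.0.0
 quit
```

On Leaf 2, use router ID and Loopback 0 address 10.182.224.246 and VLAN-interface 1999 address 192.168.220.2.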
Configuring the links towards the spine tier
Leaf 1: interface Ten-GigabitEthernet1/0/17
Leaf 2: interface Ten-GigabitEthernet2/0/17
Description: Configure the interface connected to Spine 2.

Leaf 1 and Leaf 2: port link-mode route
Description: Configure the interface to operate in route mode as a Layer 3 interface.

Leaf 1 and Leaf 2: ip address unnumbered interface LoopBack0
Description: Configure the interface to borrow the IP address of Loopback 0.

Leaf 1 and Leaf 2: ospf 1 area 0.0.0.0
Description: Enable OSPF on the interface.

Leaf 1 and Leaf 2: ospf network-type p2p
Description: Set the OSPF network type of the interface to P2P.

Leaf 1: interface HundredGigE1/0/53
Leaf 2: interface HundredGigE2/0/54
Description: Configure the interface connected to Spine 1.

Leaf 1 and Leaf 2: port link-mode route
Description: Configure the interface to operate in route mode as a Layer 3 interface.

Leaf 1 and Leaf 2: ip address unnumbered interface LoopBack0
Description: Configure the interface to borrow the IP address of Loopback 0.

Leaf 1 and Leaf 2: ospf 1 area 0.0.0.0
Description: Enable OSPF on the interface.

Leaf 1 and Leaf 2: ospf network-type p2p
Description: Set the OSPF network type of the interface to P2P.
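The uplink configuration for Leaf 1 can be entered as one block. This sketch is assembled from the table above:

```
interface Ten-GigabitEthernet1/0/17
 port link-mode route
 ip address unnumbered interface LoopBack0
 ospf 1 area 0.0.0.0
 ospf network-type p2p
interface HundredGigE1/0/53
 port link-mode route
 ip address unnumbered interface LoopBack0
 ospf 1 area 0.0.0.0
 ospf network-type p2p
```

On Leaf 2, the corresponding uplinks are Ten-GigabitEthernet2/0/17 and HundredGigE2/0/54 per the interface plan in the network configuration table.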
Configuring L2VPN
Leaf 1 and Leaf 2: l2vpn enable
Description: Enable L2VPN.

Leaf 1 and Leaf 2: vxlan tunnel mac-learning disable
Description: Disable remote MAC address learning for VXLANs.
Purpose: This setting avoids conflicts between automatically learned MAC address entries and MAC address entries advertised through BGP EVPN.

Leaf 1 and Leaf 2: vxlan tunnel arp-learning disable
Description: Disable remote ARP learning for VXLANs.
Purpose: This setting avoids conflicts between automatically learned ARP entries and ARP entries advertised through BGP EVPN.

Leaf 1 and Leaf 2: vxlan tunnel nd-learning disable
Description: Disable remote ND learning for VXLANs.
Purpose: This setting avoids conflicts between automatically learned ND entries and ND entries advertised through BGP EVPN.

Leaf 1 and Leaf 2: mac-address timer aging 3600
Description: Set the aging timer to 3600 seconds for dynamic MAC address entries.
Purpose: If the DR system has a large number of MAC address entries, increase the MAC aging timer value to ensure complete synchronization of MAC address entries when one of the DR member devices restarts.
Remarks: This setting must be consistent between the peer member devices in a DR system.

Leaf 1 and Leaf 2: mac-address mac-move fast-update
Description: Enable ARP fast update for MAC address moves.
Purpose: This setting helps accelerate VM migration across the network.
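The L2VPN settings are identical on both leaf devices and can be entered as the following block, a sketch assembled from the table above:

```
l2vpn enable
vxlan tunnel mac-learning disable
vxlan tunnel arp-learning disable
vxlan tunnel nd-learning disable
mac-address timer aging 3600
mac-address mac-move fast-update
```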
Configuring DRNI
Leaf 1 and Leaf 2: l2vpn drni peer-link ac-match-rule vxlan-mapping
Description: Enable the device to create frame match criteria based on VXLAN IDs for the dynamic ACs on the Ethernet aggregate IPL.

Leaf 1 and Leaf 2: evpn drni group 10.182.226.111
Description: Enable EVPN distributed relay and set the virtual VTEP address.
Purpose: The DR member devices (VTEPs) use the virtual VTEP address to establish tunnels with the remote VTEPs.
Remarks: You must specify the same virtual VTEP address on both VTEPs in the same DR system.

Leaf 1: evpn drni local 10.182.224.111 remote 10.182.224.246
Leaf 2: evpn drni local 10.182.224.246 remote 10.182.224.111
Description: Specify the IP addresses of the local and peer VTEPs in the EVPN distributed relay system.
Purpose: You must execute this command if a DR system uses an Ethernet aggregate link as the IPL and has ACs (called single-armed ACs) attached to only one of the member devices. It enables the VTEPs in the DR system to set the next hop of the routes for single-armed ACs to their local VTEP IP addresses when they advertise the routes. This mechanism ensures that the traffic destined for a single-armed AC is forwarded towards its attached VTEP instead of the other VTEP.
Remarks: The specified local and remote VTEP addresses must each belong to an interface on the local or peer VTEP in the DR system, respectively. Make sure the local VTEP address on one VTEP is the remote VTEP address on the other.

Leaf 1 and Leaf 2: evpn global-mac 00e0-fc00-580a
Description: Configure an EVPN global MAC address.
Remarks: You must specify the same EVPN global MAC address on the devices in the same DR system. Do not use a reserved MAC address as the EVPN global MAC address.

Leaf 1 and Leaf 2: drni system-mac 00e0-fc00-5800
Description: Set the MAC address of the DR system. Required.
Remarks: You must assign the same DR system MAC address to the member devices in a DR system.

Leaf 1: drni system-number 1
Leaf 2: drni system-number 2
Description: Set the DR system number. Required.
Remarks: You must assign different DR system numbers to the member devices in a DR system.

Leaf 1 and Leaf 2: drni system-priority 100
Description: (Optional.) Set the DR system priority.
Remarks: You must set the same DR system priority on the member devices in a DR system.

Leaf 1 and Leaf 2: drni standalone enable
Description: Enable DRNI standalone mode.

Leaf 1: interface Ten-GigabitEthernet1/0/47
Leaf 2: interface Ten-GigabitEthernet2/0/47
Description: Enter the interface view for the keepalive link. Required.

Leaf 1 and Leaf 2: port link-mode route
Description: Configure the interface for keepalive detection to operate in route mode as a Layer 3 interface. Required.

Leaf 1: ip address 1.0.0.1 24
Leaf 2: ip address 1.0.0.2 24
Description: Assign an IP address to the interface as planned. Required.

Leaf 1 and Leaf 2: quit
Description: Return to system view.

Leaf 1: drni keepalive ip destination 1.0.0.2 source 1.0.0.1
Leaf 2: drni keepalive ip destination 1.0.0.1 source 1.0.0.2
Description: Configure the source and destination IP addresses of keepalive packets. Required.
Remarks: For correct keepalive detection, you must exclude the interfaces that own the IP addresses used for keepalive detection from the shutdown action.

Leaf 1 and Leaf 2: drni mad default-action none
Description: Set the DRNI MAD action to none. When the DR system splits, DRNI MAD will not shut down any network interfaces, except the interfaces configured manually or by the system to be shut down on the secondary device.

Leaf 1: drni mad include interface HundredGigE1/0/53
Leaf 2: drni mad include interface HundredGigE2/0/54
Description: Configure DRNI MAD to shut down the interface upon a DR system split if the device is the secondary DR member device.

Leaf 1: drni mad include interface Ten-GigabitEthernet1/0/17
Leaf 2: drni mad include interface Ten-GigabitEthernet2/0/17
Description: Configure DRNI MAD to shut down the interface upon a DR system split if the device is the secondary DR member device.

Leaf 1 and Leaf 2: drni restore-delay 200
Description: Set the data restoration interval.
Purpose: This command specifies the maximum amount of time for the secondary DR member device to synchronize data with the primary DR member device during DR system setup. Within the data restoration interval, the secondary DR member device sets all network interfaces to DRNI MAD DOWN state except those excluded from the MAD shutdown action.
Remarks: To avoid packet loss and forwarding failure, increase the data restoration interval if the amount of data is large, for example, when the device has a large number of routes and interfaces.

Leaf 1 and Leaf 2: interface Bridge-Aggregation11
Description: Create the Layer 2 aggregate interface to be used as the IPP and enter its interface view.
Purpose: Configure the Layer 2 aggregate interfaces that act as the IPPs at the two ends of the IPL.

Leaf 1 and Leaf 2: port link-type trunk
Description: Set the link type of the interface to trunk.

Leaf 1 and Leaf 2: port trunk permit vlan all
Description: Configure the interface to permit all VLANs to pass through.

Leaf 1 and Leaf 2: quit
Description: Return to system view.

Leaf 1: interface HundredGigE1/0/49
Leaf 2: interface HundredGigE2/0/49
Description: Enter the interface view for the port to be used as a member port of the IPP.
Purpose: Configure the member ports of the Layer 2 aggregate interfaces that act as the IPPs at the two ends of the IPL.

Leaf 1 and Leaf 2: port link-type trunk
Description: Set the link type of the port to trunk.

Leaf 1 and Leaf 2: port trunk permit vlan all
Description: Configure the port to permit all VLANs to pass through.

Leaf 1 and Leaf 2: port link-aggregation group 11
Description: Assign the port to the link aggregation group for the IPP (aggregation group 11).

Leaf 1: interface HundredGigE1/0/50
Leaf 2: interface HundredGigE2/0/50
Description: Enter the interface view for the port to be used as a member port of the IPP.
Purpose: Configure the member ports of the Layer 2 aggregate interfaces that act as the IPPs at the two ends of the IPL.

Leaf 1 and Leaf 2: port link-type trunk
Description: Set the link type of the port to trunk.

Leaf 1 and Leaf 2: port trunk permit vlan all
Description: Configure the port to permit all VLANs to pass through.

Leaf 1 and Leaf 2: port link-aggregation group 11
Description: Assign the port to the link aggregation group for the IPP (aggregation group 11).

Leaf 1 and Leaf 2: interface Bridge-Aggregation11
Description: Enter the interface view for the IPP (Bridge-Aggregation 11).
Purpose: Specify the aggregate interface (Bridge-Aggregation 11) as the IPP.

Leaf 1 and Leaf 2: link-aggregation mode dynamic
Description: Configure the aggregate interface to operate in dynamic mode.

Leaf 1 and Leaf 2: port drni intra-portal-port 1
Description: Specify the aggregate interface as the IPP.

Leaf 1 and Leaf 2: undo mac-address static source-check enable
Description: Disable the static source check feature on the interface.
Purpose: This command ensures that the DR member devices can correctly forward the Layer 3 traffic received from each other over the IPL.
Remarks: You must disable static source check on the IPPs of all leaf nodes and their uplink ports connected to the spine tier.

Leaf 1 and Leaf 2: quit
Description: Return to system view.
NOTE:
If a DR system uses an Ethernet aggregate link as the IPL, each DR member device creates a dynamic AC on the IPL when an AC (Ethernet service instance) is configured on a site-facing interface. The dynamic AC and the site-facing AC have the same frame match criteria and VSI mapping. If two site-facing ACs on different interfaces have the same frame match criteria but different VSI mappings, the dynamic ACs created for the site-facing ACs will conflict with each other. To prevent this issue, use the l2vpn drni peer-link ac-match-rule vxlan-mapping command to enable the DR member devices to create frame match criteria based on VXLAN IDs for the dynamic ACs on the IPL.
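Assembled from the table above, the DRNI configuration for Leaf 1 can be entered as the following block. This is a sketch that follows the command order given in the table; on Leaf 2, mirror the local/remote VTEP addresses, use system number 2, and substitute the Leaf 2 interface numbers:

```
l2vpn drni peer-link ac-match-rule vxlan-mapping
evpn drni group 10.182.226.111
evpn drni local 10.182.224.111 remote 10.182.224.246
evpn global-mac 00e0-fc00-580a
drni system-mac 00e0-fc00-5800
drni system-number 1
drni system-priority 100
drni standalone enable
interface Ten-GigabitEthernet1/0/47
 port link-mode route
 ip address 1.0.0.1 24
 quit
drni keepalive ip destination 1.0.0.2 source 1.0.0.1
drni mad default-action none
drni mad include interface HundredGigE1/0/53
drni mad include interface Ten-GigabitEthernet1/0/17
drni restore-delay 200
interface Bridge-Aggregation11
 port link-type trunk
 port trunk permit vlan all
 quit
interface HundredGigE1/0/49
 port link-type trunk
 port trunk permit vlan all
 port link-aggregation group 11
interface HundredGigE1/0/50
 port link-type trunk
 port trunk permit vlan all
 port link-aggregation group 11
interface Bridge-Aggregation11
 link-aggregation mode dynamic
 port drni intra-portal-port 1
 undo mac-address static source-check enable
 quit
```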
Configuring the links towards the bare metal servers
Leaf 1 and Leaf 2: interface Bridge-Aggregation1
Description: Create an aggregate interface to be used as a DR interface.
Purpose: Configure the DR interfaces connected to the bare metal servers.

Leaf 1 and Leaf 2: port link-type trunk
Description: Set the link type of the interface to trunk.

Leaf 1 and Leaf 2: port trunk permit vlan all
Description: Configure the interface to permit all VLANs to pass through.

Leaf 1 and Leaf 2: link-aggregation mode dynamic
Description: Configure the aggregate interface to operate in dynamic mode.

Leaf 1 and Leaf 2: port drni group 1
Description: Assign the aggregate interface to a DR group.

Leaf 1: interface Ten-GigabitEthernet1/0/7
Leaf 2: interface Ten-GigabitEthernet2/0/7
Description: Enter the view of a member physical interface of the DR interface.

Leaf 1 and Leaf 2: port link-type trunk
Description: Set the link type of the interface to trunk.

Leaf 1 and Leaf 2: port link-aggregation group 1
Description: Add the physical interface to the link aggregation group for the DR interface.

Leaf 1 and Leaf 2: quit
Description: Return to system view.
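The server-facing configuration for Leaf 1 can be entered as the following block, a sketch assembled from the table above. The port trunk permit vlan all line on the DR interface is inferred from the table's remark about permitting all VLANs:

```
interface Bridge-Aggregation1
 port link-type trunk
 port trunk permit vlan all
 link-aggregation mode dynamic
 port drni group 1
interface Ten-GigabitEthernet1/0/7
 port link-type trunk
 port link-aggregation group 1
 quit
```

On Leaf 2, the member physical interface is Ten-GigabitEthernet2/0/7.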
Configuring the spanning tree feature
Leaf 1 and Leaf 2: stp global enable
Description: Enable the spanning tree feature globally.

Leaf 1 and Leaf 2: interface Bridge-Aggregation1
Description: Enter the view of the DR interface connected to the bare metal servers.

Leaf 1 and Leaf 2: stp edged-port
Purpose: Configure the DR interface as an edge port to exclude the port from spanning tree calculation for rapid state transition.

IMPORTANT:
• Make sure the DR member devices are consistent in global, IPP-specific, and DRNI-interface-specific spanning tree settings. Inconsistent spanning tree settings might cause network flapping.
• IPPs in the DR system do not participate in spanning tree calculation.
• After the DR system splits, the DR member devices still use the DR system MAC address to send BPDUs, resulting in incorrect spanning tree calculation. To avoid this issue, enable DRNI standalone mode on the DR member devices.
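The spanning tree settings are identical on both leaf devices and can be entered as the following block, a sketch assembled from the table above:

```
stp global enable
interface Bridge-Aggregation1
 stp edged-port
```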