HPE FlexFabric Switches DRNI Configuration Guide

Category
Networking
Type
Configuration Guide
Document version: 6W100-20231114
© Copyright 2023 Hewlett Packard Enterprise Development LP
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard
Enterprise products and services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett
Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or
copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s
standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise
website.
Acknowledgments
Intel®, Itanium®, Pentium®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in the
United States and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java and Oracle are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
Contents
DRNI network planning ·················································································· 1
Comparison between IRF and DRNI ················································································································· 1
Overlay network planning··································································································································· 2
Underlay network planning································································································································· 5
Restrictions and guidelines for DR system setup······························································································· 7
Restrictions and guidelines ································································································································ 9
DRNI network models ·················································································· 13
Layer 2 DRNI network models ························································································································· 13
Loop prevention on a DR system ············································································································· 13
DRNI and spanning tree ··························································································································· 13
DRNI and VSI-based loop detection ········································································································ 15
Layer 3 DRNI network models ························································································································· 17
Gateway deployment schemes ················································································································ 17
Dual-active VLAN interfaces ···················································································································· 17
Routing neighbor relationship setup on dual-active VLAN interfaces using DRNI virtual IP addresses ·· 22
VRRP gateways ······································································································································· 23
Restrictions and guidelines for single-homed servers attached to non-DR interfaces ····························· 25
Restrictions and guidelines for routing configuration ··············································································· 26
DRNI and RDMA ·············································································································································· 26
Network model ········································································································································· 26
Restrictions and guidelines ······················································································································ 28
DRNI and EVPN··············································································································································· 28
Distributed gateway deployment ·············································································································· 29
Centralized gateway deployment ············································································································· 30
Failover between data centers ················································································································· 31
Basic configuration restrictions and guidelines ························································································ 31
Restrictions and guidelines for IPL ACs ··································································································· 33
MAC address configuration restrictions and guidelines ··········································································· 33
Leaf device configuration restrictions and guidelines ··············································································· 34
Border and ED configuration restrictions and guidelines ········································································· 34
Restrictions and guidelines for server access in active/standby mode ···················································· 34
Routing neighbor relationship setup on a DR system formed by distributed EVPN gateways ························ 34
DRNI, EVPN, and DHCP relay ························································································································ 35
About this deployment scheme ················································································································ 35
Restrictions and guidelines ······················································································································ 37
EVPN distributed relay, microsegmentation, and service chain······································································· 37
Network model ········································································································································· 37
DRNI and underlay multicast ··························································································································· 39
DRNI and MVXLAN·········································································································································· 39
DRNI and DCI ·················································································································································· 41
Management network design ··························································································································· 42
High availability for DRNI ············································································· 44
High availability of uplinks ································································································································ 44
High availability of leaf devices ························································································································ 44
High availability of border devices···················································································································· 47
Recommended hardware and software versions ········································· 50
DRNI network planning
Comparison between IRF and DRNI
The Intelligent Resilient Framework (IRF) technology is developed by HPE to virtualize multiple
physical devices at the same layer into one virtual fabric to provide data center class availability and
scalability. IRF virtualization technology offers processing power, interaction, unified management,
and uninterrupted maintenance of multiple devices.
Distributed Resilient Network Interconnect (DRNI) virtualizes two physical devices into one system
through multi-chassis link aggregation for device redundancy and traffic load sharing.
Table 1 shows the differences between IRF and DRNI. For high availability and short service
interruption during software upgrade, use DRNI. You cannot use IRF and DRNI in conjunction on
the same device.
Table 1 Comparison between IRF and DRNI

Control plane
• IRF: The IRF member devices have a unified control plane for central management. The IRF member devices synchronize all forwarding entries.
• DRNI: The control plane of the DR member devices is separate. The DR member devices synchronize entries such as MAC, ARP, and ND entries.

Device requirements
• IRF: Hardware: The chips of the IRF member devices must have the same architecture, and typically the IRF member devices are from the same series. Software: The IRF member devices must run the same software version.
• DRNI: Hardware: The DR member devices can be different models. Software: Some device models can run different software versions when they act as DR member devices. Full support for different software versions will be implemented in the future.

Software upgrade
• IRF: The IRF member devices are upgraded simultaneously or separately. A separate upgrade is complex. Services are interrupted for 30 seconds or longer during a device-by-device upgrade without ISSU. Services are interrupted for about 2 seconds during an ISSU upgrade.
• DRNI: The DR member devices are upgraded separately, and the service interruption time is shorter than 1 second during an upgrade. If the software supports graceful insertion and removal (GIR), an upgrade does not interrupt services.

Management
• IRF: The IRF member devices are configured and managed in a unified manner. Single points of failure might occur when a controller manages the IRF member devices.
• DRNI: The DR member devices are configured separately, and they can perform configuration consistency check for you to remove inconsistencies in the configuration that affect operation of the DR system. You must ensure that service features also have consistent configuration. The DR member devices are managed separately. No single point of failure will occur when a controller manages the DR member devices.
NOTE:
GIR enables you to gracefully isolate a device from the network for device maintenance or upgrade. GIR minimizes service interruption by instructing the affected protocols (for example, routing protocols) to isolate the device and switch over to the redundant path. You do not need to configure graceful switchover protocol by protocol. For more information about GIR, see Fundamentals Configuration Guide for the devices.
Overlay network planning
HPE offers the following overlay network models for DRNI:
• Three-tiered overlay—The overlay network is formed by the leaf, spine, and border tiers, as shown in Figure 1. Use this model if the border devices do not have enough downlink interfaces to connect to all leaf devices. The spine devices act as route reflectors (RRs) in the network.
• Two-tiered overlay—The overlay network is formed by the leaf and spine tiers, and the spine devices are also border devices, as shown in Figure 2.
Figure 1 Three-tiered overlay network model
Figure 2 Two-tiered overlay network model
The overlay network contains the following device roles:
• Border device—A border gateway with DRNI configured. The border devices are attached to firewalls and load balancers by using DR interfaces. The border devices use Layer 3 Ethernet interfaces to connect to the spine or leaf devices, and traffic is load shared among the Layer 3 Ethernet links based on ECMP routes.
On a border device, you can use a Layer 3 Ethernet interface, VLAN interface, or DR interface to establish Layer 3 connections with a PE or core device. As a best practice, use Layer 3 Ethernet interfaces.
• Edge device (ED)—A device providing Layer 2 and Layer 3 connectivity to another data center by using VXLAN. You can deploy independent EDs, configure EDs to be collocated with border devices, or configure a device to act as an ED, border, and spine device.
• Spine device—An RR that does not have DRNI configuration and reflects BGP routes between the border and leaf tiers in the three-tiered network model. An RR performs only underlay forwarding, and ECMP routes are used for traffic load sharing among the spine, border, and leaf tiers.
In a small network, spine devices can be collocated with border devices.
• Leaf device—A DRNI-configured gateway for the servers. If the server NICs operate in bond4 mode for load sharing, a leaf device is connected to the servers by using DR interfaces. If the server NICs operate in bond1 mode for link backup, a leaf device is connected to the servers
by using physical interfaces assigned to the same VLAN as the servers. As a best practice to
reduce active/standby NIC switchovers upon link flapping, disable active link preemption or set
a preemption delay.
For high availability, make sure the servers are dual-homed to the leaf devices.
A leaf device is connected to upstream devices by using Layer 3 Ethernet interfaces, and
ECMP routes are configured for high availability and load sharing.
• Firewall (FW)—An internal firewall attached to the DR interfaces on the border devices by using two aggregate interfaces, one for the uplink and one for the downlink. Static routes are configured to enable Layer 3 communication between the firewall and border devices.
• Load balancer (LB)—A load balancer attached to the DR interfaces on the border devices by using an aggregate interface. Static routes are configured to enable Layer 3 communication between the load balancer and border devices.
Underlay network planning
HPE offers the following underlay network models:
• DRNI at the spine and leaf tiers—If the network is large, set up DR systems at the spine and leaf tiers, and configure the spine devices as gateways for the servers. For configuration examples, see Multi-tier DRNI+Spine Gateways+ECMP Paths to External Network Configuration Example.
• DRNI at the leaf tier—If the network is small, set up DR systems at the leaf tier, and configure the leaf devices as gateways for the servers. Configure ECMP routes between the leaf and spine tiers.
Figure 3 DRNI at the spine and leaf tiers
Figure 4 DRNI at the leaf tier
Restrictions and guidelines for DR system setup
IPL
In addition to protocol packets, the IPL also transmits data packets between the DR member
devices when an uplink fails.
If a DR member device is a modular device, assign at least one port on each slot to the aggregation
group for the IPP as a best practice. This configuration prevents asynchronous service module
reboots from causing IPL flapping after a device reboot. As a best practice, make sure at least one
member port resides on a different slot than the uplink interfaces.
If a DR member device is a fixed-port device with interface expansion modules, assign ports from
multiple interface expansion modules to the aggregation group for the IPP. As a best practice, make
sure at least one member port resides on a different interface expansion module than the uplink
interfaces.
If a DR member device is a fixed-port device, assign at least two physical interfaces to the
aggregation group for the IPP.
Make sure the member ports in the aggregation group for the IPP have the same speed.
If a leaf-tier DR system is attached to a large number of servers whose NICs operate in
active/standby mode, take the size of the traffic sent among those servers into account when you
determine the bandwidth of the IPL.
As a best practice to reduce the impact of interface flapping on upper-layer services, use the
link-delay command to configure the same link delay settings on the IPPs. Do not set the link
delay to 0.
To prevent data synchronization failure, you must set the same maximum jumbo frame length on
the IPPs of the DR member devices by using the jumboframe enable command.
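Taken together, the guidelines above might translate into a Comware-style IPP configuration sketch like the following. The interface names, aggregation group number, link delay, and jumbo frame size are examples only, and the views in which link-delay and jumboframe enable are available can vary by platform; verify against your device's command reference.

```
# On each DR member device: build the aggregation group for the IPP
# with member ports on two different slots or expansion modules.
system-view
interface hundredgige 1/0/25
 port link-aggregation group 100
 quit
interface hundredgige 2/0/25
 port link-aggregation group 100
 quit
interface bridge-aggregation 100
 link-aggregation mode dynamic
# Assign the IPP role to the aggregate interface.
 port drni intra-portal-port 1
# Nonzero link delay, identical on both IPPs.
 link-delay 1
# Same maximum jumbo frame length on both IPPs.
 jumboframe enable 9216
```

Configure the same link delay and jumbo frame settings on the IPP of the other DR member device.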
Keepalive link
The DR member devices exchange keepalive packets over the keepalive link to detect multi-active
collisions when the IPL is down.
As a best practice, establish a dedicated direct link between two DR member devices as a
keepalive link. Do not use the keepalive link for any other purposes. Make sure the DR member
devices have Layer 2 and Layer 3 connectivity to each other over the keepalive link.
You can use management Ethernet interfaces, Layer 3 Ethernet interfaces, Layer 3 aggregate
interfaces, or interfaces with a VPN instance bound to set up the keepalive link. As a best practice,
do not use VLAN interfaces for keepalive link setup. If you have to use VLAN interfaces, remove the
IPPs from the related VLANs to avoid loops.
If a device has multiple management Ethernet interfaces, you can select one from them to set up a
dedicated keepalive link independent of the management network.
On a modular device or fixed-port device with interface expansion modules, do not use the same
module to provide interfaces for setting up the keepalive link and IPL.
For correct keepalive detection, you must exclude the physical and logical interfaces used for
keepalive detection from the shutdown action by DRNI MAD.
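A dedicated keepalive link over management Ethernet interfaces might be sketched as follows. The interface names and IP addresses are examples; adjust them to your management network plan.

```
# Assign an address to the dedicated keepalive interface.
system-view
interface m-gigabitethernet 0/0/1
 ip address 192.168.100.1 24
 quit
# Point keepalive detection at the peer DR member device.
drni keepalive ip destination 192.168.100.2 source 192.168.100.1
# Keep the keepalive interface out of the DRNI MAD shutdown action.
drni mad exclude interface m-gigabitethernet 0/0/1
```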
DR interface
DR interfaces in the same DR group must use different LACP system MAC addresses.
As a best practice, use the undo lacp period command to enable the long LACP timeout timer
(90 seconds) on a DR system.
You must execute the lacp edge-port command on the DR interfaces attached to bare metal
servers.
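For a server-facing DR interface, these guidelines might combine into a sketch like this (the DR group and interface numbers are examples):

```
system-view
interface bridge-aggregation 10
 link-aggregation mode dynamic
# Bind the aggregate interface to DR group 1.
 port drni group 1
# Enable the long (90-second) LACP timeout timer.
 undo lacp period
# Required when the DR interface is attached to a bare metal server.
 lacp edge-port
```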
DRNI MAD
Follow these restrictions and guidelines when you exclude interfaces from the shutdown action by DRNI MAD on the underlay network:
• By default, DRNI MAD shuts down network interfaces after a DR system splits.
• You must exclude the VLAN interfaces of the VLANs to which the DR interfaces and IPPs belong.
• For correct keepalive detection, you must exclude the interfaces used for keepalive detection.
• Do not exclude the uplink Layer 3 interfaces, VLAN interfaces, or physical interfaces.
When you use EVPN in conjunction with DRNI, follow these restrictions and guidelines:
• Set the default DRNI MAD action to NONE by using the drni mad default-action none command.
• Do not configure the DRNI MAD action on the VLAN interfaces of the VLANs to which the DR interfaces and IPPs belong. These interfaces will not be shut down by DRNI MAD. Use the drni mad include interface command to include the non-DR interfaces attached to single-homed servers in the shutdown action by DRNI MAD. These interfaces will be shut down by DRNI MAD when the DR system splits.
• Do not configure the DRNI MAD action on aggregation member ports. These interfaces will be shut down by DRNI MAD after a DR system splits.
• If you use an Ethernet aggregate link as an IPL, add the uplink Layer 3 interfaces, VLAN interfaces, and physical interfaces to the list of included interfaces by using the drni mad include interface command. These interfaces will be shut down by DRNI MAD. This restriction does not apply to a VXLAN tunnel IPL.
• Do not configure the DRNI MAD action on the interfaces used by EVPN, including the VSI interfaces, interfaces that provide BGP peer addresses, and interfaces used for setting up the keepalive link. These interfaces will not be shut down by DRNI MAD.
• Do not configure the DRNI MAD action on the interface that provides the IP address specified by using the evpn drni group command. These interfaces will not be shut down by DRNI MAD.
When you configure DRNI MAD, use either of the following methods:
• To shut down all network interfaces on the secondary DR member device except a few special-purpose interfaces that must be retained in up state:
  ◦ Set the default DRNI MAD action to DRNI MAD DOWN by using the drni mad default-action down command.
  ◦ Exclude interfaces from being shut down by DRNI MAD by using the drni mad exclude interface command.
  In some scenarios, you must retain a large number of logical interfaces (for example, VSI interfaces, VLAN interfaces, aggregate interfaces, tunnel interfaces, and loopback interfaces) in up state. To simplify configuration, you can exclude all logical interfaces from the shutdown action by DRNI MAD by using the drni mad exclude logical-interfaces command.
• To have the secondary DR member device retain a large number of interfaces in up state and shut down the remaining interfaces:
  ◦ Set the default DRNI MAD action to NONE by using the drni mad default-action none command.
  ◦ Specify network interfaces that must be shut down by DRNI MAD by using the drni mad include interface command.
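The two methods might be sketched as follows; use one method per DR system (the interface names are examples):

```
# Method 1: shut down everything except excluded interfaces.
system-view
drni mad default-action down
drni mad exclude interface vlan-interface 100
# Optionally retain all logical interfaces in up state.
drni mad exclude logical-interfaces

# Method 2: retain everything except included interfaces.
system-view
drni mad default-action none
drni mad include interface twenty-fivegige 1/0/10
```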
If you configure inter-VPN static routes without a next hop in ADDC 6.2 or a later solution, you must
perform the following tasks for the static routes to take effect:
1. Create a service loopback group, and then assign an interface to it.
2. Access the DR system editing page and exclude that interface from the shutdown action by
DRNI MAD.
Restrictions and guidelines
DRNI compatibility with third-party devices
You cannot use DR interfaces for communicating with third-party devices.
DR system configuration
You can assign two member devices to a DR system. For the DR member devices to be identified
as one DR system, you must configure the same DR system MAC address and DR system priority
on them. You must assign different DR system numbers to the DR member devices.
Make sure each DR system uses a unique DR system MAC address.
To ensure correct forwarding, delete DRNI configuration from a DR member device if it leaves its
DR system.
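For example, a two-device DR system might be identified as follows. The MAC address and priority values are examples; they must be identical on both devices, while the DR system numbers must differ.

```
# On DR member device 1:
system-view
drni system-mac 0001-0001-0001
drni system-number 1
drni system-priority 123

# On DR member device 2:
system-view
drni system-mac 0001-0001-0001
drni system-number 2
drni system-priority 123
```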
When you bulk shut down physical interfaces on a DR member device for service changes or
hardware replacement, shut down the physical interfaces used for keepalive detection prior to the
physical member ports of the IPP. If you fail to do so, link flapping will occur on the member ports of
DR interfaces.
Do not execute the drni drcp period short command to enable the short DRCP timeout timer
when the DRNI process is restarting or before you perform an ISSU. If you do so, traffic forwarding
will be interrupted during the DRNI process restart or ISSU.
DRNI standalone mode
The DR member devices might both operate with the primary role to forward traffic if they have DR
interfaces in up state after the DR system splits. DRNI standalone mode helps avoid traffic
forwarding issues in this multi-active situation by allowing only the member ports in the DR
interfaces on one member device to forward traffic.
The following information describes the operating mechanism of this feature.
The DR member devices change to DRNI standalone mode when they detect that both the IPL and
the keepalive link are down. In addition, the secondary DR member device changes its role to
primary.
In DRNI standalone mode, the LACPDUs sent out of a DR interface by each DR member device
contain the interface-specific LACP system MAC address and LACP system priority.
The Selected state of the member ports in the DR interfaces in a DR group depends on their LACP
system MAC address and LACP system priority. If a DR interface has a lower LACP system priority
value or LACP system MAC address, the member ports in that DR interface become Selected to
forward traffic. If those Selected ports fail, the member ports in the DR interface on the other DR
member device become Selected to forward traffic.
To configure the DR system priority, use the drni system-priority command in system view.
To configure the LACP system priority, use one of the following methods:
• Execute the lacp system-mac and lacp system-priority commands in system view.
• Execute the port lacp system-mac and port lacp system-priority commands in DR interface view.
The DR interface-specific configuration takes precedence over the global configuration.
When you configure the DR system priority and LACP system priority, follow these guidelines:
• For a single tier of DR systems at the leaf layer, set the DR system priority value to be larger than the LACP system priority values of the DR interfaces. The smaller the value, the higher the priority. For a DR group, configure different LACP system priority values for the member DR interfaces.
• For two tiers of DR systems at the spine and leaf layers, configure the DR system priority settings of the spine devices to be the same as the LACP system priority settings of the leaf devices. This ensures that traffic is forwarded along the correct path when a DR system splits.
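For a single leaf-tier DR system, the priority rules above might be sketched as follows. The priority values and MAC addresses are examples; remember that smaller values mean higher priority.

```
# On DR member device 1: DR system priority numerically larger than
# (that is, lower priority than) the per-interface LACP priorities.
system-view
drni system-priority 200
interface bridge-aggregation 10
 port lacp system-mac 0002-0002-0002
 port lacp system-priority 100

# On DR member device 2: a different LACP system priority and MAC
# address for the DR interface in the same DR group.
system-view
drni system-priority 200
interface bridge-aggregation 10
 port lacp system-mac 0003-0003-0003
 port lacp system-priority 110
```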
IPP configuration
To ensure correct Layer 3 forwarding over the IPL, you must execute the undo mac-address
static source-check enable command to disable static source check on the Layer 2
aggregate interface assigned the IPP role. This restriction does not apply to the HPE FlexFabric
12900E switches.
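On applicable models, this restriction might be applied as follows (the aggregation group number is an example):

```
system-view
interface bridge-aggregation 100
# Disable static source check on the IPP to ensure correct Layer 3
# forwarding over the IPL (not required on 12900E switches).
 undo mac-address static source-check enable
```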
DRNI data restoration interval
The data restoration interval set by using the drni restore-delay command specifies the
maximum amount of time for the secondary DR member device to synchronize forwarding entries
with the primary DR member device during DR system setup. Adjust the data restoration interval
based on the size of forwarding tables. If the DR member devices have small forwarding tables,
reduce this interval. If the forwarding tables are large, increase this interval. Typically, set the data
restoration interval to 300 seconds. If the ARP table of an HPE FlexFabric 12900E switch contains
about 48K entries, set this interval to 900 seconds.
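For example, to apply the typical value:

```
system-view
# 300 seconds is typical; use about 900 seconds for very large ARP
# tables (for example, about 48K entries on a 12900E switch).
drni restore-delay 300
```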
IRF
The HPE FlexFabric 12900E Switch Series (Type K) do not support IRF.
DRNI is not supported by an IRF member device, even when the device is the only member in an
IRF fabric. Before you configure DRNI on a device, verify that it is operating in standalone mode.
MDC
Only the HPE FlexFabric 12900E Switch Series (Type X) support MDC.
You cannot use DRNI on MDCs.
GIR
Before you change a DR member device back to normal mode, execute the display drni mad
verbose command to verify that no network interfaces are in DRNI MAD DOWN state.
MAC address table
If the DR system has a large number of MAC address entries, set the MAC aging timer to a higher
value than 20 minutes as a best practice. To set the MAC aging timer, use the mac-address
timer aging command.
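For example, to raise the aging timer to 30 minutes (the value is an example; size it to your MAC address table):

```
system-view
# 1800 seconds = 30 minutes, above the 20-minute threshold recommended
# for DR systems with large MAC address tables.
mac-address timer aging 1800
```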
The MAC address learning feature is not configurable on the IPP. Do not execute the
mac-address mac-learning enable or undo mac-address mac-learning enable
command on the IPP.
ARP
If a DR interface provides Layer 3 services (for example, a VLAN interface is configured for the VLAN that contains the DR interface), do not configure the following features on the DR interface:
• ARP active acknowledgement, configurable with the arp active-ack enable command.
• Dynamic ARP learning limit, configurable with the arp max-learning-number command.
This restriction ensures that the DR member devices can learn consistent ARP entries.
Link aggregation
Do not configure automatic link aggregation on a DR system.
The aggregate interfaces in an S-MLAG group cannot be used as DR interfaces or IPPs.
You cannot configure link aggregation management subnets on a DR system.
When you configure a DR interface, follow these restrictions and guidelines:
• The link-aggregation selected-port maximum and link-aggregation selected-port minimum commands do not take effect on a DR interface.
• If you execute the display link-aggregation verbose command for a DR interface, the displayed system ID contains the DR system MAC address and the DR system priority.
• If the reference port is a member port of a DR interface, the display link-aggregation verbose command displays the reference port on both DR member devices.
Port isolation
Do not assign DR interfaces and IPPs to the same port isolation group.
CFD
Do not use the MAC address of a remote MEP for CFD tests on IPPs. These tests cannot work on
IPPs.
Smart Link
The DR member devices in a DR system must have the same Smart Link configuration.
For Smart Link to operate correctly on a DR interface, do not assign the DR interface and non-DR
interfaces to the same smart link group.
Do not assign an IPP to a smart link group.
You can use Smart Link on a DR system formed by the following device models:
• HPE FlexFabric 5944 switches.
• HPE FlexFabric 5945 switches.
• HPE FlexFabric 12900E Switch Series.
Mirroring
If you use port mirroring together with DRNI, do not assign the source port, destination port, egress
port, and reflector port for a mirroring group to two aggregation groups. If the source port is in a
different aggregation group than the other ports, mirrored LACPDUs will be transmitted between
aggregation groups and cause aggregate interface flapping.
MAC address synchronization
Two DR member devices synchronize underlay MAC address entries over the IPL and overlay MAC
address entries through BGP EVPN.
Only the MAC address entries learned by hardware age out. Synchronized MAC address entries do
not age out. If a hardware-learned MAC address entry ages out on one DR member device, the
device requests the other DR member device to delete that MAC address entry.
DRNI network models
Layer 2 DRNI network models
Loop prevention on a DR system
For a DR system on an underlay network, configure spanning tree to remove loops. For a DR
system on an overlay network, configure VSI-based loop detection to remove loops.
DRNI and spanning tree
Network model
You can use DRNI in conjunction with spanning tree to remove loops, as shown in Figure 5 and
Table 2.
Figure 5 Network diagram

Table 2 Deployment schemes

Scenario: Due to a DR system split, misconnection, or misconfiguration, traffic is sent between two member ports of the same aggregation group over the IPL, which creates a loop.
Solution:
• Enable spanning tree on the DR member devices.
Commands: stp global enable (system view)
• In an EVPN distributed relay network where the leaf and spine devices are interconnected by using VLAN interfaces, assign the spine-facing interfaces on leaf devices to different VLANs. In addition, disable spanning tree on the physical interfaces to remove loops and prevent the upstream device from falsely blocking interfaces.
Commands: undo stp enable (Layer 2 Ethernet interface view)

Scenario: A new device added to the network preempts the root bridge role, and network flapping occurs as a result.
Solution: Configure the DR member devices in the upstream DR system as root bridges and enable root guard on them.
Commands: stp root primary (system view), stp root-protection (DR interface view)

Scenario: The DR member devices are attacked by using TC-BPDUs and flush MAC address entries frequently, which causes network flapping, high CPU usage, and transient floods.
Solution: Enable the TC-BPDU guard feature on the DR member devices.
Commands: stp tc-protection (system view)

Scenario: On a DR member device, an interface cannot recognize BPDUs after its physical state changes.
Solution: Configure an interface as an edge port if its peer port does not support or run spanning tree protocols.
Commands: stp edged-port (DR interface view)

Scenario: Network flapping occurs after a DR member device receives forged BPDUs on interfaces whose counterparts do not send BPDUs.
Solution: Enable BPDU guard on the DR member device. When interfaces with BPDU guard enabled receive configuration BPDUs, the device performs the following operations:
• Shuts down these interfaces.
• Notifies the NMS that these interfaces have been shut down by the spanning tree protocol.
The device reactivates the interfaces that have been shut down when the port status detection timer expires.
Commands: stp bpdu-protection (system view)
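Taken together, the commands in Table 2 can be sketched as a baseline spanning tree hardening configuration on one DR member device. This is a minimal sketch: the aggregate interface numbers are illustrative, and which per-interface commands apply depends on what each DR interface connects to (root guard on downstream-facing DR interfaces, edge port only where the peer does not run spanning tree).

```
<Sysname> system-view
[Sysname] stp global enable
[Sysname] stp root primary
[Sysname] stp tc-protection
[Sysname] stp bpdu-protection
[Sysname] interface bridge-aggregation 10
[Sysname-Bridge-Aggregation10] stp root-protection
[Sysname-Bridge-Aggregation10] quit
[Sysname] interface bridge-aggregation 20
[Sysname-Bridge-Aggregation20] stp edged-port
[Sysname-Bridge-Aggregation20] quit
```

Repeat the same configuration on the other DR member device, because inconsistent spanning tree configuration between DR member devices might cause network flapping.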
Restrictions and guidelines
Make sure the DR member devices in a DR system have the same spanning tree configuration.
Violation of this rule might cause network flapping. The configuration includes:
• Global spanning tree configuration.
• Spanning tree configuration on the IPP.
• Spanning tree configuration on DR interfaces.
IPPs of the DR system do not participate in spanning tree calculation.
The DR member devices still use the DR system MAC address after the DR system splits, which
will cause spanning tree calculation issues. To avoid the issues, enable DRNI standalone mode on
the DR member devices before the DR system splits.
Spanning tree configurations made in system view take effect globally. Spanning tree configurations
made in Layer 2 Ethernet interface view take effect only on the interface. Spanning tree
configurations made in Layer 2 aggregate interface view take effect only on the aggregate interface.
Spanning tree configurations made on an aggregation member port can take effect only after the
port is removed from its aggregation group.
After you enable a spanning tree protocol on a Layer 2 aggregate interface, the system performs
spanning tree calculation on the Layer 2 aggregate interface. It does not perform spanning tree
calculation on the aggregation member ports. The spanning tree protocol state and forwarding state
of each selected member port are consistent with those of the corresponding Layer 2 aggregate
interface. The member ports of an aggregation group do not participate in spanning tree calculation.
However, the ports still reserve their spanning tree configurations for participating in spanning tree
calculation after leaving the aggregation group.
DRNI and VSI-based loop detection
Mechanisms
As shown in Figure 6, if an endpoint is dualhomed to the DR system, you must enable loop
detection on both VTEPs in the DR system. Loop detection works as follows on the VTEPs:
1. The VTEPs send loop detection frames out of the ACs configured on the DR interfaces facing
the endpoint. The loop detection frames contain the same source MAC address, VLAN tag,
loop detection interval, and loop detection priority. The source MAC address is the DR system
MAC address.
2. When receiving loop detection frames on a local DR interface, a VTEP sends the loop
detection frames to the peer VTEP over the IPL. This synchronization mechanism ensures that
a VTEP can receive loop detection frames in case of link or interface failure.
3. If a VTEP receives a self-sent loop detection frame from an AC, the VTEP compares the loop detection priority of the AC with that in the frame and acts as follows:
 ◦ If the loop detection priority in the frame is higher, the VTEP performs the loop protection action on all ACs configured for the DR group that accommodates the looped DR interface.
 ◦ If the loop detection priority of the AC is higher, the system only records the loop information.
If an endpoint is singlehomed to one VTEP in the DR system, enable loop detection only on that VTEP. Loop detection works as follows on the VTEP:
1. The VTEP sends loop detection frames out of the ACs configured on the DR interface facing the endpoint. The source MAC address is the DR system MAC address.
2. If the VTEP receives a self-sent loop detection frame from an AC, the VTEP compares the loop detection priority of the AC with that in the frame and acts as follows:
 ◦ If the loop detection priority in the frame is higher, the VTEP performs the loop protection action on the looped AC.
 ◦ If the loop detection priority of the AC is higher, the system only records the loop information.
Figure 6 Loop detection in a VXLAN network with DRNI configured
Compatibility of data center switches with VSI-based loop detection
• HPE FlexFabric 12900E Switch Series (Type K): R5210 and later. See VXLAN loop detection in Layer 2—LAN Switching Configuration Guide in HPE FlexFabric 12900E Switch Series Configuration Guides-R52xx.
• HPE FlexFabric 12900E Switch Series (Type X): R7624P08 and later. See VXLAN loop detection in Layer 2—LAN Switching Configuration Guide in HPE FlexFabric 12900E Switch Series Configuration Guides-R762X.
• HPE FlexFabric 5944 & 5945 Switch Series: R6710 and later. See VXLAN loop detection in Layer 2—LAN Switching Configuration Guide in HPE FlexFabric 5944 & 5945 Configuration Guides-Release 671x.
• HPE FlexFabric 5940 Switch Series: R6710 and later. See VXLAN loop detection in Layer 2—LAN Switching Configuration Guide in HPE FlexFabric 5940 Configuration Guides-Release 671x.
• HPE FlexFabric 5710 Switch Series: R6710 and later. See VXLAN loop detection in Layer 2—LAN Switching Configuration Guide in HPE FlexFabric 5710 Configuration Guides-Release 671x.
Restrictions and guidelines
If you enable loop detection in an EVPN VXLAN DRNI dualhoming environment, configure the
same loop detection parameters for the VTEPs in the DR system.
Layer 3 DRNI network models
Gateway deployment schemes
Table 3 shows the schemes to configure gateways on a DR system for attached servers.
Table 3 Gateway deployment schemes for DRNI
Gateway type: VLAN interface (recommended)
• A VLAN interface is configured on each DR member device, and both DR member devices can respond to ARP packets and perform Layer 3 forwarding.
• In some scenarios (for example, when containers are deployed on the servers), attached servers require Layer 3 connectivity to the DR system. To fulfill this requirement, perform one of the following tasks:
 ◦ Configure static routes.
 ◦ Assign a virtual IPv4 or IPv6 address to each gateway VLAN interface by using the port drni virtual-ip or port drni ipv6 virtual-ip command.
Gateway type: VRRP group
• Both the VRRP master and backup devices perform Layer 3 forwarding, but only the master device responds to ARP packets. In a VRRP dual-active scenario, a gateway locally forwards a packet at Layer 3 if the packet is destined for the VRRP virtual MAC address, the real MAC address of the local device, or the real MAC address of the DR peer. The DR member devices synchronize the real MAC addresses of the gateways with each other.
• The server-side devices can set up dynamic routing neighbor relationships with the DR member devices.
For more information about support for dual-active VLAN interfaces, see the applicable product
matrix in DRNI+IPv4 and IPv6 Dual-Active VLAN Gateway Configuration Example.
Dual-active VLAN interfaces
About dual-active VLAN interfaces
Configure VLAN interfaces as gateways on both DR member devices, as shown in Figure 7 and
Table 4.
For more information about configuring dual-active VLAN interfaces, see DRNI+IPv4 and IPv6
Dual-Active VLAN Gateway Configuration Example.
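As a hedged sketch of the recommended scheme, a dual-active VLAN gateway on one DR member device might look like the following. The VLAN interface number, IP addresses, and the exact argument form of the port drni virtual-ip command are illustrative; see the command reference and the configuration example cited above for the exact syntax.

```
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] ip address 10.1.1.2 24
[Sysname-Vlan-interface100] port drni virtual-ip 10.1.1.1 24
```

The other DR member device would configure a different real IP address (for example, 10.1.1.3/24) on the same VLAN interface but the same virtual IP address, so that both members can respond to ARP packets and perform Layer 3 forwarding for the attached servers.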