Aruba R9F19A Configuration Guide

Category: Network switches
Type: Configuration Guide
Contents
DRNI network planning ·················································································· 1
Comparison between IRF and DRNI ················································································································· 1
Overlay network planning··································································································································· 2
Underlay network planning································································································································· 5
Restrictions and guidelines for DR system setup······························································································· 7
Restrictions and guidelines ································································································································ 9
DRNI network models ·················································································· 13
Layer 2 DRNI network models ························································································································· 13
Loop prevention on a DR system ············································································································· 13
DRNI and spanning tree ··························································································································· 13
DRNI and VSI-based loop detection ········································································································ 15
Layer 3 DRNI network models ························································································································· 17
Gateway deployment schemes ················································································································ 17
Dual-active VLAN interfaces ···················································································································· 17
Routing neighbor relationship setup on dual-active VLAN interfaces using DRNI virtual IP addresses ·· 22
VRRP gateways ······································································································································· 23
Restrictions and guidelines for single-homed servers attached to non-DR interfaces ····························· 25
Restrictions and guidelines for routing configuration ··············································································· 26
DRNI and RDMA ·············································································································································· 26
Network model ········································································································································· 26
Restrictions and guidelines ······················································································································ 28
DRNI and EVPN··············································································································································· 28
Distributed gateway deployment ·············································································································· 29
Centralized gateway deployment ············································································································· 30
Failover between data centers ················································································································· 31
Basic configuration restrictions and guidelines ························································································ 31
Restrictions and guidelines for IPL ACs ··································································································· 33
MAC address configuration restrictions and guidelines ··········································································· 33
Leaf device configuration restrictions and guidelines ··············································································· 34
Border and ED configuration restrictions and guidelines ········································································· 34
Restrictions and guidelines for server access in active/standby mode ···················································· 34
Routing neighbor relationship setup on a DR system formed by distributed EVPN gateways ························ 34
DRNI, EVPN, and DHCP relay ························································································································ 35
About this deployment scheme ················································································································ 35
Restrictions and guidelines ······················································································································ 37
EVPN distributed relay, microsegmentation, and service chain······································································· 37
Network model ········································································································································· 37
DRNI and underlay multicast ··························································································································· 39
DRNI and MVXLAN·········································································································································· 39
DRNI and DCI ·················································································································································· 41
Management network design ··························································································································· 42
High availability for DRNI ············································································· 44
High availability of uplinks ································································································································ 44
High availability of leaf devices ························································································································ 44
High availability of border devices···················································································································· 47
Recommended hardware and software versions ········································· 50
DRNI network planning
Comparison between IRF and DRNI
The Intelligent Resilient Framework (IRF) technology, developed by HPE, virtualizes multiple
physical devices at the same layer into one virtual fabric to provide data center class availability and
scalability. IRF combines the processing power of the member devices and provides unified
management and uninterrupted maintenance for them.
Distributed Resilient Network Interconnect (DRNI) virtualizes two physical devices into one system
through multi-chassis link aggregation for device redundancy and traffic load sharing.
Table 1 shows the differences between IRF and DRNI. For high availability and short service
interruption during software upgrade, use DRNI. You cannot use IRF and DRNI in conjunction on
the same device.
Table 1 Comparison between IRF and DRNI

Control plane
• IRF: The IRF member devices have a unified control plane for central management. The IRF member devices synchronize all forwarding entries.
• DRNI: The control plane of the DR member devices is separate. The DR member devices synchronize entries such as MAC, ARP, and ND entries.

Device requirements
• IRF: Hardware: The chips of the IRF member devices must have the same architecture, and typically the IRF member devices are from the same series. Software: The IRF member devices must run the same software version.
• DRNI: Hardware: The DR member devices can be different models. Software: Some device models can run different software versions when they act as DR member devices. Full support for different software versions will be implemented in the future.

Software upgrade
• IRF: The IRF member devices are upgraded simultaneously or separately. A separate upgrade is complex. Services are interrupted for 30 seconds or longer during a device-by-device upgrade without ISSU, and for about 2 seconds during an ISSU upgrade.
• DRNI: The DR member devices are upgraded separately, and the service interruption time is shorter than 1 second during an upgrade. If the software supports graceful insertion and removal (GIR), an upgrade does not interrupt services.

Management
• IRF: The IRF member devices are configured and managed in a unified manner. Single points of failure might occur when a controller manages the IRF member devices.
• DRNI: The DR member devices are configured separately. They can perform configuration consistency check to remove configuration inconsistencies that affect operation of the DR system, but you must ensure that service features also have consistent configuration. The DR member devices are managed separately, and no single point of failure will occur when a controller manages them.
NOTE:
GIR enables you to gracefully isolate a device from the network for device maintenance or upgrade.
GIR minimizes service interruption by instructing the affected protocols (for example, routing
protocols) to isolate the device and switch over to the redundant path. You do not need to configure
graceful switchover protocol by protocol. For more information about GIR, see Fundamentals
Configuration Guide for the devices.
Overlay network planning
HPE offers the following overlay network models for DRNI:
• Three-tiered overlay: The overlay network is formed by the leaf, spine, and border tiers, as shown in Figure 1. Use this model if the border devices do not have enough downlink interfaces to connect to all leaf devices. The spine devices act as route reflectors (RRs) in the network.
• Two-tiered overlay: The overlay network is formed by the leaf and spine tiers, and the spine devices are also border devices, as shown in Figure 2.
Figure 1 Three-tiered overlay network model
[Figure: Leaf 1 and Leaf 2 DR systems attached to virtualization and bare metal servers; Spine 1 and Spine 2 acting as BGP RRs; border devices/EDs connected over ECMP paths to an FW, an LB, PE/Core 1 and PE/Core 2, the Internet, and border devices/EDs in a remote DC; each DR system has an IPL and a keepalive link]
Figure 2 Two-tiered overlay network model
[Figure: Leaf 1 and Leaf 2 DR systems attached to virtualization and bare metal servers; collocated border/spine devices connected over ECMP paths to an FW, an LB, PE/Core 1 and PE/Core 2, and the Internet; EDs connected to a remote DC; each DR system has an IPL and a keepalive link]
The overlay network contains the following device roles:
• Border device: A border gateway with DRNI configured. The border devices are attached to firewalls and load balancers by using DR interfaces. The border devices use Layer 3 Ethernet interfaces to connect to the spine or leaf devices, and traffic is load shared among the Layer 3 Ethernet links based on ECMP routes. On a border device, you can use a Layer 3 Ethernet interface, VLAN interface, or DR interface to establish Layer 3 connections with a PE or core device. As a best practice, use Layer 3 Ethernet interfaces.
• Edge device (ED): A device providing Layer 2 and Layer 3 connectivity to another data center by using VXLAN. You can deploy independent EDs, configure EDs to be collocated with border devices, or configure a device to act as an ED, border, and spine device.
• Spine device: An RR that does not have DRNI configuration and reflects BGP routes between the border and leaf tiers in the three-tiered network model. An RR performs only underlay forwarding, and ECMP routes are used for traffic load sharing among the spine, border, and leaf tiers. In a small network, spine devices can be collocated with border devices.
• Leaf device: A DRNI-configured gateway for the servers. If the server NICs operate in bond4 mode for load sharing, a leaf device is connected to the servers by using DR interfaces. If the server NICs operate in bond1 mode for link backup, a leaf device is connected to the servers
by using physical interfaces assigned to the same VLAN as the servers. As a best practice to
reduce active/standby NIC switchovers upon link flapping, disable active link preemption or set
a preemption delay.
For high availability, make sure the servers are dualhomed to the leaf devices.
A leaf device is connected to upstream devices by using Layer 3 Ethernet interfaces, and
ECMP routes are configured for high availability and load sharing.
• Firewall (FW): An internal firewall attached to the DR interfaces on the border devices by using two aggregate interfaces, one for the uplink and one for the downlink. Static routes are configured to enable Layer 3 communication between the firewall and border devices.
• Load balancer (LB): A load balancer attached to the DR interfaces on the border devices by using an aggregate interface. Static routes are configured to enable Layer 3 communication between the load balancer and border devices.
Underlay network planning
HPE offers the following underlay network models:
• DRNI at the spine and leaf tiers: If the network is large, set up DR systems at the spine and leaf tiers, and configure the spine devices as gateways for the servers. For configuration examples, see Multi-tier DRNI+Spine Gateways+ECMP Paths to External Network Configuration Example.
• DRNI at the leaf tier: If the network is small, set up DR systems at the leaf tier, and configure the leaf devices as gateways for the servers. Configure ECMP routes between the leaf and spine tiers.
Figure 3 DRNI at the spine and leaf tiers
[Figure: a spine DR system acting as the gateway and Leaf 1 and Leaf 2 DR systems attached to servers; the spine devices connect over ECMP paths to PE/Core 1 and PE/Core 2 and the Internet; each DR system has an IPL and a keepalive link]
Figure 4 DRNI at the leaf tier
Restrictions and guidelines for DR system setup
IPL
In addition to protocol packets, the IPL also transmits data packets between the DR member
devices when an uplink fails.
If a DR member device is a modular device, assign at least one port on each slot to the aggregation
group for the IPP as a best practice. This configuration prevents asynchronous service module
reboots from causing IPL flapping after a device reboot. As a best practice, make sure at least one
member port resides on a different slot than the uplink interfaces.
If a DR member device is a fixed-port device with interface expansion modules, assign ports from
multiple interface expansion modules to the aggregation group for the IPP. As a best practice, make
sure at least one member port resides on a different interface expansion module than the uplink
interfaces.
If a DR member device is a fixed-port device, assign at least two physical interfaces to the
aggregation group for the IPP.
Make sure the member ports in the aggregation group for the IPP have the same speed.
If a leaf-tier DR system is attached to a large number of servers whose NICs operate in
active/standby mode, take the size of the traffic sent among those servers into account when you
determine the bandwidth of the IPL.
As a best practice to reduce the impact of interface flapping on upper-layer services, use the
link-delay command to configure the same link delay settings on the IPPs. Do not set the link
delay to 0.
To prevent data synchronization failure, you must set the same maximum jumbo frame length on
the IPPs of the DR member devices by using the jumboframe enable command.
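The IPP guidance above can be sketched as follows on one DR member device. This is a minimal illustration only: the slot and interface numbers are hypothetical, and the exact link-delay syntax depends on the software version.

```
# IPP aggregation group with member ports on two different slots
interface Bridge-Aggregation 100
 link-aggregation mode dynamic
 port drni intra-portal-port 1
 # Same link delay setting on the IPPs of both DR member devices; do not use 0
 link-delay 2
 # Same maximum jumbo frame length on the IPPs of both DR member devices
 jumboframe enable 9416
quit
interface range HundredGigE 1/0/1 HundredGigE 2/0/1
 port link-aggregation group 100
```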
Keepalive link
The DR member devices exchange keepalive packets over the keepalive link to detect multi-active
collisions when the IPL is down.
As a best practice, establish a dedicated direct link between two DR member devices as a
keepalive link. Do not use the keepalive link for any other purposes. Make sure the DR member
devices have Layer 2 and Layer 3 connectivity to each other over the keepalive link.
You can use management Ethernet interfaces, Layer 3 Ethernet interfaces, Layer 3 aggregate
interfaces, or interfaces with a VPN instance bound to set up the keepalive link. As a best practice,
do not use VLAN interfaces for keepalive link setup. If you have to use VLAN interfaces, remove the
IPPs from the related VLANs to avoid loops.
If a device has multiple management Ethernet interfaces, you can select one from them to set up a
dedicated keepalive link independent of the management network.
On a modular device or fixed-port device with interface expansion modules, do not use the same
module to provide interfaces for setting up the keepalive link and IPL.
For correct keepalive detection, you must exclude the physical and logical interfaces used for
keepalive detection from the shutdown action by DRNI MAD.
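A sketch of a dedicated keepalive link over a management Ethernet interface follows; the interface name and IP addresses are examples only.

```
# Keepalive link on a management Ethernet interface
interface M-GigabitEthernet 0/0/0
 ip address 192.168.1.1 24
quit
drni keepalive ip destination 192.168.1.2 source 192.168.1.1
# Exclude the keepalive interface from the shutdown action by DRNI MAD
drni mad exclude interface M-GigabitEthernet 0/0/0
```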
DR interface
DR interfaces in the same DR group must use different LACP system MAC addresses.
As a best practice, use the undo lacp period command to enable the long LACP timeout timer
(90 seconds) on a DR system.
You must execute the lacp edge-port command on the DR interfaces attached to bare metal
servers.
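A minimal sketch of these DR interface settings, with hypothetical interface and group numbers:

```
interface Bridge-Aggregation 10
 port drni group 1
 # Long LACP timeout timer (90 seconds)
 undo lacp period
 # Required when the DR interface is attached to a bare metal server
 lacp edge-port
```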
DRNI MAD
Follow these restrictions and guidelines when you exclude interfaces from the shutdown action by
DRNI MAD on the underlay network:
• By default, DRNI MAD shuts down network interfaces after a DR system splits.
• You must exclude the VLAN interfaces of the VLANs to which the DR interfaces and IPPs belong.
• For correct keepalive detection, you must exclude the interfaces used for keepalive detection.
• Do not exclude the uplink Layer 3 interfaces, VLAN interfaces, or physical interfaces.
When you use EVPN in conjunction with DRNI, follow these restrictions and guidelines:
• Set the default DRNI MAD action to NONE by using the drni mad default-action none command.
• Do not configure the DRNI MAD action on the VLAN interfaces of the VLANs to which the DR interfaces and IPPs belong. These interfaces will not be shut down by DRNI MAD. Use the drni mad include interface command to include the non-DR interfaces attached to single-homed servers in the shutdown action by DRNI MAD. These interfaces will be shut down by DRNI MAD when the DR system splits.
• Do not configure the DRNI MAD action on aggregation member ports. These interfaces will be shut down by DRNI MAD after a DR system splits.
• If you use an Ethernet aggregate link as an IPL, add the uplink Layer 3 interfaces, VLAN interfaces, and physical interfaces to the list of included interfaces by using the drni mad include interface command. These interfaces will be shut down by DRNI MAD. This restriction does not apply to a VXLAN tunnel IPL.
• Do not configure the DRNI MAD action on the interfaces used by EVPN, including the VSI interfaces, interfaces that provide BGP peer addresses, and interfaces used for setting up the keepalive link. These interfaces will not be shut down by DRNI MAD.
• Do not configure the DRNI MAD action on the interface that provides the IP address specified by using the evpn drni group command. These interfaces will not be shut down by DRNI MAD.
When you configure DRNI MAD, use either of the following methods:
• To shut down all network interfaces on the secondary DR member device except a few special-purpose interfaces that must be retained in up state:
  - Set the default DRNI MAD action to DRNI MAD DOWN by using the drni mad default-action down command.
  - Exclude interfaces from being shut down by DRNI MAD by using the drni mad exclude interface command.
  In some scenarios, you must retain a large number of logical interfaces (for example, VSI interfaces, VLAN interfaces, aggregate interfaces, tunnel interfaces, and loopback interfaces) in up state. To simplify configuration, you can exclude all logical interfaces from the shutdown action by DRNI MAD by using the drni mad exclude logical-interfaces command.
• To have the secondary DR member device retain a large number of interfaces in up state and shut down the remaining interfaces:
  - Set the default DRNI MAD action to NONE by using the drni mad default-action none command.
  - Specify network interfaces that must be shut down by DRNI MAD by using the drni mad include interface command.
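The two methods are mutually exclusive alternatives. A sketch of each follows, with hypothetical interface names:

```
# Method 1: shut down everything except excluded interfaces
drni mad default-action down
drni mad exclude interface Vlan-interface 100
drni mad exclude logical-interfaces

# Method 2: keep everything up except included interfaces
# (use instead of Method 1, not together with it)
drni mad default-action none
drni mad include interface Twenty-FiveGigE 1/0/10
```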
If you configure inter-VPN static routes without a next hop in ADDC 6.2 or a later solution, you must
perform the following tasks for the static routes to take effect:
1. Create a service loopback group, and then assign an interface to it.
2. Access the DR system editing page and exclude that interface from the shutdown action by
DRNI MAD.
Restrictions and guidelines
DRNI compatibility with third-party devices
You cannot use DR interfaces for communicating with third-party devices.
DR system configuration
You can assign two member devices to a DR system. For the DR member devices to be identified
as one DR system, you must configure the same DR system MAC address and DR system priority
on them. You must assign different DR system numbers to the DR member devices.
Make sure each DR system uses a unique DR system MAC address.
To ensure correct forwarding, delete DRNI configuration from a DR member device if it leaves its
DR system.
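A minimal sketch of these identity settings, using hypothetical values; the first DR member device is shown, and the second would use the same MAC address and priority with a different system number.

```
# Identical on both DR member devices
drni system-mac 0001-0001-0001
drni system-priority 100
# Unique per DR member device (use 2 on the peer)
drni system-number 1
```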
When you bulk shut down physical interfaces on a DR member device for service changes or
hardware replacement, shut down the physical interfaces used for keepalive detection prior to the
physical member ports of the IPP. If you fail to do so, link flapping will occur on the member ports of
DR interfaces.
Do not execute the drni drcp period short command to enable the short DRCP timeout timer
when the DRNI process is restarting or before you perform an ISSU. If you do so, traffic forwarding
will be interrupted during the DRNI process restart or ISSU.
DRNI standalone mode
The DR member devices might both operate with the primary role to forward traffic if they have DR
interfaces in up state after the DR system splits. DRNI standalone mode helps avoid traffic
forwarding issues in this multi-active situation by allowing only the member ports in the DR
interfaces on one member device to forward traffic.
The following information describes the operating mechanism of this feature.
The DR member devices change to DRNI standalone mode when they detect that both the IPL and
the keepalive link are down. In addition, the secondary DR member device changes its role to
primary.
In DRNI standalone mode, the LACPDUs sent out of a DR interface by each DR member device
contain the interface-specific LACP system MAC address and LACP system priority.
The Selected state of the member ports in the DR interfaces in a DR group depends on their LACP
system MAC address and LACP system priority. If a DR interface has a lower LACP system priority
value or LACP system MAC address, the member ports in that DR interface become Selected to
forward traffic. If those Selected ports fail, the member ports in the DR interface on the other DR
member device become Selected to forward traffic.
To configure the DR system priority, use the drni system-priority command in system view.
To configure the LACP system MAC address and LACP system priority, use one of the following methods:
• Execute the lacp system-mac and lacp system-priority commands in system view.
• Execute the port lacp system-mac and port lacp system-priority commands in DR interface view.
The DR interface-specific configuration takes precedence over the global configuration.
When you configure the DR system priority and LACP system priority, follow these guidelines:
• For a single tier of DR systems at the leaf layer, set the DR system priority value to be larger than the LACP system priority values for DR interfaces. The smaller the value, the higher the priority. For a DR group, configure different LACP system priority values for the member DR interfaces.
• For two tiers of DR systems at the spine and leaf layers, configure the DR system priority settings of the spine devices to be the same as the LACP system priority settings of the leaf devices. This ensures traffic is forwarded along the correct path when a DR system splits.
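A sketch of the leaf-layer guideline, with hypothetical values; remember that a smaller value means a higher priority.

```
# System view: DR system priority value larger (lower priority)
# than the LACP system priority values used for DR interfaces
drni system-priority 200
lacp system-mac 0002-0002-0002
lacp system-priority 100
# DR interface view: interface-specific values take precedence;
# use a different LACP system priority on each member DR interface
# in a DR group
interface Bridge-Aggregation 10
 port lacp system-priority 90
```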
IPP configuration
To ensure correct Layer 3 forwarding over the IPL, you must execute the undo mac-address
static source-check enable command to disable static source check on the Layer 2
aggregate interface assigned the IPP role. This restriction does not apply to the HPE FlexFabric
12900E switches.
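A sketch of this restriction, assuming Bridge-Aggregation 100 is the Layer 2 aggregate interface assigned the IPP role:

```
interface Bridge-Aggregation 100
 # Required for correct Layer 3 forwarding over the IPL
 # (not needed on the HPE FlexFabric 12900E switches)
 undo mac-address static source-check enable
```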
DRNI data restoration interval
The data restoration interval set by using the drni restore-delay command specifies the
maximum amount of time for the secondary DR member device to synchronize forwarding entries
with the primary DR member device during DR system setup. Adjust the data restoration interval
based on the size of forwarding tables. If the DR member devices have small forwarding tables,
reduce this interval. If the forwarding tables are large, increase this interval. Typically, set the data
restoration interval to 300 seconds. If the ARP table of an HPE FlexFabric 12900E switch contains
about 48K entries, set this interval to 900 seconds.
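For example, in system view, the typical setting looks as follows; the values come straight from the guidance above.

```
# Typical forwarding-table sizes
drni restore-delay 300
# For an HPE FlexFabric 12900E switch whose ARP table holds
# about 48K entries, use the following instead:
# drni restore-delay 900
```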
IRF
The HPE FlexFabric 12900E Switch Series (Type K) does not support IRF.
DRNI is not supported by an IRF member device, even when the device is the only member in an
IRF fabric. Before you configure DRNI on a device, verify that it is operating in standalone mode.
MDC
Only the HPE FlexFabric 12900E Switch Series (Type X) support MDC.
You cannot use DRNI on MDCs.
GIR
Before you change a DR member device back to normal mode, execute the display drni mad
verbose command to verify that no network interfaces are in DRNI MAD DOWN state.
MAC address table
If the DR system has a large number of MAC address entries, set the MAC aging timer to a higher
value than 20 minutes as a best practice. To set the MAC aging timer, use the mac-address
timer aging command.
The MAC address learning feature is not configurable on the IPP. Do not execute the
mac-address mac-learning enable or undo mac-address mac-learning enable
command on the IPP.
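For example, to raise the MAC aging timer above the 20-minute guidance, in system view:

```
# Aging time is in seconds; 1800 seconds = 30 minutes
mac-address timer aging 1800
```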
ARP
If a DR interface provides Layer 3 services (for example, a VLAN interface is configured for the
VLAN that contains the DR interface), do not configure the following features on the DR interface:
• ARP active acknowledgement, configurable with the arp active-ack enable command.
• Dynamic ARP learning limit, configurable with the arp max-learning-number command.
This restriction ensures that the DR member devices can learn consistent ARP entries.
Link aggregation
Do not configure automatic link aggregation on a DR system.
The aggregate interfaces in an S-MLAG group cannot be used as DR interfaces or IPPs.
You cannot configure link aggregation management subnets on a DR system.
When you configure a DR interface, follow these restrictions and guidelines:
• The link-aggregation selected-port maximum and link-aggregation selected-port minimum commands do not take effect on a DR interface.
• If you execute the display link-aggregation verbose command for a DR interface, the displayed system ID contains the DR system MAC address and the DR system priority.
• If the reference port is a member port of a DR interface, the display link-aggregation verbose command displays the reference port on both DR member devices.
Port isolation
Do not assign DR interfaces and IPPs to the same port isolation group.
CFD
Do not use the MAC address of a remote MEP for CFD tests on IPPs. These tests cannot work on
IPPs.
Smart Link
The DR member devices in a DR system must have the same Smart Link configuration.
For Smart Link to operate correctly on a DR interface, do not assign the DR interface and non-DR
interfaces to the same smart link group.
Do not assign an IPP to a smart link group.
You can use Smart Link on a DR system formed by the following device models:
• HPE FlexFabric 5944 switches.
• HPE FlexFabric 5945 switches.
• HPE FlexFabric 12900E Switch Series.
Mirroring
If you use port mirroring together with DRNI, do not assign the source port, destination port, egress
port, and reflector port for a mirroring group to two aggregation groups. If the source port is in a
different aggregation group than the other ports, mirrored LACPDUs will be transmitted between
aggregation groups and cause aggregate interface flapping.
MAC address synchronization
Two DR member devices synchronize underlay MAC address entries over the IPL and overlay MAC
address entries through BGP EVPN.
Only the MAC address entries learned by hardware age out. Synchronized MAC address entries do
not age out. If a hardware-learned MAC address entry ages out on one DR member device, the
device requests the other DR member device to delete that MAC address entry.
DRNI network models
Layer 2 DRNI network models
Loop prevention on a DR system
For a DR system on an underlay network, configure spanning tree to remove loops. For a DR
system on an overlay network, configure VSI-based loop detection to remove loops.
DRNI and spanning tree
Network model
You can use DRNI in conjunction with spanning tree to remove loops, as shown in Figure 5 and
Table 2.
Figure 5 Network diagram
[Figure: two tiers of DR systems (DR 1 and DR 2 upstream, DR 3 and DR 4 downstream) interconnected by an inter-DR system aggregate link; a server attached over a multichassis aggregate link through a spanning tree edge port; IPLs and keepalive links in each DR system; devices not participating in spanning tree calculation]

Table 2 Deployment schemes

Scenario: Due to a DR system split, misconnection, or misconfiguration, traffic is sent between two member ports of the same aggregation group over the IPL, which creates a loop.
Solution: Enable spanning tree on the DR member devices. If the leaf and spine devices are interconnected by using VLAN interfaces in an EVPN distributed relay network, assign the spine-facing interfaces on leaf devices to different VLANs. In addition, disable spanning tree on those physical interfaces to remove loops and prevent the upstream device from falsely blocking interfaces.
Commands: stp global enable (system view); undo stp enable (Layer 2 Ethernet interface view)

Scenario: A new device added to the network preempts the root bridge role, and network flapping occurs as a result.
Solution: Configure the DR member devices in the upstream DR system as root bridges and enable root guard on them.
Commands: stp root primary (system view); stp root-protection (DR interface view)

Scenario: The DR member devices are attacked by using TC-BPDUs and flush MAC address entries frequently, which causes network flapping, high CPU usage, and transient floods.
Solution: Enable the TC-BPDU guard feature on the DR member devices.
Commands: stp tc-protection (system view)

Scenario: On a DR member device, an interface cannot recognize BPDUs after its physical state changes.
Solution: Configure the interface as an edge port if its peer port does not support or run spanning tree protocols.
Commands: stp edged-port (DR interface view)

Scenario: Network flapping occurs after a DR member device receives forged BPDUs on interfaces whose counterparts do not send BPDUs.
Solution: Enable BPDU guard on the DR member device. When interfaces with BPDU guard enabled receive configuration BPDUs, the device shuts down these interfaces and notifies the NMS that they have been shut down by the spanning tree protocol. The device reactivates the interfaces when the port status detection timer expires.
Commands: stp bpdu-protection (system view)
Restrictions and guidelines
Make sure the DR member devices in a DR system have the same spanning tree configuration.
Violation of this rule might cause network flapping. The configuration includes:
• Global spanning tree configuration.
• Spanning tree configuration on the IPP.
• Spanning tree configuration on DR interfaces.
IPPs of the DR system do not participate in spanning tree calculation.
The DR member devices still use the DR system MAC address after the DR system splits, which
will cause spanning tree calculation issues. To avoid the issues, enable DRNI standalone mode on
the DR member devices before the DR system splits.
Spanning tree configurations made in system view take effect globally. Spanning tree configurations
made in Layer 2 Ethernet interface view take effect only on the interface. Spanning tree
configurations made in Layer 2 aggregate interface view take effect only on the aggregate interface.
Spanning tree configurations made on an aggregation member port can take effect only after the
port is removed from its aggregation group.
After you enable a spanning tree protocol on a Layer 2 aggregate interface, the system performs
spanning tree calculation on the Layer 2 aggregate interface. It does not perform spanning tree
calculation on the aggregation member ports. The spanning tree protocol state and forwarding state
of each selected member port are consistent with those of the corresponding Layer 2 aggregate
interface. The member ports of an aggregation group do not participate in spanning tree calculation.
However, the ports still reserve their spanning tree configurations for participating in spanning tree
calculation after leaving the aggregation group.
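The aggregate-interface behavior described above can be illustrated with a minimal sketch; the interface numbers are illustrative:

```
interface bridge-aggregation 10
 stp enable                            # spanning tree calculation runs on the aggregate
                                       # interface; selected member ports inherit its
                                       # protocol state and forwarding state
#
interface ten-gigabitethernet 1/0/1   # aggregation member port (illustrative)
                                       # spanning tree settings made here are reserved and
                                       # take effect only after the port leaves the group
```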
DRNI and VSI-based loop detection
Mechanisms
As shown in Figure 6, if an endpoint is dualhomed to the DR system, you must enable loop
detection on both VTEPs in the DR system. Loop detection works as follows on the VTEPs:
1. The VTEPs send loop detection frames out of the ACs configured on the DR interfaces facing
the endpoint. The loop detection frames contain the same source MAC address, VLAN tag,
loop detection interval, and loop detection priority. The source MAC address is the DR system
MAC address.
2. When receiving loop detection frames on a local DR interface, a VTEP sends the loop
detection frames to the peer VTEP over the IPL. This synchronization mechanism ensures that
a VTEP can receive loop detection frames in case of link or interface failure.
3. If a VTEP receives a self-sent loop detection frame from an AC, the VTEP compares the loop detection priority of the AC with that in the frame and acts as follows:
   - If the loop detection priority in the frame is higher, the VTEP performs the loop protection action on all ACs configured for the DR group that accommodates the looped DR interface.
   - If the loop detection priority of the AC is higher, the system only records the loop information.
If an endpoint is singlehomed to one VTEP in the DR system, enable loop detection only on that
VTEP. Loop detection works as follows on the VTEP:
1. The VTEP sends loop detection frames out of the ACs configured on the DR interface facing the endpoint. The source MAC address is the DR system MAC address.
2. If the VTEP receives a self-sent loop detection frame from an AC, the VTEP compares the loop detection priority of the AC with that in the frame and acts as follows:
   - If the loop detection priority in the frame is higher, the VTEP performs the loop protection action on the looped AC.
   - If the loop detection priority of the AC is higher, the system only records the loop information.
Figure 6 Loop detection in a VXLAN network with DRNI configured
Compatibility of data center switches with VSI-based loop detection
Hardware: HPE FlexFabric 12900E Switch Series (Type K)
Software: R5210 and later
Reference: See VXLAN loop detection in Layer 2—LAN Switching Configuration Guide in HPE FlexFabric 12900E Switch Series Configuration Guides-R52xx.

Hardware: HPE FlexFabric 12900E Switch Series (Type X)
Software: R7624P08 and later
Reference: See VXLAN loop detection in Layer 2—LAN Switching Configuration Guide in HPE FlexFabric 12900E Switch Series Configuration Guides-R762X.

Hardware: HPE FlexFabric 5944 & 5945 Switch Series
Software: R6710 and later
Reference: See VXLAN loop detection in Layer 2—LAN Switching Configuration Guide in HPE FlexFabric 5944 & 5945 Configuration Guides-Release 671x.

Hardware: HPE FlexFabric 5940 Switch Series
Software: R6710 and later
Reference: See VXLAN loop detection in Layer 2—LAN Switching Configuration Guide in HPE FlexFabric 5940 Configuration Guides-Release 671x.

Hardware: HPE FlexFabric 5710 Switch Series
Software: R6710 and later
Reference: See VXLAN loop detection in Layer 2—LAN Switching Configuration Guide in HPE FlexFabric 5710 Configuration Guides-Release 671x.
Restrictions and guidelines
If you enable loop detection in an EVPN VXLAN DRNI dualhoming environment, configure the
same loop detection parameters for the VTEPs in the DR system.
Layer 3 DRNI network models
Gateway deployment schemes
Table 3 shows the schemes to configure gateways on a DR system for attached servers.
Table 3 Gateway deployment schemes for DRNI
Gateway type: VLAN interface (recommended)
Description: A VLAN interface is configured on each DR member device, and both DR member devices can respond to ARP packets and perform Layer 3 forwarding.
Attached servers require Layer 3 connectivity to the DR system in some scenarios (for example, when containers are deployed on the servers). To fulfill this requirement, perform one of the following tasks:
- Configure static routes.
- Assign a virtual IPv4 or IPv6 address to each gateway VLAN interface by using the port drni virtual-ip or port drni ipv6 virtual-ip command.

Gateway type: VRRP group
Description: Both the VRRP master and backup devices perform Layer 3 forwarding, but only the master device responds to ARP packets.
In a VRRP dual-active scenario, a gateway locally forwards a packet at Layer 3 if the packet is destined for the VRRP virtual MAC address, the real MAC address of the local device, or the real MAC address of the DR peer. The DR member devices synchronize the real MAC addresses of the gateways with each other.
The server-side devices can set up dynamic routing neighbor relationships with the DR member devices.
For more information about support for dual-active VLAN interfaces, see the applicable product
matrix in DRNI+IPv4 and IPv6 Dual-Active VLAN Gateway Configuration Example.
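For the virtual IP option in Table 3, a sketch on one gateway VLAN interface might look as follows. The addresses are hypothetical, and the exact syntax of the port drni virtual-ip commands may vary by software release, so check your command reference:

```
interface vlan-interface 100
 ip address 100.1.1.100 24              # gateway address shared by both DR members
 port drni virtual-ip 100.1.1.1 24     # per-device virtual IPv4 address (hypothetical)
 port drni ipv6 virtual-ip 100::1 96   # per-device virtual IPv6 address (hypothetical)
```

Assign a different virtual address on each DR member device so that attached servers can reach each member individually at Layer 3.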
Dual-active VLAN interfaces
About dual-active VLAN interfaces
Configure VLAN interfaces as gateways on both DR member devices, as shown in Figure 7 and
Table 4.
For more information about configuring dual-active VLAN interfaces, see DRNI+IPv4 and IPv6
Dual-Active VLAN Gateway Configuration Example.
Figure 7 Network diagram
Table 4 Configuration tasks
Tasks:
1. VLAN interface configuration:
   a. Create a gateway VLAN interface on each DR member device for the same VLAN.
   b. Assign the same IP address and MAC address to the gateway VLAN interfaces.
   c. Create a VLAN interface on each DR member device for another VLAN, assign the IPPs to this VLAN, and assign a unique IP address from the same subnet to each of the VLAN interfaces. The DR member devices use those VLAN interfaces to forward traffic between them when a link to the upstream device fails.
2. Use Layer 3 interfaces to connect the DR member devices to the upstream device, and configure ECMP routes for load sharing across the uplinks.
3. Configure static routes for reaching the attached servers if the servers accommodate containers and the DRNI virtual IP address feature is not supported.

Forwarding:
- For Layer 2 traffic sent by the servers, the DR member devices look up the MAC address table and forward the traffic locally.
- For Layer 3 traffic sent by the servers, the DR member devices perform Layer 3 forwarding based on the FIB table.
- For external traffic destined for the servers, the DR member devices perform forwarding based on the routing table.
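Under the addressing shown in Figure 7, the VLAN interface tasks above might look as follows. The interface numbers and addresses are taken from the diagram; verify MAC address assignment support on VLAN interfaces in your release:

```
# Device A (DR member 1)
interface vlan-interface 100       # gateway VLAN interface, identical on both devices
 ip address 100.1.1.100 24
 mac-address 0000-0010-0010        # same MAC address on both devices
interface vlan-interface 101       # VLAN for traffic between the DR members
 ip address 101.1.1.1 24           # unique per device, same subnet

# Device B (DR member 2)
interface vlan-interface 100
 ip address 100.1.1.100 24
 mac-address 0000-0010-0010
interface vlan-interface 101
 ip address 101.1.1.2 24
```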
Network models for deploying dual-active VLAN interfaces on DR systems at multiple layers
with core devices as gateways
As shown in Figure 8, DR systems are set up at three layers to avoid single points of failure:
Device A and Device B form a DR system at the access layer. Device C and Device D form a
DR system at the distribution layer. Device E and Device F form a DR system at the core layer.
The server is dualhomed to the access DR system. The VM is dualhomed to the core DR
system via Device G.
Dual-active VLAN interfaces are configured on the core DR system to offer gateway and
routing services.
Spanning tree is configured on Device A through Device F, and Device E and Device F are
configured as root bridges.
Figure 8 Three-layer model
As shown in Figure 9, DR systems are set up at the access and distribution layers to avoid single
points of failure.
DRNI deployment does not differ greatly between a three-layer network and a two-layer network.
For more configuration details, see Multi-Layer DRNI+STP+Dual-Active VLAN Gateway
Configuration Examples.