IMC Orchestrator 6.3 Solution
Maintenance Guide

The information in this document is subject to change without notice.
© Copyright 2023 Hewlett Packard Enterprise Development LP
Contents

Troubleshooting Layer 2 forwarding failure in an EVPN overlay network
  Issue
  Troubleshooting prerequisites
  Troubleshooting flowchart and summary
  Solution
Troubleshooting Layer 3 forwarding failure on EVPN distributed gateways
  Issue
  Troubleshooting flowchart and summary
  Solution
Troubleshooting underlay switch automatic deployment failure in the EVPN distributed gateway scenario
  Issue
  Troubleshooting prerequisites
  Troubleshooting flowchart and summary
  Solution
Troubleshooting the failure of VMs to come online in an IPv4 network-based overlay environment
  Issue
  Troubleshooting prerequisites
  Troubleshooting flowchart and summary
  Solution
Troubleshooting the failure of VMs to come online in an IPv6 network-based overlay environment
  Issue
  Troubleshooting prerequisites
  Troubleshooting flowchart and summary
  Solution
Troubleshooting forwarding failure in an EVPN multicast network
  Issue
  Troubleshooting prerequisites
  Troubleshooting flowchart and summary
  Solution
Appendix A Collecting logs
  Collecting operation logs on IMC Orchestrator
  Collecting system logs on IMC Orchestrator
  Collecting running logs on IMC Orchestrator
  Collecting logs for the Neutron plug-in on the cloud platform
  Collecting switch/firewall logs
    Diagnostic information
    Logfile information
  Collecting license server logs
Troubleshooting Layer 2 forwarding
failure in an EVPN overlay network
Issue
EVPN overlay networking is typically used in IMC Orchestrator series solutions, which involve the IMC Orchestrator controller and hardware switches. Unlike overlay networking that uses the centralized control mode, EVPN networking uses extended BGP to build the overlay control plane.
The issue is that two vPorts cannot forward Layer 2 traffic (traffic in the same network segment) to each other.
Troubleshooting prerequisites
• The IMC Orchestrator controller can normally manage the leaf switches.
• The two vPorts for overlay Layer 2 forwarding have come online on the IMC Orchestrator controller.
• The VMs or the network adapters of the physical devices where the vPorts are located have learned the ARP entries for the addresses to be communicated with.
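For the last prerequisite, on a Linux-based VM you can quickly confirm that the peer's ARP entry has been learned. The interface name eth0 and the peer address 1.1.1.3 below are illustrative assumptions only:
[root@vm1 ~]# ip neigh show dev eth0      # list learned neighbor (ARP) entries on eth0
[root@vm1 ~]# ping -c 3 1.1.1.3           # trigger ARP learning if the entry is missing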
Troubleshooting flowchart and summary
Use the flowchart in Figure 1 to troubleshoot the issue.
Figure 1 Troubleshooting Layer 2 forwarding failure in an EVPN overlay network (flowchart; its checks and branches correspond to the numbered steps below)
To resolve this issue:
1. View the vPort list on the Web interface of the IMC Orchestrator controller to verify that the two
vPorts that cannot communicate come online normally.
• If either vPort does not come online, see "Troubleshooting the failure of VMs to come online in an IPv4 network-based overlay environment" to troubleshoot why the vPort has not come online.
• If both vPorts come online normally, go to step 2.
2. Verify that correct VXLAN-related configurations exist on the leaf switch.
• If the leaf switch does not have correct VXLAN or VSI configurations, check the configurations on the IMC Orchestrator controller or manually modify the configurations on the leaf switch.
• If correct VXLAN and VSI tunnel configurations exist but forwarding still fails, go to step 3.
3. Verify that the correct VXLAN tunnel configuration exists on the leaf switch.
• In an EVPN network, the tunnel configuration is automatically generated by the hardware switch based on the received EVPN BGP type-3 routes. If no tunnel-related configuration exists, go to step 5 to troubleshoot the establishment of EVPN BGP neighbors.
• If the tunnel-related configuration has been generated, go to step 4.
4. Verify that the source and destination VTEP IP addresses used to establish the VXLAN tunnel
between leaf switches can communicate with each other normally.
• If the VTEP IP addresses are unreachable, go to step 5 to check the establishment of EVPN BGP neighbors.
• If the VTEP IP addresses are reachable, go to step 6.
5. Check the EVPN BGP neighbor state between the leaf switch and the spine BGP route reflector.
If the neighbor state is abnormal, check the BGP configuration and the underlay links and
routes. If the neighbor state is normal, check the loopback interface configuration of the leaf
switch. After the VTEP IP addresses are reachable, go to step 6.
6. Check whether the forwarding mode of the leaf switch is local proxy ARP or ARP flood
suppression. You can confirm the mode from the CLI on the switch. If the mode is ARP flood suppression, go to step 7. If the mode is local proxy ARP, go to step 9.
7. Verify that the MAC address entry for the faulty VM is established on the leaf switch. In ARP
flood suppression mode, the leaf switch needs to query the MAC address table to forward Layer
2 traffic. If no matching MAC address entry exists, verify that the corresponding AC interface
configuration exists, and that the MAC address is not aged. After confirming that the MAC
address entry exists, go to step 8.
8. Verify that the ARP suppression entry of the faulty VM is established on the leaf switch. When
the leaf switch replies to the ARP request on behalf of the VM, it queries the ARP suppression
table. If no matching entry exists, verify that the ARP suppression configuration and the AC
interface configuration exist on the switch and the IMC Orchestrator controller. After the ARP
suppression entry exists, go to step 11.
9. Verify that the ARP entry for the faulty VM is established on the leaf switch. When the leaf
switch replies to the ARP request on behalf of the VM, it queries the ARP table. If no matching
entry exists, verify that the AC interface configuration exists and that the ARP entry is not aged
on the leaf switch. After confirming that the ARP entry exists, go to step 10.
10. Verify that the host routing entry of the faulty VM is established on the leaf switch. In the local
proxy ARP mode, the leaf switch needs to query the host routing table to forward Layer 2 traffic.
If no matching table entry exists, verify that the VSI, VPN instance, L3VNI and other related
configurations on the switch are correct, and that a correct tunnel is mapped to the VSI. After
confirming that the host routing entry of the VM exists, go to step 11.
11. On the IMC Orchestrator controller, check security policy configuration for the two vPorts that
need to communicate. If the vPorts are configured with a security policy, make sure the security
policy permits the source and destination addresses of the vPorts, or remove the security policy
configuration. If the vPorts are not configured with a security policy, or after confirming that the
security policy permits the vPorts' traffic, go to step 12.
12. Examine other devices (network adapters of servers or network devices) that the traffic passes
through along the forwarding path. Locate where the traffic is lost, and sort through the possible
causes of packet loss on the intermediate links.
13. If the issue persists, contact Technical Support for help.
Solution
Verifying that the states of the vPorts that cannot communicate are normal
1. View the vPort list on the Web interface of the IMC Orchestrator controller to verify that the two
vPorts that cannot communicate come online normally.
• If either vPort does not come online, see "Troubleshooting the failure of VMs to come online in an IPv4 network-based overlay environment" to troubleshoot why the vPort has not come online.
• If both vPorts come online normally, go to "Verifying that the correct VXLAN-related configuration exists on the leaf switch."
2. Select Virtual Port > vPorts on the Web interface of IMC Orchestrator.
Verify that the two vPorts that cannot communicate exist and that their states are UP. As shown in the following figure, a vPort in the Down state is abnormal and not online, and a vPort in the UP state is normal and already online.
Figure 2 Checking the states of the vPorts that cannot communicate
3. Verify that the MAC address, IP address, host IP address, and VTEP address of the online vPort match the actual values. Any mismatch indicates that the vPort is not correctly online. If all the values match, the vPort is considered online. (A quick way to read the actual values from a Linux VM is sketched after this list.)
4. If you find that a vPort has not come online normally, see "Troubleshooting the failure of VMs to come online in an IPv4 network-based overlay environment" for troubleshooting.
5. If the vPort information matches the actual values, go to "Verifying that the correct VXLAN-related configuration exists on the leaf switch."
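As a quick cross-check of the actual values compared in step 3, assuming a Linux-based VM (the interface name eth0 is an assumption), you can read the VM's MAC and IP addresses directly and compare them with the vPort entry shown on the controller:
[root@vm1 ~]# ip -br link show eth0      # shows the interface MAC address
[root@vm1 ~]# ip -br addr show eth0      # shows the interface IP address(es)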
Verifying that the correct VXLAN-related configuration exists on the leaf switch
Verify that the correct VXLAN-related configuration exists on the leaf switch: If no correct VXLAN and
VSI configurations exist, check the configuration of IMC Orchestrator or manually modify the
configuration. If correct VXLAN and VSI tunnel configurations exist but forwarding still fails, go to
Verifying that the correct VXLAN tunnel-related configuration exists on the leaf switch.
1. Verify that the correct VXLAN-related configuration exists on the leaf switch.
a. Log in to the command line interface of the leaf switch and execute the display
current-configuration command to check that the correct VXLAN-related
configuration exists on the leaf switch, including both L2VPN function and VTEP function
enabled, and VSI configuration. As shown below in the shaded fields, the device has
enabled the L2VPN function and VTEP function, the VSI name is SDN_VSI_101, the
VXLAN ID is 101, and the EVPN encapsulation is VXLAN.
[user1]display current-configuration
#
l2vpn enable
vxlan tunnel arp-learning disable
#
vsi SDN_VSI_101
gateway vsi-interface 2
statistics enable
arp suppression enable
vxlan 101
evpn encapsulation vxlan
route-distinguisher auto
vpn-target auto export-extcommunity
vpn-target auto import-extcommunity
#
vtep enable
b. If some of the above configurations do not exist, further check the corresponding
configuration on the IMC Orchestrator controller (if the configuration of the leaf switch is
deployed by IMC Orchestrator), or manually modify the configuration on the leaf switch.
2. Check the corresponding configuration on IMC Orchestrator.
a. For the VXLAN-related configuration of Layer 2 forwarding in EVPN networking, the key
configuration corresponding to IMC Orchestrator is the vNetwork configuration and the
VLAN-VXLAN mapping configuration. Log in to the Web interface of IMC Orchestrator.
Select Automation > Data Center Networks > Tenant Network > Virtual Network. You
can see the relevant network configuration. Configure the correct Segment ID (that is,
VXLAN ID). In this example, the vNetwork name is Net1, and its VXLAN ID is 101.
Log in to the Web interface of IMC Orchestrator. Select Automation > Data Center
Networks > Resource Pools > VNID Pools > VLAN-VXLAN Mappings. Click Ranges in
the Mapping Rules column of the corresponding mapping table. You can see that the
VLAN 100-109 are mapped to VXLAN 100-109. The VXLAN range here must include the
VXLAN ID of the previously configured vNetwork. As shown in the following figure:
b. Click Apply to Interfaces to check that the mapping table has been bound to the
corresponding AC access interface of the leaf switch.
Verifying that the correct VXLAN tunnel-related configuration exists on the leaf switch
Verify that the correct VXLAN tunnel-related configuration exists on the leaf switch: In EVPN
networking, the tunnel configuration is automatically generated by the hardware switch based on the
received EVPN BGP type-3 routes. If no tunnel-related configuration exists, see "Checking the
EVPN BGP neighbor state between the leaf switch and the spine BGP route reflector" to
troubleshoot the establishment of EVPN BGP neighbors. If the tunnel-related configuration has been
generated, see "Verifying that the source and destination VTEP addresses used to establish the
VXLAN tunnel between leaf switches can communicate normally."
Log in to the command line interface of the leaf switch and execute the display interface
tunnel command to check that the correct VXLAN tunnel-related configuration exists on the leaf
switch, and check that the tunnel is established. As shown below in the shaded fields, the VXLAN
tunnel of the device is Tunnel0, the source IP address is 110.1.1.2, and the destination IP address is
110.1.1.1.
[user]display interface Tunnel
Tunnel0
Current state: UP
Line protocol state: UP
Description: Tunnel0 Interface
Bandwidth: 64 kbps
Maximum transmission unit: 1464
Internet protocol processing: Disabled
Last clearing of counters: Never
Tunnel source 110.1.1.2, destination 110.1.1.1
Tunnel protocol/transport UDP_VXLAN/IP
Last 300 seconds input rate: 0 bytes/sec, 0 bits/sec, 0 packets/sec
Last 300 seconds output rate: 0 bytes/sec, 0 bits/sec, 0 packets/sec
Input: 0 packets, 0 bytes, 0 drops
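Depending on the switch software version, the display vxlan tunnel command, if supported, gives a compact view of which tunnel interfaces are associated with each VXLAN ID and their states; a minimal sketch:
[user] display vxlan tunnel
In this example, it should show VXLAN 101 associated with Tunnel0 in the UP state.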
Verifying that the source and destination VTEP addresses used to establish the VXLAN
tunnel between leaf switches can communicate normally
Verify that the source and destination VTEP IP addresses used to establish the VXLAN tunnel
between leaf switches can communicate normally: If the VTEP IP addresses are unreachable, go to
Checking the EVPN BGP neighbor state between the leaf switch and the spine BGP route reflector to
check the establishment of EVPN BGP neighbors. If the VTEP IP addresses are reachable, go to
Checking the forwarding mode of the leaf switch (local proxy ARP or ARP flood suppression).
Log in to the command line interface of the leaf switch and execute the ping -a Source VTEP IP
address Destination VTEP IP address command to check that the source and destination
VTEP addresses used to establish the VXLAN tunnel can communicate normally. In the output below, the source IP address of the VXLAN tunnel is 110.1.1.2 and the destination IP address is 110.1.1.1. A result of 0.0% packet loss indicates that no packets are lost between the source and destination VTEP IP addresses, so communication is normal and the two leaf switches are reachable. If the packet loss value is not 0.0%, communication between the VTEPs is abnormal. Further check the EVPN BGP neighbor state between the leaf switch and the spine BGP route reflector.
[user] ping -a 110.1.1.2 110.1.1.1
Ping 110.1.1.1 (110.1.1.1) from 110.1.1.2: 56 data bytes, press CTRL_C to break
56 bytes from 110.1.1.1: icmp_seq=0 ttl=255 time=3.307 ms
56 bytes from 110.1.1.1: icmp_seq=1 ttl=255 time=3.033 ms
56 bytes from 110.1.1.1: icmp_seq=2 ttl=255 time=2.828 ms
56 bytes from 110.1.1.1: icmp_seq=3 ttl=255 time=3.042 ms
56 bytes from 110.1.1.1: icmp_seq=4 ttl=255 time=3.100 ms
--- Ping statistics for 110.1.1.1 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 2.828/3.062/3.307/0.153 ms
Checking the EVPN BGP neighbor state between the leaf switch and the spine BGP route
reflector
Verify that the EVPN BGP neighbor is established between the leaf switch and the spine BGP route
reflector. If the neighbor state is abnormal, further check the BGP configuration and underlay links
and routes. If the neighbor state is normal, check the loopback interface configuration of the leaf
switch. After ensuring that the VTEP IP addresses are reachable, go to Checking the forwarding
mode of the leaf switch (local proxy ARP or ARP flood suppression).
Log in to the command line interface of the leaf switch and execute the display bgp peer l2vpn
evpn command to check the EVPN BGP neighbor state between the leaf switch and the spine BGP
route reflector. As shown below in the shaded fields, the EVPN BGP neighbor state between the leaf device and the spine route reflector (RR) with the IP address 110.1.1.1 is Established, which is normal. If the value in the State column is not Established, the EVPN BGP neighbor state is abnormal.
[user] display bgp peer l2vpn evpn
BGP local router ID: 110.1.1.2
Local AS number: 1000
Total number of peers: 1 Peers in established state: 1
* - Dynamically created peer
Peer AS MsgRcvd MsgSent OutQ PrefRcv Up/Down State
110.1.1.1 1000 94 87 0 1 01:07:20 Established
If the EVPN BGP neighbor state is abnormal, first check that the IP addresses used to establish the
EVPN BGP neighbor between the leaf switch and the spine BGP route reflector are underlay
reachable. First, check the connectivity of the underlay links. Then, log in to the leaf switch command
line interface, and execute the display ip routing-table XXXX (where XXXX is the IP address
used by the spine device to establish the EVPN BGP neighbor) command to check that a reachable underlay route exists between the leaf switch and the spine BGP route reflector. As shown below in the shaded fields, an underlay route to the spine route reflector with the IP address 110.1.1.1 exists on the leaf device, the next hop is 11.11.11.2, and the outbound interface is XGE1/0/48. If no underlay route exists, or the next hop or outbound interface is wrong, check the underlay network first. If the underlay route is correct, further check the EVPN BGP configuration.
[user]display ip routing-table 110.1.1.1
Summary count : 1
Destination/Mask Proto Pre Cost NextHop Interface
110.1.1.1/32 O_INTRA 10 1 11.11.11.2 XGE1/0/48
Log in to the command line interfaces of the leaf switch and the spine switch respectively, and
execute the display current-configuration | begin bgp command to check that the EVPN
BGP configurations of the leaf switch and the spine switch are correct.
The key BGP configuration of the leaf switch is shown in shaded fields. In this example, the BGP AS
number is 1000, the address of the peer spine switch of the leaf switch is 110.1.1.1, and the AS
number is also 1000:
[user] display current-configuration | begin bgp
bgp 1000
peer 110.1.1.1 as-number 1000
peer 110.1.1.1 connect-interface LoopBack1
#
address-family ipv4 unicast
peer 123.1.1.1 enable
#
address-family l2vpn evpn
peer 110.1.1.1 enable
The key BGP configuration of the spine switch is shown in shaded fields. In this example, the BGP
AS number is 1000, the address of the peer leaf switch of the spine switch is 110.1.1.2, and the AS
number is also 1000. Different from the leaf switch, the BGP configuration of the spine switch has an additional peer 110.1.1.2 reflect-client command, which sets the leaf switch as a route reflector client:
[user]display current-configuration | begin bgp
bgp 1000
peer 110.1.1.2 as-number 1000
peer 110.1.1.2 connect-interface LoopBack1
#
address-family ipv4 unicast
peer 3.1.1.1 enable
peer 3.1.1.1 next-hop-local
#
address-family l2vpn evpn
peer 110.1.1.2 enable
peer 110.1.1.2 reflect-client
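If the neighbor is stuck in a state other than Established, more detail about the session can help pinpoint the cause (for example, a mismatched peer address or connect-interface). The following is a sketch only; verify that the verbose form is supported on your software version:
[user] display bgp peer l2vpn evpn 110.1.1.1 verbose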
After checking the EVPN BGP configuration, check the configuration of the loopback interface where
the source and destination VTEP addresses of the VXLAN tunnel are established. If the loopback
interface address is configured incorrectly or the underlay routing protocol is not enabled on it, the VTEP address may be unreachable. Log in to the command line interfaces of the leaf switch and spine switch
respectively, and execute the display current-configuration interface loopback X (X
is the loopback interface number) command to check the loopback interface configuration. The key
configuration is shown in the shaded fields below. Combined with the previously observed source
and destination VTEP IP addresses configured for the VXLAN tunnel, confirm that the loopback
interface address configuration is correct:
[user] display current-configuration interface LoopBack 1
#
interface LoopBack1
ip address 110.1.1.2 255.255.255.255
ospf 1 area 0.0.0.0
#
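Because the loopback address in this example is advertised through OSPF (ospf 1 area 0.0.0.0), also confirm that the underlay OSPF adjacencies are up. This assumes OSPF is the underlay IGP, as in the sample configuration:
[user] display ospf peer
Each underlay neighbor should be in Full state; if not, fix the underlay interface or OSPF configuration before rechecking VTEP reachability.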
Checking the forwarding mode of the leaf switch (local proxy ARP or ARP flood suppression)
Determine whether the forwarding mode of the leaf switch is local proxy ARP or ARP flood suppression, which can be confirmed from the command line configuration on the hardware switch. In the case of ARP flood suppression, go to Verifying that the MAC address entry of the faulty VM is established on the leaf switch. In the case of local proxy ARP, go to Verifying that the ARP entry of the faulty VM is established on the leaf switch.
Log in to the command line interface of the leaf switch and execute the display
current-configuration | include arp command to check the forwarding mode of the leaf
switch. As shown below, the shaded field arp suppression enable indicates that the forwarding mode
of the leaf device is ARP flood suppression. If the shaded field local-proxy-arp enable appears, the
forwarding mode of the leaf device is local proxy ARP.
ARP flood suppression:
[user] display current-configuration | include arp
vxlan tunnel arp-learning disable
arp suppression enable
Local proxy ARP:
[user] display current-configuration | include arp
vxlan tunnel arp-learning disable
local-proxy-arp enable
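In the local proxy ARP case, the local-proxy-arp enable command is typically configured under the gateway VSI interface. As a sketch, assuming the gateway VSI interface number 2 from the earlier VSI configuration (it may differ in your network), you can check it there directly:
[user] display current-configuration interface Vsi-interface 2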
Verifying that the MAC address entry of the faulty VM is established on the leaf switch
Verify that the MAC address table entry of the faulty VM is established on the leaf switch.
In the case of ARP flood suppression, the leaf switch needs to query the MAC address table entry to forward Layer 2 traffic. If no matching entry exists, verify that the corresponding AC interface configuration
still exists and the MAC address is not aged. After confirming that the MAC address entry exists, go
to Verifying that the ARP suppression entry of the faulty VM is established on the leaf switch.
1. Verify that the AC interface configuration is correct.
Log in to the command line interface of the leaf switch, and execute the display
current-configuration interface XXX (where XXX is the name of the uplink AC
interface of the faulty VM) command to check the AC interface configuration of the leaf switch. In the example below, the AC access interface on the leaf switch is Ten-GigabitEthernet1/2/5. With vtep access port enabled, the shaded fields show that the AC interface configuration maps packets carrying the outer VLAN tag 101 to the VXLAN instance (VSI) SDN_VSI_101:
[user]display current-configuration interface Ten-GigabitEthernet 1/2/5
#
interface Ten-GigabitEthernet1/2/5
port link-mode bridge
port link-type trunk
port trunk permit vlan 1 101
vtep access port
#
service-instance 101
encapsulation s-vid 101
xconnect vsi SDN_VSI_101
If the vtep access port configuration does not exist, manually configure it on the leaf switch. If the configuration under service-instance does not exist and the leaf switch works in forwarding preconfiguration mode, check the relevant configuration on IMC Orchestrator (see Verifying that the correct VXLAN-related configuration exists on the leaf switch). If the leaf switch does not work in forwarding preconfiguration mode, verify that the vPort is normal (see Verifying that the states of the vPorts that cannot communicate are normal).
To determine whether the switch works in forwarding preconfiguration mode, log in to the IMC Orchestrator page, find the leaf switch by selecting Resource Pools > Devices > Physical Devices, and click the Edit button in the Actions column to enter the VXLAN page. If VXLAN Service Preconfiguration is set to Yes, the switch is in forwarding preconfiguration mode, and the AC interface and VXLAN-related configuration can be deployed normally without the vPort being online. Otherwise, the switch is in non-preconfiguration mode, and the AC interface and VXLAN-related configuration can be deployed normally only after the vPorts come online.
2. Verify that the MAC address entry for the VM is not aged.
Log in to the command line interface of the leaf switch and execute the display l2vpn
mac-address command to check that the MAC address of the faulty VM still exists on the leaf
switch. As shown below, for the VM corresponding to VSI SDN_VSI_101 on the leaf switch, the shaded field indicates that the MAC address entry for the VM with MAC address 0050-5683-115f exists and has not aged out on the switch:
[user] display l2vpn mac-address
MAC Address State VSI Name Link ID/Name Aging
0050-5683-115f Dynamic SDN_VSI_101 XGE1/2/5 Aging
--- 1 mac address(es) found ---
If the VM MAC address table entry cannot be observed normally through the above steps, you
need to actively ping the gateway on the VM so that the leaf switch can learn the MAC address
of the VM again after receiving the packet sent by the VM.
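On a switch with many MAC address entries, you can narrow the output to the affected VSI. The VSI name below is the one used in this example:
[user] display l2vpn mac-address vsi SDN_VSI_101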
Verifying that the ARP suppression entry of the faulty VM is established on the leaf switch
Verify that the ARP suppression entry of the faulty VM is established on the leaf switch. When the
leaf switch replies to the ARP request on behalf of the VM, it queries the ARP suppression table. If no
corresponding table entry exists, verify that the ARP suppression configuration exists on the switch and IMC Orchestrator and that the AC interface configuration exists. After troubleshooting and confirming that
the ARP suppression table entry exists, go to Verifying the security policy configuration on the IMC
Orchestrator controller for the two vPorts that need to communicate.
1. Verify that the ARP suppression entry is established.
Log in to the command line interface of the leaf switch and execute the display arp
suppression vsi command to check the ARP suppression table entry on the leaf switch: In
the case of ARP flood suppression, the ARP request of the VM is replied to by the leaf switch
according to the ARP suppression table. As shown below, for the VM corresponding to VSI
SDN_VSI_101 on the leaf switch, the shaded field indicates that the VM with the MAC address
0050-5683-115f has successfully established the ARP suppression table entry:
[user] display arp suppression vsi
IP address MAC address Vsi Name Link ID Aging
1.1.1.2 0050-5683-115f SDN_VSI_101 0x0 23
If the VM's ARP suppression entry cannot be observed through the above steps, first check the
AC interface configuration on the leaf switch (Verifying that the MAC address entry of the faulty
VM is established on the leaf switch). The aging time of ARP suppression table entries is 25
minutes, which cannot be modified. To prevent the VM ARP suppression table entry on the leaf
switch from aging, you can actively ping the gateway on the VM so that the leaf switch can learn
the ARP information of the VM again after receiving the ARP request from the VM.
2. Check the ARP suppression configuration.
If the leaf switch still cannot learn the ARP suppression entry of the VM, check the ARP
suppression configuration. First, enter the VSI view, and then execute the display this
command to check that ARP suppression is configured in the VSI view. If the shaded field arp suppression enable is displayed, the ARP suppression configuration is normal.
[user] vsi SDN_VSI_101
[user-vsi-SDN_VSI_101] display this
#
vsi SDN_VSI_101
gateway vsi-interface 2
statistics enable
arp suppression enable
vxlan 123
evpn encapsulation vxlan
route-distinguisher auto
vpn-target auto export-extcommunity
vpn-target auto import-extcommunity
#
return
If the ARP suppression configuration is not normal, log in to the IMC Orchestrator page. Select
Automation > Data Center Networks > Fabrics > Fabrics. Select the fabric you are using,
which is fabric1 in this example. Click the Edit button on the right, and check that Reply By
Device is selected for the ARP Protocol column in the Settings page. If it is not selected, IMC
Orchestrator will not deploy the ARP suppression configuration to the leaf switch.
Verifying that the ARP entry of the faulty VM is established on the leaf switch
Verify that the ARP table entry of the faulty VM is established on the leaf switch. When the leaf switch
replies to the ARP request on behalf of the VM, it queries the ARP table. If no matching entry exists,
verify that the AC interface configuration exists and that the ARP entry is not aged on the leaf switch.
After confirming that the ARP entry exists, go to Verifying that the host table entry of the faulty VM is
established on the leaf switch.
Log in to the command line interface of the leaf switch and execute the display arp command to
check the ARP table entry on the leaf switch. As shown below, for the leaf switch, the shaded field
indicates that the VM with the MAC address of 0050-5683-115f has successfully established the
ARP table entry:
[user] display arp
Type: S-Static D-Dynamic O-Openflow R-Rule M-Multiport I-Invalid
IP address MAC address VID Interface/Link ID Aging Type
11.11.11.2 50da-00f1-e9a3 N/A XGE1/0/48 609 D
1.1.1.2 0050-5683-115f 1 XGE1/2/5 696 D
77.77.77.254 6805-ca21-d6e5 2080 XGE1/0/1 1019 D
If the VM ARP table entry cannot be observed through the above steps, first check the AC interface
configuration on the leaf switch (see Verifying that the MAC address entry of the faulty VM is
established on the leaf switch). To prevent the VM's ARP entry on the leaf switch from aging, you can
actively ping the gateway on the VM so that the leaf switch can learn the ARP information of the VM
again after receiving the ARP request from the VM.
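On a switch with a large ARP table, you can filter the output for the VM's MAC address by using the same include modifier shown earlier. The MAC address below is the one used in this example:
[user] display arp | include 0050-5683-115f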
Verifying that the host table entry of the faulty VM is established on the leaf switch
Verify that the host routing table entry of the faulty VM is established on the leaf switch. In the case of
local proxy ARP, the leaf switch shall query the host routing table entry to forward Layer 2 traffic. If no
corresponding table entry exists, verify that the VSI, VPN instance, L3VNI, and other related
configurations on the leaf switch are correct and that a correct tunnel is mapped to the VSI. After
confirming that the host routing table entry of the VM exists, go to Verifying the security policy
configuration on the IMC Orchestrator controller for the two vPorts that need to communicate.
1. Verify that the host routing table entry is established.
Log in to the command line interface of the leaf switch, and execute the display ip
routing-table vpn-instance XXX (where XXX is the VPN instance name) command to
check the 32-bit host table entry on the leaf switch. As shown below, for the leaf switch, the
shaded field indicates that the VM with the IP address of 1.1.1.2/32 has successfully
established a host route table entry on the switch:
[user] display ip routing-table vpn-instance VPN123
Destinations : 13 Routes : 13
Destination/Mask Proto Pre Cost NextHop Interface
0.0.0.0/32 Direct 0 0 127.0.0.1 InLoop0
55.0.0.0/24 Direct 0 0 55.0.0.254 Vsi1
55.0.0.0/32 Direct 0 0 55.0.0.254 Vsi1
1.1.1.2/32 BGP 255 0 110.1.1.2 Vsi0
2. Check leaf switch configuration.
If you cannot observe the VM host table entry normally through the above steps, first confirm
that the VPN instance, VSI configuration, VSI interface configuration, and L3VNI on the leaf
switch are correct. Log in to the command line interface of the leaf switches corresponding to
the two VMs, respectively, and execute the display current-configuration command
to check that correct related configurations exist on the leaf switch. As shown below in the
shaded fields, the VPN instance name of the device is VPN101, the RD value is 1:10001, the import RT values are 0:10001 and 1:10001, the export RT value is 1:10001, the VSI name is SDN_VSI_101, the VXLAN ID is 101, the subnet gateway address is 1.1.1.1, the distributed gateway is enabled, and the L3 VNI is 11.
[user] display current-configuration
ip vpn-instance VPN101
route-distinguisher 1:10001
description SDN_VRF_fa7585ec-893f-4309-ba7d-7ced352c96a7
address-family ipv4
vpn-target 0:10001 1:10001 import-extcommunity
vpn-target 1:10001 export-extcommunity
address-family evpn
vpn-target 0:10001 1:10001 import-extcommunity
vpn-target 1:10001 export-extcommunity
vsi SDN_VSI_101
gateway vsi-interface 1
statistics enable
arp suppression enable
flooding disable all
vxlan 101
evpn encapsulation vxlan
route-distinguisher auto
vpn-target auto export-extcommunity
vpn-target auto import-extcommunity
interface Vsi-interface0
description SDN_VRF_VSI_Interface_11
ip binding vpn-instance VPN101
l3-vni 11
interface Vsi-interface1
description SDN_VSI_Interface_123
ip binding vpn-instance VPN123
ip address 1.1.1.1 255.255.255.0 sub
mac-address 6805-ca21-d6e5
distributed-gateway local
If some of the above parameters are missing after the display current-configuration
command is executed, further check the corresponding configuration on IMC Orchestrator (if
the configuration of the leaf switch is deployed by IMC Orchestrator), or manually modify the
configuration of the leaf switch.
3. Check the corresponding configuration on IMC Orchestrator.
First, check IMC Orchestrator configuration (see Verifying that the correct VXLAN-related
configuration exists on the leaf switch). Then log in to the IMC Orchestrator Web interface. Click
Automation > Data Center Networks > Tenant Network > Virtual Router > Virtual Router.
Select the corresponding vRouter. Click the Edit button in the Actions column on the right, and
you can see the relevant configuration. Note that you must select the correct VDS and tenant,
and configure the correct VRF name. In this example, the vRouter name is VPN101, the
segment ID (the L3 VNI) is 11, and the VRF name is VPN101.
After troubleshooting and modification as mentioned above, click the Edit button in the Actions
column for the vRouter to view the subnet information. Here, note that if you have configured a
subnet in the vNetwork configuration before, you must select the corresponding subnet here. If no corresponding subnet is selected, click the Add button to add it, as shown in the following figure:
If the leaf switch still does not have the host route of the VM after troubleshooting, further check
that VSI is mapped to the corresponding VXLAN tunnel.
4. Check that VSI is mapped to the corresponding VXLAN tunnel.
Log in to the command line interfaces of the leaf switches corresponding to the two VMs, and
execute the display l2vpn vsi name XXX verbose command (where XXX is the VSI name; in this example, it is SDN_VSI_101) to check that the VSI on the leaf switch is mapped to the
corresponding VXLAN tunnel. As shown below in the shaded fields, the VXLAN tunnel of the
device managed by the VSI named SDN_VSI_101 is Tunnel0, and the VXLAN ID is 101.
[user] display l2vpn vsi name SDN_VSI_101 verbose
VSI Name: SDN_VSI_101
VSI Index : 1
VSI State : Up
MTU : 1500
MAC Learning : Enabled
MAC Table Limit : -
MAC Learning rate : -
Drop Unknown : -
Flooding : Disabled
Statistics : Enabled
Input Statistics :
Octets :17206
Packets :235
Errors :0
Discards :0
Output Statistics :
Octets :17378
Packets :283
Errors :0
Discards :0
Gateway Interface : VSI-interface 1
VXLAN ID : 101
Tunnels:
Tunnel Name Link ID State Type Flood proxy
Tunnel0 0x5000000 UP Auto Disabled
ACs:
AC Link ID State Type
XGE1/2/5 srv101 0 UP Manual
If you find that the VSI is not mapped to the corresponding VXLAN tunnel through the above
troubleshooting steps, further execute the display bgp l2vpn evpn command on the two
leaf switches to check that the related VPN and VSI routing information is available. As shown below in the shaded fields, an EVPN type-3 route with next hop 110.1.1.1 (the remote tunnel endpoint) exists on the leaf switch, and routing information is available for the VPN instance VPN101 and the VSI interface with the address 1.1.1.1.
[user] display bgp l2vpn evpn
BGP local router ID is 155.1.1.1
Status codes: * - valid, > - best, d - dampened, h - history
s - suppressed, S - stale, i - internal, e - external
a - additional-path
Origin: i - IGP, e - EGP, ? - incomplete
Total number of routes from all PEs: 2
Route distinguisher: 1:101
Total number of routes: 4
Network NextHop MED LocPrf PrefVal Path/Ogn
* > [2][0][48][0023-8914-3917][0][0.0.0.0]/104
0.0.0.0 0 100 32768 i
* > [2][0][48][0023-8914-3917][32][55.0.0.3]/136
0.0.0.0 0 100 32768 i
* >i [3][0][32][110.1.1.1]/80
110.1.1.1 0 100 0 i
* > [3][0][32][155.1.1.1]/80
0.0.0.0 0 100 32768 i
Route distinguisher: 3:11(VPN101)
Total number of routes: 3
Network NextHop MED LocPrf PrefVal Path/Ogn
* > [5][0][24][55.0.0.0]/80
0.0.0.0 0 100 32768 i
* 55.0.0.254 0 32768 i
* > [5][0][32][1.1.1.1]/80
127.0.0.1 0 32768 i
If there is no related routing information, check the EVPN BGP neighbor state between the leaf
switch and the spine BGP route reflector (see Checking the EVPN BGP neighbor state between
the leaf switch and the spine BGP route reflector).
Verifying the security policy configuration on the IMC Orchestrator controller for the two
vPorts that need to communicate
1. Verify that IMC Orchestrator is configured with security policies on the two vPorts that need to
communicate: If the vPorts are configured with a security policy, modify the security policy to
permit the source and destination addresses of the vPorts to communicate with each other.
Alternatively, remove the security policy configuration. If the vPorts are not configured with a
security policy, or after confirming that the security policy permits traffic, go to Checking other
devices that the traffic passes through along the forwarding path, and confirming where the
traffic is lost.
2. As shown in the following figure, log in to the IMC Orchestrator Web interface. Click
Automation > Data Center Networks > Tenant Network > Virtual Port > vPorts. Select the
port to be communicated in the vPort list. Click the Edit button in the Actions column for the
vPort to check that the vPort refers to a security policy.
3. If a security policy is bound, modify the security policy rule to permit traffic. For example, assume that the address 1.1.1.2 needs to communicate with 1.1.1.3. In the security policy applied to the vPort with address 1.1.1.2, configure ACL rules with the Permit action for both directions: egress 1.1.1.3 and ingress 1.1.1.3, or egress 0.0.0.0 and ingress 0.0.0.0. Rules in both the ingress and egress directions are required; any other combination does not permit the traffic correctly.
4. Log in to the IMC Orchestrator Web interface. Click Automation > Data Center Networks >
Tenant Network > Virtual Port > Security Policies. Click the Edit button in the Actions
column for the security policy. You can see the configured security policy ACL rules, as shown
below:
5. If the referenced security policy has been corrected or removed, or no security policy is configured at all, but the traffic is still blocked, go to "Checking other devices that the traffic passes through along the forwarding path, and confirming where the traffic is lost."
Checking other devices that the traffic passes through along the forwarding path, and
confirming where the traffic is lost
Check other devices (network adapter of the server or network devices) that the traffic passes
through along the forwarding path, confirm where the traffic is lost, and sort through the possible
causes of packet loss on the intermediate link.
1. For the network adapter of the server, you can use a packet capture command to confirm whether the packet is sent or received, for example:
[root@jumpcontroller ~]# tcpdump -i eth0 host 99.1.1.1 -w xxx.pcap
The -i parameter is followed by the name of the network adapter on which to capture packets, which can be the name of a physical network adapter or a vSwitch (eth0 in this example). The host parameter specifies the source or destination IP address of the packets to capture (99.1.1.1 in this example). The -w parameter writes the captured packets to a file (xxx.pcap in this example). A further capture sketch for VXLAN-encapsulated traffic on the underlay side is provided at the end of this section.
If the above command shows that the packet has not been received or sent, the cause may be that the network adapter MTU is too small and the network adapter does not fragment packets by default. Check the MTU of the network adapters and interfaces of all software forwarding devices in the path from the source VM to the destination VM to make sure the MTU is not too small, including VM vPorts, the service uplink ports (physical network adapters) of the hosts where the VMs are located, and software forwarding devices (such as routers) in the forwarding path.
View the network adapter MTU of an ESXi server with the following commands:
~ # esxcfg-vmknic -l
~ # esxcfg-nics -l
View the network adapter MTU of a Linux server with the following command:
[root@jumpcontroller ~]# ifconfig eth0
2. For the network devices along the path, if the device is an HPE network device, you can use the following methods to collect traffic statistics and perform mirrored packet capture to confirm whether packets are lost.
The traffic statistics method is as follows:
a. Configure ACL, and assume that the traffic source address is 1.1.1.1 and the destination
address is 2.2.2.2:
#
acl number 3333
rule 1 permit ip source 1.1.1.1 0 destination 2.2.2.2 0
#
b. Configure the traffic classifier test and match ACL 3333:
#
traffic classifier test operator and
if-match acl 3333
#
c. Configure the traffic behavior test, and the behavior is to count the number of packets:
#
traffic behavior test
accounting packet
#
d. Configure the QoS policy test, and apply the configured traffic classifier and traffic behavior:
#
qos policy test
classifier test behavior test
#
e. Apply the QoS policy test to the inbound and outbound directions of the relevant interface
(interface Ten-GigabitEthernet0/0/37) to count the traffic:
#
interface Ten-GigabitEthernet0/0/37
port link-mode bridge
qos apply policy test inbound
qos apply policy test outbound
#
f. View QoS statistics accounting:
[user] display qos policy interface Ten-GigabitEthernet 0/0/37
Interface: Ten-GigabitEthernet0/0/37
Direction: Inbound
Policy: test
Classifier: test
Operator: AND
Rule(s) :
If-match acl 3333
Behavior: test
Accounting enable:
3 (Packets)
Interface: Ten-GigabitEthernet0/0/37
Direction: Outbound
Policy: test
Classifier: test
Operator: AND
Rule(s) :
If-match acl 3333
Behavior: test
Accounting enable:
3 (Packets)
Configure interface mirroring:
g. Configure local interface mirroring group 1:
#
mirroring-group 1 local
#
h. Configure the mirroring interface of mirroring group 1. Take Ten-GigabitEthernet 0/0/37
inbound and outbound directions as an example:
[user]mirroring-group 1 mirroring-port Ten-GigabitEthernet 0/0/37 both
i. Configure the monitor interface of mirroring group 1. Take Ten-GigabitEthernet 0/0/38 as an
example:
[user]mirroring-group 1 monitor-port Ten-GigabitEthernet 0/0/38
j. Check the status of mirroring group 1. All inbound and outbound traffic of Ten-GigabitEthernet0/0/37 is mirrored to Ten-GigabitEthernet0/0/38 for packet capture analysis:
[user] dis mirroring-group 1
Mirroring group 1:
Type: Local
Status: Active
Mirroring port:
Ten-GigabitEthernet0/0/37 Both
Monitor port: Ten-GigabitEthernet0/0/38
If you locate a server network adapter or a network device that loses packets using the above methods, sort through the possible causes of packet loss on that device. If you cannot locate where the packets are lost using the above methods, contact Technical Support for help.
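As a supplement to the server-side capture in step 1: when the capture point is on the underlay side (for example, the physical uplink of the host), the traffic is VXLAN-encapsulated, so you can filter on the standard VXLAN UDP port 4789 and then read the saved file back for analysis. The interface name is the same illustrative value used in step 1; the file name vxlan.pcap is likewise only an example:
[root@jumpcontroller ~]# tcpdump -i eth0 udp port 4789 -w vxlan.pcap    # capture VXLAN-encapsulated traffic
[root@jumpcontroller ~]# tcpdump -nn -r vxlan.pcap                      # read the capture back without name resolution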
Contacting Technical Support for help
If the issue persists, collect IMC Orchestrator diagnostic log, system log, and operation log
information, and then contact Technical Support for help. See "Appendix A Collecting logs" for
information collection methods.