Aruba IMC Orchestrator 6.3 Solution Emergency Response and Recovery User guide

Type
User guide
IMC Orchestrator 6.3 Solution
Emergency Response and Recovery Guide
The information in this document is subject to change without notice.
© Copyright 2023 Hewlett Packard Enterprise Development LP
Contents
About emergency response and recovery ··············································2
Definition ···································································································································· 2
Scenarios ··································································································································· 2
Principles ··································································································································· 2
Emergency response and recovery workflow ··········································4
Report the fault ······························································································ 4
Collect fault information ················································································································· 4
Preliminarily locate the fault by using a tool ······················································································· 5
Continue with troubleshooting based on the scenario ·········································································· 5
Seek help ······································································································· 5
View the troubleshooting result ······································································································· 5
Record emergency maintenance information ····················································································· 5
Preliminarily locate the fault by using a tool ············································6
Underlay network check ················································································································ 6
Loop detection ····························································································································· 6
Radar detection ··························································································································· 6
AC interface traffic statistics ··········································································································· 7
Device capacity management ········································································································· 7
Controller configuration auditing ······································································································ 7
Network overlay scenario ···································································9
Network topology ························································································································· 9
Traffic model ······························································································································· 9
Troubleshooting and recovery procedures for common issues ···························································· 10
Large-scale production service failure ····················································································· 10
Large-scale production service failure on one leaf ····································································· 11
Failure of some production services ························································································ 11
Failure of east-west Layer 2 production services across leaf devices············································· 11
Failure of the gateway, north-south, and cross-VPN Layer 3 production services ····························· 13
Network overlay + device incorporation scenario ··································· 15
Network topology ······················································································································· 15
Traffic model ····························································································································· 15
Troubleshooting and recovery procedures for common issues ···························································· 16
Large-scale failure of production services ················································································ 16
Failure of some production services ························································································ 17
Context is deleted by mistake or lost ······················································································· 17
Network overlay + PBR service chain scenario ····································· 18
Network topology ······················································································································· 18
Traffic model ····························································································································· 18
Troubleshooting and recovery procedures for common issues ···························································· 20
Failure of a multi-hop service chain ························································································· 20
Failure of a single-hop service chain ······················································································· 20
Network overlay + multiple service egresses scenario ···························· 21
Network topology ······················································································································· 21
Traffic model ····························································································································· 22
Troubleshooting and recovery procedures for common issues ···························································· 22
Failure to access external network 1 (SNAT) ············································································ 22
Failure to access external network 2 (without SNAT) ·································································· 23
Failure to access external network 3 (without FW egress) ··························································· 23
Network overlay multi-fabric scenario ················································· 25
Network topology ······················································································································· 25
Traffic model ····························································································································· 26
Troubleshooting and recovery procedures for common issues ···························································· 26
Cross-fabric east-west service failure ······················································································ 26
Cross-fabric north-south service failure ···················································································· 26
Scenario in which the controller incorporated device configuration is lost ··· 28
Troubleshooting and recovery procedures for common issues ···························································· 28
Emergency recovery for loss of key device configuration caused by software failure or misoperation ·· 28
About emergency response and recovery
Definition
Emergency response and recovery is a maintenance measure for sudden failures. It can quickly
remove faults, recover services, and reduce losses when the system encounters a sudden fault
such as an equipment power failure or a service interruption.
Scenarios
Main causes for emergencies include hardware, software, and line failures, misoperation, and
natural disasters. Scenarios that trigger emergency response and recovery include:
User complaints
Users complain about sudden failures or interruptions in data center services. This is the main
scenario that triggers emergency response and recovery.
System alarms
A system component generates an alarm that indicates large-scale service failures.
Natural disasters
Some devices are powered off to protect the system when a natural disaster such as
earthquake, cold wave, flood, or fire occurs. The power supply will be resumed after the
disaster.
Principles
Emergencies often result in severe consequences such as massive network failure and service
interruption of application systems. To improve emergency handling efficiency and reduce losses,
fully consider and follow the basic principles below before performing emergency response and
recovery:
To ensure stable system operation and minimize the occurrence of emergencies, perform
regular system inspection and maintenance.
The core objective of emergency response and recovery is to recover user services as soon as
possible. To improve emergency handling efficiency, operations and maintenance staff must
receive necessary training, learn this document, and gain basic methods and skills for handling
emergencies before they start to work.
Develop emergency response plans in advance based on this document, and require operations
and maintenance staff to learn and test the plans regularly.
Analyze and locate faults based on the scenario and symptom before performing emergency
response and recovery to avoid escalation of issues.
Assess the impact of a fault before determining the troubleshooting plan.
Record all issues encountered during troubleshooting in detail.
Record the symptom, causes, and fault recovery process in detail for future reference.
During emergency response and recovery, promptly contact the company's customer service
center or local office to obtain technical support.
Emergency response and recovery
workflow
The main purpose of emergency response and recovery is to locate and isolate faults and recover
user services as soon as possible. Figure 1 shows the emergency response and recovery workflow.
Figure 1 Emergency response and recovery workflow
Report the fault
In case of an emergency, notify Technical Support as quickly as possible.
Collect fault information
When a fault occurs, collect the following information as soon as possible for subsequent fault
locating and recovery:
Time when the fault occurs.
Scenario in which the fault occurs: network overlay or hybrid overlay, and whether device
incorporation, PBR service chains, multiple egresses, or multiple fabrics are involved.
Impact of the fault: number of affected tenants, number of affected subnets, and whether
east-west or north-south traffic is affected.
Preliminarily locate the fault by using a tool
After collecting the fault information, try to preliminarily locate the fault by using the operations and
maintenance tool provided by the SDN controller. For more information, see "Preliminarily locate the
fault by using a tool."
Continue with troubleshooting based on the
scenario
After preliminarily locating the fault, further analyze the fault based on locating and scenario
information, and try to recover services.
Seek help
Contact Technical Support for help.
View the troubleshooting result
After services are recovered, view alarms, topology, and device status information to ensure that
services are normal. In addition, monitor the system during peak hours to ensure that issues (if any)
can be immediately resolved.
Record emergency maintenance information
Record information about this maintenance, including maintenance time, system version, fault
symptom, troubleshooting method and result, and remaining issues. Table 1 gives an example of
information recording.
Table 1 Emergency maintenance record
Basic fault information:
    Site name
    Recorded At
    Fault Occurred At
    Involved product and version
    Current status of the fault
    Currently handled by
Description
Troubleshooting method and result
Remaining issues
Preliminarily locate the fault by using a
tool
Based on the collected information, use the operations and maintenance tools provided by the
controller to quickly locate the fault and narrow the troubleshooting scope. After locating the fault
preliminarily, resolve the fault based on the scenario and fault locating information.
Underlay network check
If multiple services suddenly fail, first determine whether a fault occurs on the underlay network.
The underlay network check feature provided by the controller enables you to quickly find out
connectivity issues on the underlay network.
On a per-fabric basis, the underlay network check feature verifies the underlay network
configuration and connectivity of switches to quickly locate any possible fault on the underlay
network. The following items will be verified:
Link connectivity between devices.
Whether a black hole route exists on the network.
Whether a route loop exists on the network.
Network configuration.
If any fault is found, perform further troubleshooting based on the message displayed.
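If the controller's underlay check is unavailable or you want to confirm its result, you can also spot-check underlay reachability from the switch CLI. The following is a minimal sketch assuming Comware-based switches (consistent with the CLI output elsewhere in this guide); the device name and the VTEP loopback addresses 10.10.10.1 and 10.10.10.2 are illustrative only:
<leaf-1.10>display bgp peer ipv4 unicast                 //Verify that underlay BGP peers are in Established state (use display ospf peer if the underlay runs OSPF)
<leaf-1.10>display ip routing-table 10.10.10.2 verbose   //Verify that a route to the remote VTEP loopback exists and is not a black hole route
<leaf-1.10>ping -a 10.10.10.1 10.10.10.2                 //Ping the remote VTEP loopback, sourcing the local VTEP loopback
<leaf-1.10>display vxlan tunnel                          //Verify that the VXLAN tunnels to remote VTEPs are up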
Loop detection
When some services in a fabric fail, first determine whether a loop exists. A loop generally affects
several leaf devices and causes failure of some services.
Loop detection can find out Layer 2 loops on both the underlay and overlay networks. You can
enable loop detection on the Analytics > DC Network Check > Loop Detection page.
If a Layer 2 loop occurs across multiple physical interfaces of a leaf device or across multiple leaf
devices on the underlay network, loop detection can automatically detect the abnormal devices.
If a Layer 2 loop occurs in a vNetwork on the overlay network, loop detection can automatically
detect the abnormal devices.
Radar detection
Single-path detection
The single-path detection feature provided by the controller simulates a VM to send
TCP/UDP/ICMP packet-out packets to determine whether all devices between the source and
the destination receive the detection packets.
If the impact scope of the fault cannot be determined, use single-path detection to check
whether VMs in other tenants and subnets are normal to determine whether the fault is in a
specific tenant or subnet or affects multiple tenants and subnets.
If the east-west traffic between two VMs is interrupted or the north-south traffic of a VM is
interrupted, use single-path detection to find the specific device where the traffic is interrupted.
When the traffic between two VMs is interrupted and the two VMs access different leaf devices,
perform single-path detection using the two VMs as the source in turn to check whether the
two VMs can reach VMs that access a third leaf device. This helps determine whether the fault
exists on a specific leaf device.
Multi-path detection
The underlay network of the data center is designed as an IP ECMP network. Multiple
interworking paths exist between virtual tunnel end points (VTEPs). Different service flows can
be distributed to different paths. If some services between two VMs are abnormal, congestion
might have occurred on some paths between VTEPs. In this case, use multi-path detection to
determine whether packet loss exists on a path.
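If radar detection is unavailable, a rough manual substitute is to test reachability from the gateway leaf within the tenant VPN instance. This is only a sketch and does not replicate the controller's packet-out based detection; it assumes Comware-based leaf switches and reuses the illustrative tenant names from this guide (vpn1, gateway 11.1.1.254, VM 11.1.1.10):
<leaf-1.10>ping -vpn-instance vpn1 -a 11.1.1.254 11.1.1.10   //Ping the VM in the tenant VPN, sourcing the distributed gateway address
<leaf-1.10>tracert -vpn-instance vpn1 11.1.1.10              //Trace the Layer 3 path toward the VM in the tenant VPN
Depending on the deployment, a VM or an intermediate firewall might not respond to ICMP, so a failed ping alone does not prove a network fault.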
AC interface traffic statistics
In the network overlay scenario, a VM is connected to the Server Leaf through the AC interface.
When the traffic of a VM becomes abnormal, access the Assurance > Networks > Statistics >
VNI Flow Statistics page on the controller and enable AC interface traffic statistics. Based on the
VNI, IP address of Server Leaf, and physical port for access, find the statistics information for the
corresponding AC interface and view the transmitted and received packets on the AC interface.
When the outbound traffic statistics for the AC interface that connects the leaf to VM are abnormal,
you can preliminarily determine that traffic forwarding on the port or device is abnormal. If the
inbound traffic statistics are abnormal, the VM or host might have failed. In this case, contact IT staff
to further confirm the issue.
If the traffic between two VMs is abnormal, check whether the inbound traffic statistics for the AC
interface of the source VM are consistent with the outbound traffic statistics for the AC interface of
the destination VM to narrow the troubleshooting scope.
Device capacity management
When some services fail, the hardware resources of some devices in the fabric might have been
exhausted. In this case, select Assurance > Network Monitoring > Resources Capacity on the
controller and check whether key resources (for example, AC and ACL (OpenFlow)) of some
devices are exhausted. You can also select Assurance > Network Monitoring > Physical
Devices and check the resource usage of one device.
If the key resources of a device are exhausted, recover services through VM or server migration.
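You can also check key hardware resources directly on a suspect switch. A minimal sketch, assuming Comware-based switches; exact command availability and output vary by device model:
<leaf-1.10>display qos-acl resource      //Display ACL hardware resource usage per slot
<leaf-1.10>display memory                //Check memory usage on the device
<leaf-1.10>display cpu-usage             //Check CPU usage on the device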
Controller configuration auditing
In some scenarios, a device might be inconsistent with the controller in configuration. The
configuration auditing feature can quickly find such issues.
Select Automation > Data Center Networks > Resource Pools > Devices > Physical Devices
and view the audit result displayed in the Data Synchronization State column. To view the most
recent audit result of a device, you can click the icon in the Data Synchronization State column
and then click Audit to audit the device manually.
The audit result indicates one of the following states:
If the controller has more or different configuration than the device, view the details to
determine whether the configuration affects services. If it does, back up the current
configuration and perform incremental synchronization, or manually add the configuration that
affects services to recover services.
If the device has more configuration than the controller, view the details to determine whether
the configuration affects services.
If the configuration on the device is consistent with that on the controller, perform other checks
to determine whether configuration failed to be issued, whether no configuration data exists on
the controller, or whether the forwarding layer affects services.
Network overlay scenario
Network topology
Network overlay adopts the spine-leaf structure. A spine acts as an IBGP RR. A leaf is an M-LAG
system. A border is an M-LAG system that interconnects the internal and external networks. A leaf
accesses the computing server through a dynamic aggregate interface.
The control center for network overlay is the controller, which is deployed on IMC PLAT as a
component and communicates with a switch through the management network.
In a network overlay, leaf and border devices are VTEP nodes, and VXLAN tunnels are established
through BGP.
Figure 2 Network overlay
Traffic model
The network overlay traffic includes east-west inter-leaf communication, east-west intra-leaf
communication, and north-south communication.
Figure 3 Network overlay traffic model
Troubleshooting and recovery procedures for
common issues
Large-scale production service failure
In this scenario, multiple tenants and services distributed across multiple leaf devices communicate
with the gateway. All or most of the north-south Layer 3 traffic or east-west Layer 2 and Layer 3
traffic is affected and cannot be recovered within a short period. After collecting diagnosis
information from the controller, perform the following tasks:
1. Check the audit results of core spine or border switches on the controller and determine
whether any configuration is lost.
If any configuration is lost, back up the current configuration, and then perform incremental
synchronization or manually add the configuration that affects services.
2. Check whether the EVPN neighbors and routing entries of core spine or border switches are
abnormal.
If many neighbors are abnormal and no routes learned from the leaf devices exist, reset the BGP neighbors (see the CLI sketch after this procedure).
3. Check whether obvious software and hardware fault alarms are generated on core spine or
border switches and whether underlay VTEP communication is normal. If core spine switches
are obviously abnormal, perform the following tasks:
Perform an active/standby line switchover for spine or border switches, restart boards,
and perform an active/standby chassis switchover.
Restart core devices in the system.
4. Host all physical NEs in black hole route or page maintenance mode (applicable when ARP is
available on the entire network or the number of VMs is smaller than 5000).
5. If the issue persists, contact Technical Support for help.
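A minimal CLI sketch for step 2, assuming Comware-based spine or border switches; the device name is illustrative. Resetting BGP neighbors is disruptive, so confirm the symptom first:
<spine-1>display bgp l2vpn evpn peer        //Verify that the EVPN peers are in Established state
<spine-1>display bgp l2vpn evpn             //Verify that EVPN routes advertised by the leaf devices are present
<spine-1>reset bgp all l2vpn evpn           //Reset the EVPN BGP sessions (disruptive; exact syntax can vary by software version)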
Large-scale production service failure on one leaf
If services are affected on an access leaf device and cannot be recovered within a short period,
perform the following tasks:
1. Check the audit results of leaf switches on the controller and determine whether any
configuration is lost.
If any configuration is lost, back up the current configuration and perform incremental
synchronization or manually add the configuration that affects services.
2. If other leaf devices exist and the service migration conditions are met, migrate the services on
the leaf device to another leaf device to recover services.
3. Check whether the EVPN neighbors, routing entries, and log buffer of the leaf switches show anomalies (see the CLI sketch after this procedure).
If the neighbors are abnormal and no routes exist on the leaf switches, reset the BGP neighbors.
4. Check whether obvious software and hardware fault alarms are generated on leaf devices. If
such fault alarms are generated, perform the following tasks:
Perform an active/standby leaf line switchover, restart the chassis, and perform a chassis
switchover.
Restart the leaf.
5. If the issue persists, contact Technical Support for help.
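A minimal CLI sketch for steps 3 and 4, assuming Comware-based leaf switches; the device name is illustrative:
<leaf-1.10>display logbuffer reverse        //View the most recent log messages for link, protocol, or hardware faults
<leaf-1.10>display device                   //Check the status of the chassis members and cards
<leaf-1.10>display interface brief          //Check for down or error-reporting interfaces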
Failure of some production services
If the production services of some tenants fail, perform the following tasks:
1. Check the audit results of core spine or border switches on the controller and the firewall to
determine whether any configuration is lost.
If any configuration is lost, back up the current configuration and perform incremental
synchronization or manually add the configuration that affects services.
Check whether any configuration has changed recently and whether the changed
configuration affects services. If services are affected, perform a rollback and delete the
related configuration.
2. Check the interconnect links, and perform an interconnect switchover and an active/standby
chassis switchover to recover services.
3. If the issue persists, contact Technical Support for help.
Failure of east-west Layer 2 production services across leaf
devices
If the gateway runs correctly but east-west Layer 2 services across leaf devices become abnormal,
perform the following tasks:
1. Check the audit results of leaf switches on the controller and determine whether any
configuration is lost.
If any configuration is lost, back up the current configuration and perform incremental
synchronization or manually add the configuration that affects services.
2. Check the VLAN-VXLAN mappings of the switch interfaces, VSI configuration, L2VPN MAC
address learning, host routing in proxy mode, and pickup table in pickup mode.
To view the VLAN-VXLAN mappings, execute the following command:
<leaf-1.10>display current-configuration interface Ten-GigabitEthernet 1/0/18
#
interface Ten-GigabitEthernet1/0/18
port link-mode bridge
port link-type trunk
undo port trunk permit vlan 1
port trunk permit vlan 7 11 to 12 22 //VLAN bypass
vtep access port
#
service-instance 11
encapsulation s-vid 11
xconnect vsi SDN_VSI_11 //VLAN mode, VLAN-VXLAN mapping
return
To view a VSI instance, execute the following command:
<leaf-1.10>display current-configuration configuration vsi
#
vsi SDN_VSI_11
gateway vsi-interface 1 //Bound to a VSI
statistics enable
arp suppression enable
vxlan 11 //Segment ID of the vNetwork on the controller corresponding to the VXLAN ID
evpn encapsulation vxlan
route-distinguisher auto
vpn-target auto export-extcommunity
vpn-target auto import-extcommunity
To view L2VPN MAC address learning information, execute the following command:
<leaf-1.10>display l2vpn mac-address
MAC Address State VSI Name Link ID/Name Aging
0cda-411d-7f95 Dynamic SDN_VSI_11 XGE1/0/18 Aging
--- 1 mac address(es) found ---
<leaf-1.10>
If one entry is incorrect, perform the following tasks:
Execute the shutdown and undo shutdown commands to switch the downstream port on the
leaf device, or switch the upstream port on the server (see the port flap sketch after this procedure).
Migrate the abnormal host.
If multiple entries are incorrect, perform the following tasks:
Execute the shutdown and undo shutdown commands to switch the downstream ports on
the leaf devices in bulk.
Migrate the abnormal hosts in turn.
Reset the BGP neighbors of the leaf switches.
3. Check whether obvious software and hardware fault alarms are generated on leaf devices. If
such fault alarms are generated, perform the following tasks:
Perform an active/standby leaf line switchover, restart the chassis, and perform a chassis
switchover.
Restart the leaf.
4. If the issue persists, contact Technical Support for help.
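The port flap mentioned in step 2 can be performed as follows. This is only a sketch; the interface name is illustrative, and flapping a port interrupts all traffic on that port:
<leaf-1.10>system-view
[leaf-1.10]interface Ten-GigabitEthernet 1/0/18
[leaf-1.10-Ten-GigabitEthernet1/0/18]shutdown        //Shut down the downstream port that connects to the abnormal host
[leaf-1.10-Ten-GigabitEthernet1/0/18]undo shutdown   //Bring the port back up to trigger MAC and ARP relearning
[leaf-1.10-Ten-GigabitEthernet1/0/18]quit
Repeat the operation for each affected downstream port, or perform the equivalent operation on the server uplink.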
Failure of the gateway, north-south, and cross-VPN Layer 3
production services
1. Check the audit results of all switches on the controller and determine whether any
configuration is lost.
If any configuration is lost, back up the current configuration and perform incremental
synchronization or manually add the configuration that affects services.
2. Check whether the public VPN instance, VSI instances, IP addresses of VSI interfaces, VPN
instances of interfaces, L3VNI configuration, and related routes in the tenant VPN exist, as shown in the following examples. An additional gateway ARP spot-check is sketched after this procedure.
To view the public VPN instance, execute the following command:
<leaf-1.10>display current-configuration configuration vpn-instance
#
ip vpn-instance vpn1
route-distinguisher 1:1112
description SDN_VRF_f03dafb3-31eb-4316-a179-17ec1d552b9e
#
address-family ipv4
export route-policy SDN_POLICY_IPV4_vpn1
vpn-target 0:1112 1:1112 0:2222 import-extcommunity
vpn-target 1:1112 export-extcommunity
#
address-family evpn
export route-policy SDN_POLICY_EVPN_vpn1
vpn-target 0:1112 1:1112 0:2222 import-extcommunity
vpn-target 1:1112 export-extcommunity
#
return
To view a VSI instance, execute the following command:
<leaf-1.10>display current-configuration configuration vsi
#
vsi SDN_VSI_11
gateway vsi-interface 1 //Bound to a VSI
statistics enable
arp suppression enable
vxlan 11 //Segment ID of the virtual link layer network on the controller corresponding to the VXLAN ID
evpn encapsulation vxlan
route-distinguisher auto
vpn-target auto export-extcommunity
vpn-target auto import-extcommunity
To view the IP address of a VSI interface, VPN instances of an interface, and L3VNI
configuration, execute the following command:
<leaf-1.10>display current-configuration interface Vsi-interface
#
interface Vsi-interface0
description SDN_VRF_VSI_Interface_1112
ip binding vpn-instance vpn1
l3-vni 1112 //L3VNI of vpn1 virtual router
#
interface Vsi-interface1
description SDN_VSI_Interface_11
ip binding vpn-instance vpn1 //VPN instance of the interface
ip address 11.1.1.254 255.255.255.0 sub //IP address of the VSI interface
mac-address 6805-ca21-d6e5
distributed-gateway local
To view related routes in the tenant VPN, execute the following command:
<leaf-1.10>display ip routing-table vpn-instance vpn1
Destinations : 20 Routes : 21
Destination/Mask Proto Pre Cost NextHop Interface
0.0.0.0/0 BGP 255 0 102.1.1.1 Vsi3
102.1.1.1 Vsi0
11.1.1.0/24 Direct 0 0 11.1.1.254 Vsi1
11.1.1.0/32 Direct 0 0 11.1.1.254 Vsi1
11.1.1.10/32 BGP 255 0 102.1.1.1 Vsi0
11.1.1.255/32 Direct 0 0 11.1.1.254 Vsi1
12.1.1.0/24 Direct 0 0 12.1.1.254 Vsi2
12.1.1.0/32 Direct 0 0 12.1.1.254 Vsi2
12.1.1.255/32 Direct 0 0 12.1.1.254 Vsi2
22.1.1.0/24 BGP 130 0 22.1.1.254 Vsi4
22.1.1.254/32 BGP 130 0 127.0.0.1 InLoop0
If the neighbors are abnormal and no route exists on border/spine/leaf devices, reset BGP
neighbors.
3. Check whether obvious software and hardware fault alarms are generated on
border/spine/leaf devices. If such fault alarms are generated, perform the following tasks:
Perform an active/standby border/spine/leaf line switchover, restart a chassis, and
perform chassis switchover.
Restart the border/spine/leaf devices.
4. If the issue persists, contact Technical Support for help.
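As an additional spot-check for step 2, verify that the leaf gateway has learned the VM entries in the tenant VPN. A minimal sketch, assuming Comware-based leaf switches and reusing the illustrative names from the output above (vpn1, VM 11.1.1.10):
<leaf-1.10>display arp vpn-instance vpn1                         //Verify that ARP entries for the tenant VMs exist
<leaf-1.10>display ip routing-table vpn-instance vpn1 11.1.1.10  //Verify that a host route to the VM exists in the tenant VPN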
Network overlay + device incorporation
scenario
Network topology
The network overlay + device incorporation scenario adopts the spine-leaf structure. A spine acts as
an IBGP RR. A leaf is an M-LAG system. A leaf accesses the computing server through a dynamic
aggregate interface. A border is an M-LAG system that accesses the FW and LB and interconnects
the internal and external networks.
The control center for network overlay is the controller, which is deployed on IMC PLAT as a
component and communicates with a switch through the management network.
In a network overlay, leaf and border devices are VTEP nodes, and VXLAN tunnels are established
through BGP.
Figure 4 Network overlay + device incorporation network topology
Traffic model
The network overlay + device incorporation scenario involves three types of traffic: east-west traffic,
north-south FW+LB traffic, and north-south FW traffic.
Figure 5 Network overlay + device incorporation traffic model
Troubleshooting and recovery procedures for
common issues
Large-scale failure of production services
1. Check the audit results of core spine or border switches on the controller and the firewall to
determine whether any configuration is lost.
If any configuration is lost, back up the current configuration and perform incremental
synchronization or manually add the configuration that affects services.
Check whether any configuration has changed recently and whether services are affected.
If services are affected, perform a rollback and delete the related configuration.
2. Check whether the external network egress and the interconnection devices have failed. If they
have not, the security management firewall might have failed. Check whether obvious software
and hardware fault alarms are generated for the firewall, check the sessions (see the session
check sketch after this procedure), and perform the following tasks:
If the firewall is normal, and a session from the internal network to the external network
exists but no session exists in the reverse direction, check the route settings of the device
that connects the external network to the context.
If no session from the internal network to the external network exists, check the physical
network ports and links of the tenant carrier network and replace the ports and links when
necessary to recover the services.
If few sessions exist, perform an active/standby chassis switchover to recover the services.
3. Check the audit results of core spine or border switches on the controller to determine whether
any configuration is lost.
If any configuration is lost, back up the current configuration and perform incremental
synchronization or manually add the configuration that affects services.
If no configuration is lost, go to the next step.
4. Check whether the EVPN neighbors and routing entries on the core spine or border switches
are normal.
If a large number of neighbors are abnormal or no routing entries learned from the leaf
devices exist, reset BGP neighbors.
If the EVPN neighbors and routing entries are normal, go to the next step.
5. Check whether an obvious software or hardware failure has occurred on the core spine or
border switches and whether the underlay VTEP communication is normal.
If an obvious failure has occurred on a core spine or border switch, perform the following tasks
for an emergency recovery:
a. Perform a primary/secondary link switchover, reboot the cards, or perform an
active/standby MPU switchover for the spine or border device.
b. Reboot the core device.
6. If the issue persists, contact Technical Support for help.
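A minimal sketch of the session check in step 2, assuming a Comware-based firewall context; the context name and VM address are illustrative, and command options vary by firewall model:
<fw-context1>display session table ipv4 source-ip 11.1.1.10       //Check for sessions from the internal VM toward the external network
<fw-context1>display session table ipv4 destination-ip 11.1.1.10  //Check for sessions in the reverse direction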
Failure of some production services
1. Check the audit results of core spine or border switches on the controller and the firewall and
determine whether any configuration is lost.
If any configuration is lost, back up the current configuration and perform incremental
synchronization or manually add the configuration that affects services.
Check whether any configuration has changed recently and whether services are affected.
If services are affected, perform a rollback and delete the related configuration.
2. Check the firewall rules of the tenant. To recover services, configure a policy that allows all
traffic to pass (see the example policy after this procedure).
3. Check the interconnect links, and perform an interconnect switchover and an active/standby
chassis switchover to recover services.
4. If the issue persists, contact Technical Support for help.
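A minimal sketch of the temporary permit-all policy mentioned in step 2, assuming a Comware-based firewall context; the context name and rule name are illustrative:
<fw-context1>system-view
[fw-context1]security-policy ip
[fw-context1-security-policy-ip]rule name emergency-permit-all
[fw-context1-security-policy-ip-1-emergency-permit-all]action pass   //With no match criteria, the rule matches and permits all IPv4 traffic
[fw-context1-security-policy-ip-1-emergency-permit-all]quit
[fw-context1-security-policy-ip]quit
Because rules are matched in order, make sure no earlier rule denies the affected traffic, and remove this emergency rule after services recover and the root cause is fixed.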
Context is deleted by mistake or lost
1. Select Automation > Data Center Networks > Resource Pools > Devices > Physical
Devices and audit the root firewall configuration of the security devices. Click the icon in the
Data Synchronization State column and then click Sync Data to recreate the context that was
deleted by mistake.
2. Select Automation > Data Center Networks > Resource Pools > Devices > Virtual
Devices and audit the recreated context. Click the icon in the Data Synchronization State
column and then click Sync Data to recover services.
3. If CloudOS or a cloud environment is available, unbind the vRouter from the firewall and then
bind it to the firewall again.
4. If the issue persists, contact Technical Support for help.
Network overlay + PBR service chain
scenario
Network topology
The network overlay + PBR service chain scenario adopts the spine-leaf structure. A spine acts as
an IBGP RR. A leaf is an M-LAG system. A leaf accesses the computing server through a dynamic
aggregate interface. A border is an M-LAG system that accesses an FW or LB and interconnects the
internal and external networks. A service leaf device is an IRF fabric that accesses service nodes
such as FWs and LBs.
The control center for network overlay is the controller, which is deployed on IMC PLAT as a
component and communicates with a switch through the management network.
In a network overlay, leaf and border devices are VTEP nodes, and VXLAN tunnels are established
through BGP.
Figure 6 Network overlay + PBR service chain network topology
Traffic model
The network overlay + PBR service chain scenario involves six types of traffic: east-west FW traffic,
east-west LB traffic, east-west FW+LB traffic (multi-hop), north-south FW traffic, north-south LB
traffic, and north-south FW+LB traffic.
Figure 7 Network overlay + PBR service chain traffic model 1
Figure 8 Network overlay + PBR service chain traffic model 2