Aruba IMC Orchestrator 6.2 Solution
Multi-Fabric Technology White Paper
The information in this document is subject to change without notice.
© Copyright 2022 Hewlett Packard Enterprise Development LP
Contents

Overview
  Technical background
  Concepts
  Application scenarios
  Benefits
  Architecture
Key features
  Cross-fabric deployment and connection of tenant networks/subnets
  Egress modes
    Single egress
    Primary/secondary egress
    Optimal egress
    Cross-fabric egress
    Multi-service overlaying cross-fabric primary/secondary egress
  2+1 deployment of multi-fabric controller cluster
    Cluster deployment
    Disaster recovery of cluster
  Multi-fabric connection technology EVI2.0
  Cross-fabric service chain
Typical topology of the multi-fabric solution
  Typical multi-fabric topology
Overview
Technical background
To meet customers' requirements for data center (DC) capacity expansion and disaster recovery
and backup of services in different areas, IMC Orchestrator has extended its management objects
from a traditional single fabric to multiple fabrics in different areas. This breaks the traditional
constraints of physical distance and expands the service scope of the DC. It also enables
customers to connect and share network resources distributed across different physical fabrics,
improving resource efficiency and flexibility. The multi-fabric solution applies to service scenarios
such as cross-fabric cluster deployment, cross-fabric VM deployment, and network-level
primary/backup disaster recovery.
Concepts
ED
An ED refers to an edge device that connects fabrics.
Fabric
A fabric refers to a managed network that contains a complete suite of network devices, such as
spine devices, leaf devices, and external gateways. All devices in a fabric are reachable to one
another.
Fabric connection
A fabric connection refers to a connection set up between fabrics. IMC Orchestrator deploys
BGP peer configuration to the EDs to connect the fabrics.
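As a minimal illustration of how a fabric connection can be modeled, the sketch below pairs the EDs of two fabrics and derives the EBGP peer parameters that a controller would need to deploy on each ED. The class names, fields, addresses, and AS numbers are illustrative assumptions, not the IMC Orchestrator data model or API.

```python
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    """Edge device (ED) that connects a fabric to other fabrics (illustrative model)."""
    name: str
    loopback: str   # loopback address used for BGP peering
    asn: int        # autonomous system number of the fabric

@dataclass
class FabricConnection:
    """A connection between two fabrics, realized as EBGP peering between their EDs."""
    local_ed: EdgeDevice
    remote_ed: EdgeDevice

    def bgp_peer_intent(self):
        """Return the peer parameters to deploy on each ED (illustrative only)."""
        return {
            self.local_ed.name: {"peer": self.remote_ed.loopback,
                                 "remote_as": self.remote_ed.asn},
            self.remote_ed.name: {"peer": self.local_ed.loopback,
                                  "remote_as": self.local_ed.asn},
        }

# Example: connect fabric 1 and fabric 2 through their default EDs.
ed1 = EdgeDevice("ED1-fabric1", "10.0.0.1", 65001)
ed2 = EdgeDevice("ED2-fabric2", "10.0.0.2", 65002)
print(FabricConnection(ed1, ed2).bgp_peer_intent())
```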
Figure 1 Multi-fabric topology
Figure 2 shows the virtual network elements (NEs) abstracted by the controller:
Tenant: Tenant.
vRouter: A logical Layer 3 gateway/network, distributed on virtual devices.
Network: A virtual Layer 2 isolated network, which can be considered a virtual or logical switch.
Subnet: An IPv4 or IPv6 address block, corresponding to a Layer 3 subnet.
vPort: A virtual or logical switch port.
Figure 2 vRouter model
Border gateway
A border gateway provides access to external networks for NEs in a DC. A border gateway
consists of gateway members, each of which corresponds to a device group. The same device
group can be a member of different border gateways.
Application scenarios
To optimize the network scale, expand services, and support service disaster recovery and backup,
customers generally divide a physical network into multiple sub-areas to build a multi-fabric network,
and then manage these fabrics through a single controller cluster. Common multi-fabric
scenarios include:
Single DC with multiple sub-areas
In a single DC, customers need to expand capacity to adapt to service planning or
development, and therefore build the physical network based on a multi-fabric topology.
For example, a customer has a financial department and a technical department. For security
reasons, the financial system must be deployed in a separate sub-area, both physically and
logically. This calls for a multi-fabric network that contains a single DC with multiple sub-areas.
Multiple DCs in the same city
To meet requirements for active-active data centers and data center backup, build a cross-DC
physical network based on a multi-fabric topology.
For example, a customer has high reliability requirements for its core service system.
To meet these requirements, deploy active-active DCs in the same city and deploy the core
services in both DCs to ensure service continuity if either DC fails.
Benefits
Easy expansion
A customer that already has a DC can build another DC for service expansion without changing
the network topology of the original DC. The two DCs have their own Spine devices, Leaf devices,
and gateways, and can be connected with the multi-fabric solution.
Unified management
Multiple fabrics are managed through a single controller cluster, which reduces purchase costs.
Resources, devices, and tenants are managed uniformly.
Resource sharing
Resources of multiple fabrics are connected and used based on needs. Network capacity can
be expanded at the fabric level or device level.
Fault domain isolation
In a network where two racks or two DCs are located far from each other, not all Leaf devices
and Spine devices can be connected in full-mesh mode. With the multi-fabric solution, a fault
domain can be isolated within its fabric to facilitate fault location and troubleshooting.
Primary/secondary egress
Primary/secondary egress can be implemented at the vRouter level. When the primary egress
fails, services will automatically switch to the secondary egress to ensure reliability of outgoing
traffic.
Architecture
Figure 3 Architecture of the multi-fabric solution
The multi-fabric solution uses EVPN technology to build an Overlay network across DCs. Through
coordination between the controller and the network devices, users can automatically deploy the
basic network with a few clicks, view the automation procedure, and maintain and monitor both the
physical and virtual networks. The Overlay configurations of the fabrics are deployed by the
IMC Orchestrator controller cluster. The fabrics are connected through VXLAN tunnels to
implement Layer 2 and Layer 3 connectivity. VPCs and subnets can be deployed across fabrics,
and network resource pools can be managed uniformly. Multiple fabrics are connected through
EVI2.0, which enables automated Overlay deployment across fabrics.
The controller provides multiple egresses through the primary and secondary border gateways, so
that services can be switched to the secondary egress when the primary egress fails. Alternatively,
outgoing traffic of multiple fabrics can be transmitted through the egress in a single fabric.
Customers can select an egress mode as needed.
The IMC Orchestrator cluster can be deployed within a single fabric or across fabrics, depending
on customer requirements.
The IMC Orchestrator Neutron plugin connects fabric network devices to cloud platforms,
including native OpenStack and OpenStack-based third-party cloud platforms.
Key features
Cross-fabric deployment and connection of
tenant networks/subnets
Figure 4 Cross-fabric deployment and connection of tenant networks/subnets
Cross-fabric Layer 2 connection in a subnet
If VMs in different fabrics belong to the same subnet, the controller deploys Layer 2
connection-related parameters through the default EDs in the respective fabrics after the VMs
come online through vPorts. For example, vPort 1 and vPort 2 belong to the same subnet. After
vPort 1 comes online in fabric 1 and vPort 2 comes online in fabric 2, the controller deploys
Layer 2 connection-related parameters through the default EDs in the respective fabrics.
Cross-fabric Layer 3 connection of different subnets in a vRouter
If vPort 1 comes online in fabric 1 and vPort 3 comes online in fabric 2, the controller identifies
whether vPort 1 and vPort 3 belong to the same vRouter. If they do, the controller deploys
Layer 3 connection-related parameters through the default EDs of the respective fabrics.
Cross-fabric Layer 3 connection of different vRouters
If vPort 1 on vRouter 1 comes online in fabric 1 and vPort 4 on vRouter 2 comes online in
fabric 2, the controller first onboards the vPorts. Then, if you configure a virtual router link
between vRouter 1 and vRouter 2 on the controller, the controller deploys Layer 3
connection-related parameters and virtual router link-related parameters through the default
EDs of the respective fabrics.
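The three cases above can be summarized by the controller-side decision sketched below. The vPort and vRouter representation is a simplified assumption for illustration and does not reflect the controller's actual implementation.

```python
def connection_type(vport_a, vport_b, router_links):
    """Classify the connectivity to deploy between two vPorts (illustrative model).

    vport_a/vport_b: dicts with 'subnet' and 'vrouter' keys.
    router_links: set of frozensets of vRouter names linked on the controller.
    """
    if vport_a["subnet"] == vport_b["subnet"]:
        return "layer2"                    # same subnet: stretch Layer 2 across fabrics
    if vport_a["vrouter"] == vport_b["vrouter"]:
        return "layer3-same-vrouter"       # different subnets on one vRouter
    if frozenset({vport_a["vrouter"], vport_b["vrouter"]}) in router_links:
        return "layer3-router-link"        # different vRouters joined by a router link
    return "isolated"                      # no connectivity is deployed

# vPort 1 (fabric 1) and vPort 4 (fabric 2) sit on different vRouters that are linked.
links = {frozenset({"vRouter1", "vRouter2"})}
print(connection_type({"subnet": "s1", "vrouter": "vRouter1"},
                      {"subnet": "s4", "vrouter": "vRouter2"}, links))
```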
Egress modes
Single egress
Figure 5 Multi-fabric topology with single egress
vRouter 1 of fabric 1 can be configured with the border egress in fabric 1. vRouter 2 of fabric 2 can
be configured with the border egress in fabric 1.
Different fabrics are connected through EDs. In each fabric, an independent Spine-Leaf switch
network is deployed. IMC Orchestrator can be deployed in any fabric to connect to and manage
the network devices in the fabrics through the management network. The border gateways and
firewalls of the two fabrics are uniformly deployed in fabric 1 on the left.
On the Underlay network, routing protocols such as OSPF are deployed in each fabric so that the
loopback addresses of network devices in the fabric are reachable. Between fabrics, the loopback
address routes of network devices are exchanged over the DCI connections. On the VXLAN
Overlay network with distributed VXLAN gateways, the east-west gateways are deployed on
the Leaf switches of the local fabric, and the north-south border gateways and firewalls are
uniformly deployed in one fabric (for example, fabric 1 in Figure 5).
Different fabrics are connected on Layer 2 through end-to-end VXLAN tunnels. EVPN is deployed in
each fabric and between fabrics to enable the end-to-end VXLAN tunnels to carry Layer 2
connection services. The technical architecture used in each fabric is the same as that used
between fabrics. Therefore, EDs can synchronize routing information between the fabrics through
routing protocols. The EDs synchronize Overlay routes between fabrics. An ED must establish
BGP neighbor relationships with the RR in the local fabric and with its remote ED. Typically, the
neighbor relationship between an ED and the local RR is based on IBGP, and that between an ED
and its remote ED is based on EBGP.
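The peering plan for an ED follows directly from this rule (IBGP toward the local RR, EBGP toward the remote ED), as the hedged sketch below shows. The function name, addresses, and AS numbers are assumptions for illustration only.

```python
def ed_bgp_sessions(local_asn, rr_address, remote_ed_address, remote_asn):
    """Build the BGP sessions an ED needs for multi-fabric EVPN (illustrative).

    IBGP is used toward the route reflector (RR) in the local fabric,
    and EBGP toward the ED of the remote fabric.
    """
    return [
        {"peer": rr_address, "remote_as": local_asn, "type": "IBGP",
         "address_family": "l2vpn evpn"},
        {"peer": remote_ed_address, "remote_as": remote_asn, "type": "EBGP",
         "address_family": "l2vpn evpn"},
    ]

# ED in fabric 1 (AS 65001) peers with its local RR and with the ED of fabric 2 (AS 65002).
for session in ed_bgp_sessions(65001, "10.1.255.1", "10.2.255.2", 65002):
    print(session)
```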
When distributed VXLAN gateways are deployed, if the source and destination IP addresses of the
Layer 3 east-west access traffic belong to the same fabric, and the traffic does not need to be
filtered by a firewall, the distributed gateways in the fabric forward the traffic on Layer 3 without
passing through the other fabric. For the north-south access traffic, if the traffic needs to be filtered
by a firewall, the traffic must be forwarded to the single border gateway/service Leaf device and
firewall. In Figure 5, north-south access traffic in fabric 2 must be forwarded to fabric 1 at the left
side.
Primary/secondary egress
Figure 6 Multi-fabric topology with primary/secondary egress
The major difference between the primary/secondary egress mode and the single egress mode is
that border gateways and firewalls are deployed separately in the two fabrics. The two egresses
work in primary/backup mode to provide high availability for services.
When configuring border gateways on IMC Orchestrator, add two members to each border gateway,
and deploy routes with different priorities to decide which egress to use. The egress with the higher
priority is the primary egress, and the egress with the lower priority is the secondary egress.
Specifically, deploy an AS_PATH-related routing policy on the backup border device in fabric 2 to
lengthen the AS_PATH and thus reduce the route priority, and apply the routing policy in the VPN.
As a result, the route preference of the backup border device is lower than that of the primary
border device in fabric 1. Because the route learned from the border device in fabric 1 is preferred
over the route received from the peer ED, the route learned from the border device in fabric 1
takes effect.
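The effect of the AS_PATH-based routing policy can be seen in the small sketch below: because the backup border in fabric 2 prepends extra AS numbers, its advertisement loses the AS_PATH length comparison against the primary border. The AS numbers and route representation are illustrative assumptions.

```python
def prefer_by_as_path(routes):
    """Pick the route with the shortest AS_PATH (one step of BGP best-path selection)."""
    return min(routes, key=lambda route: len(route["as_path"]))

primary = {"egress": "fabric1-border", "as_path": [65001]}
# The backup border in fabric 2 prepends its AS number to lower its preference.
backup = {"egress": "fabric2-border", "as_path": [65002, 65002, 65002]}

print(prefer_by_as_path([primary, backup])["egress"])   # fabric1-border
```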
The egress in fabric 1 is used as the primary egress of vRouter 1. When the primary egress fails,
traffic automatically switches to the secondary egress in fabric 2 to ensure high reliability and
continuity of services. Specifically, assign the uplink and downlink ports to a monitor link group on
the border device, and assign the uplink and downlink ports to an interface collaboration group on
the firewall. When the primary link fails, services are switched to the backup link as expected.
Optimal egress
Figure 7 Multi-fabric topology with optimal egress
For cross-fabric VPC deployment, outbound traffic from the local fabric is preferentially forwarded
through the local fabric. When the border egress of the local fabric fails, outbound traffic is
forwarded through the backup egress of another fabric.
As shown in Figure 7, VPC 1 is deployed in two fabrics. In fabric 1, hosts on network 10.1.1.0/24 are
deployed. In fabric 2, hosts on network 10.1.2.0/24 are deployed. When hosts in fabric 1 access the
external network, traffic is forwarded through the border egress of the local fabric. External network
traffic destined for hosts on network 10.1.1.0/24 is forwarded through the border egress of fabric 1.
Figure 8 Optimal egress implementation
To implement optimal egress, perform the following tasks (see the sketch after this list):
Bind the vRouter to different gateways in the two fabrics, and configure the same priority for the
gateways.
Establish EBGP peers between the two fabrics.
Configure a default route pointing to the external network on the border egress of each fabric.
An additional AS number is added to the default route that fabric 1 receives from fabric 2, so
fabric 1 prefers its local default route.
Configure static routes on the PE for the following purposes:
Preferentially forward external network traffic destined for fabric 1 through the border egress
of fabric 1.
Preferentially forward external network traffic destined for fabric 2 through the border egress
of fabric 2.
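The sketch below restates the outcome of these tasks for the return path: the PE holds per-fabric static routes so that traffic destined for each subnet enters through the border of the fabric where that subnet is deployed. The prefixes follow the Figure 7 example; the next-hop names are assumptions.

```python
def pe_next_hop(destination, static_routes):
    """Return the border egress the PE uses for traffic toward a DC prefix (illustrative)."""
    return static_routes[destination]

# Static routes on the PE: send traffic for each subnet to the border of the fabric
# where that subnet is deployed (prefixes taken from the Figure 7 example).
static_routes = {
    "10.1.1.0/24": "fabric1-border",
    "10.1.2.0/24": "fabric2-border",
}
print(pe_next_hop("10.1.1.0/24", static_routes))   # fabric1-border
print(pe_next_hop("10.1.2.0/24", static_routes))   # fabric2-border
```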
Cross-fabric egress
Figure 9 Multi-fabric topology with cross-fabric egress
vRouter 1 in fabric 1 can be configured with the border gateway in fabric 1. vRouter 2 in fabric 1 can
be configured with the border gateway in fabric 2.
Multi-service overlaying cross-fabric primary/secondary
egress
Figure 10 Multi-service egresses overlaying cross-fabric primary/secondary egress
The primary egresses for service A of tenant 1 are three egresses in fabric 1: the primary/secondary
egress through the firewall and the Internet, the DCN egress through the firewall, and the direct BN
egress. When the border device in fabric 1 fails, services automatically switch to the three
secondary egresses in fabric 2.
The primary egresses for service B of tenant 2 are three egresses in fabric 2: the primary/secondary
egress through the firewall and the Internet, the DCN egress through the firewall, and the direct BN
egress. When the border device in fabric 2 fails, services automatically switch to the three
secondary egresses in fabric 1.
2+1 deployment of multi-fabric controller cluster
Cluster deployment
To support disaster recovery and backup, as a best practice, deploy the DC controllers of a cluster
in different fabrics and reserve a server as a backup node (controller 4 in Figure 11). When the
cluster operates normally, the backup node does not need to be powered on. If multiple servers in
the cluster fail and cause a cluster failure, the backup node can be manually powered on to join the
cluster and quickly recover services.
Figure 11 Disaster recovery network
Disaster recovery of cluster
Single-node failure
Figure 12 Single-node failure
When a single node fails, the entire cluster is not affected and can operate normally, as shown in
Figure 12. Users should repair controller 3 or the network connecting to controller 3 immediately to
avoid impacting the processing performance of the entire cluster. After controller 3 is repaired, it
automatically rejoins the cluster and synchronizes the latest service data from the cluster to ensure
service data consistency across the entire cluster.
Network failure between fabrics
Figure 13 Network failure between fabrics
If the network between the fabrics fails, the controller system in fabric 1 considers DC controller 3
offline, but the entire cluster operates normally and is not affected. DC controller 3 considers that it
has left the cluster and works in standalone mode, allowing users to log in and view its configuration.
However, DC controller 3 does not allow users to configure devices, because doing so would cause
configuration conflicts. Users should repair the network between the fabrics immediately so that
DC controller 3 can rejoin the cluster, and to avoid impacting the processing performance of the
entire cluster.
Dual-node failure
Figure 14 Dual-node failure
In a cluster that contains three leader nodes, if two of them fail, more than half of the nodes in the
cluster have failed and the cluster cannot operate normally. Users should resolve the fault
immediately. In this state, only DC controller 3 can be logged in to. DC controller 3 switches to
emergency mode and provides read-only access, allowing users to view and restore
configuration data.
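The behavior in the single-node, inter-fabric, and dual-node failure cases follows a simple majority rule over the three leader nodes, as illustrated by the hedged sketch below. The state names are assumptions used only to summarize the descriptions above.

```python
def cluster_state(total_leaders, reachable_leaders):
    """Classify cluster health from the number of reachable leader nodes (illustrative)."""
    if reachable_leaders == total_leaders:
        return "normal"
    if reachable_leaders > total_leaders // 2:
        return "degraded-but-operational"   # for example, a single-node failure in a 3-node cluster
    return "emergency-read-only"            # majority lost, for example, a dual-node failure

print(cluster_state(3, 3))   # normal
print(cluster_state(3, 2))   # degraded-but-operational: the cluster still serves requests
print(cluster_state(3, 1))   # emergency-read-only: the surviving node only allows viewing data
```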
Figure 15 Dual-node failure recovery workflow
Figure 15 shows the disaster recovery workflow in case of dual-node failure.
1. When controller 1 and controller 2 fail concurrently, controller 3 detects that the two peer
controllers fail and automatically switches to the emergency mode. In this mode, controller 3
allows users to view configuration data, but does not allow users to deploy configuration to
avoid configuration inconsistency.
2. Power on and start the standby controller (make sure the standby controller has HPE Installer
installed). The standby controller joins the cluster as controller 1, so its IP address, host name,
and NIC name must be consistent with those of controller 1 (see the sketch after this
procedure).
3. Log in to HPE Installer of controller 3 to recover the faulty nodes. During recovery, make sure
controller 1 and controller 2 are powered off or disconnected from the network. This prevents
the faulty controllers from connecting to the cluster and avoids impact caused by network
flapping.
4. Before the system is recovered, log in to the normal controller 3 to view configuration data.
Users can log in to HPE Installer to recover the system. After the standby controller joins the
cluster, the cluster recovers to available status and can deploy configuration normally. After the
cluster is normal, users can repair and recover the original physical servers. If a new physical
server is used to replace the faulty controller 2, log in to HPE Installer to repair it. If the file
system of the original controller 2 can be recovered and the controller can be started, the
controller automatically joins the cluster after it is powered on and started. Then, the cluster
returns to the normal status with three leader nodes operating normally.
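Because the standby controller takes over the identity of the failed controller 1, its IP address, host name, and NIC name must match those of the failed node. The sketch below captures that precondition as a simple check; the field names and values are illustrative assumptions, not the product's actual validation logic.

```python
def can_replace(failed_node, standby_node):
    """Check whether a standby controller can join the cluster in place of a failed node.

    The standby node must reuse the failed node's IP address, host name, and NIC name
    (illustrative check only).
    """
    keys = ("ip_address", "host_name", "nic_name")
    return all(failed_node[key] == standby_node[key] for key in keys)

failed_controller1 = {"ip_address": "192.168.10.1", "host_name": "ctrl1", "nic_name": "eth0"}
standby_controller4 = {"ip_address": "192.168.10.1", "host_name": "ctrl1", "nic_name": "eth0"}
print(can_replace(failed_controller1, standby_controller4))   # True: it can join as controller 1
```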
Multi-fabric connection technology EVI2.0
EVI2.0 uses EVPN to realize VXLAN multi-fabric connection. Ethernet virtual private network
(EVPN) is a Layer 2 VPN technology that uses MP-BGP to advertise EVPN routing information in
the control plane and uses VXLAN encapsulation to forward packets in the data plane. A new
subsequent address family, the EVPN address family, is defined in MP-BGP under the L2VPN
address family, and five types of EVPN network layer reachability information (NLRI) are added. A
VTEP automatically discovers remote VTEPs through the EVPN routing information advertised by
MP-BGP and creates VXLAN tunnels with the remote VTEPs.
Typical network 1 of EVI2.0: VXLAN multi-fabric connection through a single-hop VXLAN
tunnel
As shown in Figure 16, multiple fabrics create a VXLAN tunnel by using EVPN and forward traffic
based on the routing table. An EVI-ED connects to the Spine devices in its VXLAN DC to establish
IBGP neighbor relationships, and connects to a remote EVI-ED to establish an EBGP neighbor
relationship. When advertising a route to an EBGP neighbor, an EVI-ED does not modify the next
hop of the route. As a result, a single-hop VXLAN tunnel is created between Leaf nodes in different
DCs, and no VXLAN tunnel is created between the EVI-EDs themselves.
In single-hop VXLAN tunnel mode, nodes can only be deployed manually and cannot be
automatically deployed by the controller.
Figure 16 Single-hop VXLAN tunnel network
Typical network 2 of EVI2.0: VXLAN multi-fabric connection through multi-hop VXLAN
tunnels
As shown in Figure 17, multiple fabrics create VXLAN tunnels by using EVPN and forward traffic
based on the routing table. An EVI-ED connects to the Spine-RR in its VXLAN DC to establish an
IBGP neighbor relationship. When advertising a route to its RR, the EVI-ED changes the next hop to
a local address. An EVI-ED also establishes an EBGP neighbor relationship with its remote EVI-ED.
When advertising a route to the remote EVI-ED, the EVI-ED changes the router MAC to a local MAC
address. The first-hop VXLAN tunnel is created between a leaf device and an EVI-ED, the
second-hop VXLAN tunnel is created between the EVI-EDs, and the third-hop tunnel is created
between the other EVI-ED and a leaf device.
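The difference from the single-hop design lies in how an EVI-ED rewrites route attributes when re-advertising, which is what splits the path into three tunnel hops. The sketch below only illustrates that rewriting rule; the attribute names are simplified assumptions.

```python
def readvertise(route, direction, local_ip, local_router_mac):
    """Model how an EVI-ED re-advertises an EVPN route in the multi-hop design (illustrative).

    Toward the local RR (IBGP) the EVI-ED sets the next hop to its own address;
    toward the remote EVI-ED (EBGP) it rewrites the router MAC to its own MAC.
    """
    updated = dict(route)
    if direction == "to-local-rr":
        updated["next_hop"] = local_ip
    elif direction == "to-remote-ed":
        updated["router_mac"] = local_router_mac
    return updated

leaf_route = {"prefix": "10.1.1.0/24", "next_hop": "10.1.255.11",
              "router_mac": "00:aa:bb:cc:dd:01"}
# The EVI-ED of fabric 1 forwards the route to the remote EVI-ED with its own router MAC.
print(readvertise(leaf_route, "to-remote-ed", "10.1.255.1", "00:aa:bb:cc:dd:ff"))
```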
Figure 17 Multi-hop VXLAN tunnel network
The controller can automatically deploy Overlay configurations to cross-fabric ED devices.
The solution recommends the multi-hop VXLAN tunnel network. Table 1 shows its advantages over
the single-hop VXLAN tunnel network.
Table 1 Comparison in detail

Requirement on the number of VXLAN tunnels
Single-hop network: High (Leaf devices are connected in full-mesh mode across DCs in extreme environments).
Multi-hop network: Low (Leaf devices are connected in full-mesh mode only within their local DC).

VXLAN planning requirement
Single-hop network: VXLANs must be uniformly planned across the DCs.
Multi-hop network: Not required. VXLANs can be planned for each DC separately.

Network scale
Single-hop network: Small- and medium-sized.
Multi-hop network: Large-sized.

Support for management domain and fault domain isolation
Single-hop network: No.
Multi-hop network: Yes.

Technical complexity
Single-hop network: Low.
Multi-hop network: Relatively high.
Cross-fabric service chain
Figure 18 Cross-fabric service chain
On a traditional network, data packets must pass through different service nodes as they are
transmitted. This ensures that the network provides users with secure, fast, and stable services in
accordance with the design requirements. The network traffic passes through these service nodes
(typically security devices such as firewalls, load balancers, and third-party security devices) in the
preset order required by the service logic. This workflow constitutes a service chain.
In the IMC Orchestrator SDN network, the Overlay network is separated from the Underlay network
and is carried by the Underlay network. IMC Orchestrator can steer traffic through service nodes
and forward traffic to these service nodes in a flexible, convenient, efficient, and secure way. The
entire process is independent of the network topology and constitutes the service function chaining
of the Overlay network defined in SDN.
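A service chain can be modeled as an ordered list of service nodes plus a classifier that attaches a chain label and selects the next hop, as in the hedged sketch below. The label value and node names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceChain:
    """An ordered sequence of service nodes that classified traffic must traverse (illustrative)."""
    label: int                                   # service chain label attached by the classifier
    nodes: list = field(default_factory=list)    # for example, ["FW", "LB"]

    def next_hop(self, current=None):
        """Return the next service node after 'current', or the first node for new traffic."""
        if current is None:
            return self.nodes[0]
        idx = self.nodes.index(current)
        return self.nodes[idx + 1] if idx + 1 < len(self.nodes) else None

# The service leaf classifies a flow, attaches label 100, and sends it to the firewall first.
chain = ServiceChain(label=100, nodes=["FW", "LB"])
print(chain.next_hop())        # FW
print(chain.next_hop("FW"))    # LB
print(chain.next_hop("LB"))    # None: end of chain, traffic resumes normal forwarding
```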
According to the deployment method, cross-fabric service chains include the following types:
The security nodes of a service chain are in one fabric. Only one DC/fabric service chain is
selected. Traffic on the service chain is marked and forwarded as follows:
a. The local Service-leaf device classifies the traffic, attaches a service chain label to the
traffic, and specifies a next hop.
b. The ED node forwards DCI traffic.
Additionally, the ED can act as a proxy forwarding node that identifies the service chain label in the
traffic and specifies the next hop for the traffic. For example, if the source signature group and the
service chain are in the same DC, the reverse service chain requires the proxy forwarding node to
identify the service chain label in the traffic, remark the service chain label, and specify the next
hop.
The security nodes of a service chain are in two DCs/fabrics. Traffic on the service chain is
marked and forwarded as follows:
a. The local Service-leaf device classifies the traffic, attaches a service chain label to the
traffic, and specifies the next hop.
b. The ED node forwards DCI traffic. Additionally, the ED can act as a proxy forwarding node,
which can identify the service chain label in the traffic, and specify the next hop for the
traffic.