IMC Orchestrator 6.2 OpenStack Cloud
Scenario Service Configuration Guide
The information in this document is subject to change without notice.
© Copyright 2022 Hewlett Packard Enterprise Development LP
Contents

Overview
Configure basic underlay network settings
Configure basic security device settings
Configure basic controller settings
    Log in to the controller
    Add a fabric
    Configure a VDS
    Configure global parameters
    Configure security service resources
    Add a border device group
    Add a tenant
    Add a border gateway
Configure basic OpenStack settings
Configure a hybrid overlay network
    Network planning
        Network topology
        Resource plan
    Deployment workflow
    Procedure
        Configure vBGP
        Configure network overlay (non-hierarchical onboarding)
        Configure network overlay (hierarchical onboarding)
        Configure the network with direct egress
        Configure the network with security egress
Configure OpenStack bare metal
    Network planning
        Network topology
        Resource plan
    Deployment workflow
    Procedure
        Configure the compute node
        Configure bonding interfaces on the ironic node
        Configure IMC Orchestrator settings
        Configure OpenStack resources
        Configure settings in the inspection phase for bare metal nodes
        Configure settings in the provisioning phase for bare metal nodes
        Configure settings in the running phase for bare metal nodes
O&M monitoring
Overview
IMC Orchestrator supports the hybrid overlay scenario of interoperating with native OpenStack. This scenario implements a hybrid traffic model that combines a weak-control EVPN network overlay with a strong-control host overlay. OpenStack is available in many releases; this configuration guide uses the Rocky release as an example and describes the direct egress and security egress network scenario models.
Configure basic underlay network settings
Configure and incorporate switching devices on the network. For more information, see IMC
Orchestrator 6.2 Underlay Network Configuration Guide.
Configure basic security device settings
Configure security devices on the network as needed. For more information, see IMC Orchestrator
6.2 Security Service Resource Configuration Guide.
Configure basic controller settings
Log in to the controller
After the controller is deployed, the corresponding menus are loaded in the unified platform. Log in
to the unified platform to use the controller functions.
The unified platform provides a friendly UI. To log in to the unified platform:
1. In the address bar of the browser, enter the login address of the unified platform (the default is http://ip_address:30000/central/index.html), and then press Enter to open the login page.
○ The ip_address parameter specifies the northbound service VIP of the cluster of the Installer where the unified platform is installed.
○ 30000 is the port number.
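For example, if the northbound service VIP of the Installer cluster is 192.168.11.100 (a hypothetical address used only for illustration), the login address would be http://192.168.11.100:30000/central/index.html.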
Add a fabric
1. Navigate to the Automation > Data Center Networks > Fabrics > Fabrics page. Click Add. On the page that opens, specify the following fabric parameters:
○ Name: fabric1.
○ Overlay BGP AS Number: Required. The AS number must be the same as the BGP AS number on devices in the fabric, for example, 100. You can verify the AS number on a device as shown below.
○ Specify the other parameters as needed. In this example, use the default settings.
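If you are unsure of the BGP AS number configured on a fabric device, one way to check it (a sketch, assuming Comware-based switches) is to display the BGP section of the running configuration on the device; the first line of the BGP view shows the AS number (output abbreviated):
<Leaf1> display current-configuration configuration bgp
#
bgp 100
 ...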
Figure 1 Adding a fabric
2. Click OK.
3. After the fabric is created, click the icon in the Actions column for the fabric, and then
click the Settings tab. On this tab, you can configure advanced settings for the fabric
according to the network requirements. As a best practice to reduce packet flooding in the
network, select the Unknown Unicast Suppression, Unknown Multicast Suppression,
and Broadcast Suppression options. Use the default settings for the other parameters.
Figure 2 Advanced configuration for a fabric
Configure a VDS
1. Navigate to the Automation > Data Center Networks > Common Network Settings >
Virtual Distributed Switch page. Click the Edit icon in the Actions column for VDS1 to enter
the page for editing VDS1. Click the Carrier Fabric tab. On this tab, add fabric fabric1.
Figure 3 Adding a fabric to a VDS
2. Click the Advanced Settings tab. On this tab, configure advanced settings for VDS1.
○ Bridge Name: vds1-br.
○ VXLAN Tunnel Interface Name: vxlan_vds1-br.
○ vSwitch Learned Entries Aging Time (seconds): 300.
○ Specify the other parameters as needed. In this example, use the default settings.
Figure 4 Advanced settings
Configure global parameters
1. If IPv6 services exist in the network, enable global IPv6 configuration on the controller to ensure proper operation of those services, as follows:
a. Navigate to the Automation > Data Center Networks > Fabrics > Parameters page.
Click the Controller Global Settings tab.
b. Select On for the IPv6 field.
2. Disable deploying security policy flow table to switching devices as follows:
a. Navigate to the Automation > Data Center Networks > Fabrics > Parameters page.
Click the Controller Global Settings tab.
b. Select Off for the Deploy Security Policy Flow Table to Switching Devices field.
3. To generate the VRF names according to rules, you must specify the global VRF autonaming
mode on the controller as follows:
a. Navigate to the Automation > Data Center Networks > Fabrics > Parameters page.
Click the Controller Global Settings tab.
b. Select Rule-Based for the VRF Autonaming Mode field. The generated VRF names are
in the format of tenant name_router name_segment ID.
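For example, with tenant tenant1, vRouter router102, and a segment ID of 20150 (the segment ID here is a hypothetical value used only for illustration), the rule-based mode would generate a VRF named tenant1_router102_20150.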
Figure 5 Controller global settings
Configure security service resources
According to the service gateway section for the corresponding device model in IMC Orchestrator
6.2 Security Service Resource Configuration Guide, complete the following configurations:
• Configure IP address pools. In this scenario, you must configure three IP address pools,
including the tenant carrier LB internal network address pool, the tenant carrier FW internal
network address pool, and the virtual management network address pool. See Table 1 for IP
address pool planning.
• Configure VLAN pools. In this scenario, you must configure the tenant carrier network VLAN pool. See Table 1 for VLAN pool planning.
• Configure and incorporate security devices on the network. Create L4-L7 resource pools and templates: create two resource pools, FWpool1 and LBpool1.
Table 1 Security service resource table

IP address pools:
• Tenant carrier LB internal network
○ Name: Tenant carrier LB internal network 1
○ Address ranges: 10.50.1.2/24 to 10.50.1.254/24, and 2001::10:50:1:2/112 to 2001::10:50:1:254/112
○ Default address pool: Unselected
○ Description: On an RBM network, the LB service of a vRouter uses two IPv4 addresses with a mask length of 31 and two IPv6 addresses with a prefix length of 127.
• Tenant carrier FW internal network
○ Name: Tenant carrier FW internal network 1
○ Address ranges: 10.60.1.2/24 to 10.60.1.254/24, and 2001::10:60:1:2/112 to 2001::10:60:1:254/112
○ Default address pool: Unselected
○ Description: On an RBM network, the FW service of a vRouter uses two IPv4 addresses with a mask length of 31 and two IPv6 addresses with a prefix length of 127.
• Virtual management network
○ Name: Virtual management network 1
○ Address ranges: 192.168.10.2/24 to 192.168.10.254/24
○ Gateway address: 192.168.10.1
○ Default address pool: Unselected
○ Description: On an RBM network, the primary vFW context and the secondary vFW context each use one IPv4 address.

VLAN pools:
• Tenant carrier network
○ Name: Tenant carrier vlan1
○ VLAN range: 500 to 999
○ Default VLAN pool: Unselected
○ Description: On an RBM network, the FW service of one vRouter uses one VLAN ID resource, and the LB service of one vRouter uses one VLAN ID resource.
Add a border device group
1. Navigate to the Automation > Data Center Networks > Fabrics > Fabrics page. Click the
icon in the Actions column for fabric1. Click the Border Device Groups tab.
2. Click Add. On the page that opens, configure the following parameters:
○ Device Group Name: bdgroup1.
○ Position: Border Gateway.
○ HA Mode: DRNI.
○ Connection Mode: IPs from Different Networks.
○ Address Pools: Tenant Carrier FW Internal Network address pool 1, Tenant Carrier LB Internal Network address pool 1, Virtual Management Network address pool 1.
○ VLANs: VLAN 1 of the tenant carrier network type.
Figure 6 Adding a border device group
3. In the device group member area, add border devices to the border device group.
4. Click Apply in the upper right corner.
Add a tenant
1. Navigate to the Automation > Data Center Networks > Tenant Management > All Tenants
page. Click Add. On the page that opens, configure the following parameters:
○ Tenant Name: tenant1.
○ VDS Name: VDS1.
Figure 7 Adding a tenant
2. Click Apply.
Add a border gateway
This section provides only a configuration example. For detailed configuration contents and data,
see the basic controller settings section for each scenario.
1. Navigate to the Automation > Data Center Networks > Common Network Settings >
Border Gateways page. Click Add. On the page that opens, configure the following
parameters:
○ Name: gw1.
○ Gateway Sharing: Off.
○ Gateway Type: Composite Gateway.
○ Specify the other parameters as needed. In this example, use the default settings.
Figure 8 Adding a border gateway
2. Click Add Gateway Member. On the page that opens, configure the following parameters:
○ Name: gw1member.
○ Fabric: fabric1.
○ Device Group: bdgroup1.
○ Priority: 1.
3. You can add resources on the page for adding or editing a gateway member. On this page,
configure the following parameters:
○ Service Type: Options include Virtual Firewall and Virtual Load Balancer. Select an option according to the network requirements.
○ Source from: Options include VNF Resources and L4-L7 Physical Resource Pool. Select an option according to the network requirements.
○ Resource Pool: Select an existing resource pool according to the network requirements.
Figure 9 Adding a border gateway member
4. Click Apply. On the page for adding a border gateway, click Apply.
Configure basic OpenStack settings
Before performing configuration tasks in this chapter, first install the HPE IMC Orchestrator
OpenStack plug-in in OpenStack to enable the controller to interoperate with OpenStack. For how
to install the HPE IMC Orchestrator OpenStack plug-in, see HPE IMC Orchestrator Converged
OpenStack Plug-Ins Installation Guide.
Perform the following tasks when OpenStack interoperates with the controller for the first time.
Add OpenStack nodes on the controller
1. Navigate to the Automation > Data Center Networks > Virtual Networking > OpenStack
page. Click Add. The page for adding an OpenStack control node opens, as shown in Figure
10.
Configure the parameters as follows:
○ After adding a node, you cannot modify its name. The node name must be the same as the value of the cloud_region_name field in the plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini.
○ The VNI range must be the same as the value of the vni_ranges field in the /etc/neutron/plugins/ml2/ml2_conf.ini file. The VNI ranges of different OpenStack nodes cannot overlap. (See the configuration sketch after this list.)
○ If the controller nodes operate in HA mode, you must add the addresses of all controller nodes rather than only the cluster VIP. If the keystone service is shared, you must add two OpenStack nodes.
○ You must install the websocket-client toolkit on the controller nodes.
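The following is a minimal sketch of the two plug-in fields referenced above in /etc/neutron/plugins/ml2/ml2_conf.ini. The node name, the VNI range, and the section that carries cloud_region_name are assumptions for illustration only; use the exact section names documented in HPE IMC Orchestrator Converged OpenStack Plug-Ins Installation Guide:
[ml2_type_vxlan]
# Must match the VNI range entered for this OpenStack node on the controller.
vni_ranges = 2000:2120
[sdn]
# Hypothetical section name; place cloud_region_name in the section required by the plug-in.
# Must match the node name added on the controller.
cloud_region_name = openstack1
To install the websocket-client toolkit on a controller node, a typical approach (assuming Python and pip are available on the node) is:
pip install websocket-client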
Figure 10 Adding OpenStack nodes on the controller side
2. Click the Parameter Settings tab. Configure parameters. For how to configure these
parameters, see the controller help or HPE IMC Orchestrator Converged OpenStack Plug-Ins
Installation Guide.
Figure 11 OpenStack parameter settings
NOTE:
• When the tenant border gateway policy is configured as matching the border gateway
name in the egress settings, you must enter the border gateway name correctly.
• In the network fail-open scenario, you must enable the DHCP agent, and as a best
practice, configure the network node access policy as No Access.
• When the network node access policy is configured as VLAN, the DHCP-type vPorts will
be activated on the controller (come online in hierarchical mode), and the packets will be
encapsulated as VLAN packets when they are sent out of network nodes. When the
network node access policy is configured as VXLAN, the DHCP-type vPorts are not
activated on the controller, and the packets are encapsulated as VXLAN packets when
they are sent out of the network nodes. When a network node accesses a DR system,
only the VLAN mode is supported.
• If a network node can act as a DHCP server in the network, you must disable the function
of sending DHCP packets to the controller in the vNetwork settings on the controller side.
As a best practice, use the controller as the DHCP server (enable the function of sending
DHCP packets to the controller).
Configure the default VDS and VXLAN pool
This task is required only for interoperating with the controller for the first time.
In the current software version, only one VDS is supported.
To configure the default VDS and VXLAN pool:
1. Navigate to the Automation > Data Center Networks > Common Network Settings >
Virtual Distributed Switch page. Click Settings. In the dialog box that opens, select the
system VDS.
Figure 12 Setting a VDS
2. Navigate to the Automation > Data Center Networks > Resource Pools > VNID Pools >
VXLANs page. Click Add. The page for adding VXLANs opens. Add a VXLAN pool.
Figure 13 Configuring VXLAN pools
NOTE:
Create a VXLAN pool for uniformly managing the L3VNI segments of user vRouters.
Configure the default cloud platform
This task is required only when OpenStack has interoperated with the controller before (that is, this is not the first-time interoperation) and the previous plug-in configuration items do not have the multi-cloud attribute.
To configure the default cloud platform:
1. Navigate to the Automation > Data Center Networks > Virtual Networking > OpenStack
page. Click Default Cloud Platform. In the dialog box that opens, select the default cloud
platform.
Figure 14 Setting the default cloud platform
Configure a hybrid overlay network
The campus access devices do not support bare metal access.
Network planning
Network topology
Figure 15 OpenStack hybrid overlay network diagram
A hybrid overlay network includes a host overlay network and a network overlay network. The ARP proxy function is not supported on the host overlay side. The LB VIP and the LB members cannot be on the same subnet, and the client and the VIP cannot be on the same subnet.
On a host overlay network, the vSwitches on OpenStack compute nodes act as VTEPs and establish
VXLAN tunnels to other VTEPs for forwarding VXLAN packets. On a host overlay network, you need
the vBGP component to provide the IBGP routing function. The vBGP component is deployed on the
IMC PLAT server. The IMC PLAT server is connected to the management switch through an
aggregate interface, and the management switch is connected to Spine1 and Spine2. An IBGP
neighbor is established between the vBGP component and RR to synchronize route information.
On a network overlay network, leaf switches act as VTEPs and establish VXLAN tunnels to other
VTEPs, and the vSwitches on OpenStack compute nodes do not act as VTEPs or provide VXLAN
packet forwarding. The network overlay includes hierarchical onboarding and non-hierarchical
onboarding scenarios. In the non-hierarchical onboarding scenario, the controller deploys VXLAN
configuration to leaf devices based on VLAN-VXLAN mappings. In the hierarchical onboarding
scenario, the controller deploys VXLAN configuration to leaf devices based on VXLAN-VLAN mappings on different compute nodes that are dynamically negotiated by OpenStack and the controller. (In the topology in Figure 15, OpenStack compute nodes 1 and 2 use host overlay, compute nodes 3 and 4 use non-hierarchical network overlay, and compute nodes 5 and 6 use hierarchical network overlay.)
For the connections between switching devices, see IMC Orchestrator 6.2 Underlay Network
Configuration Guide. For the connections between devices and OpenStack compute nodes, see
Table 2.
Table 2 Device IP addresses and interfaces on the network

• Spine1 (underlay physical device)
○ Management IP: 192.168.11.2
○ Service IP and interfaces: Loopback0 10.1.1.2/32; XGE6/0/20 (connecting to XGE1/0/5 on the management switch)
• Spine2 (underlay physical device)
○ Management IP: 192.168.11.3
○ Service IP and interfaces: Loopback0 10.1.1.3/32; XGE6/0/20 (connecting to XGE1/0/6 on the management switch)
• vBGP node 1 (vBGP)
○ Management IP: 192.168.13.4
○ Service interface: vBGP service aggregate interface on the physical server of vBGP node 1 (connecting to BAGG4 on the management switch, with member ports XGE1/0/3 and XGE1/0/4)
• vBGP node 2 (vBGP)
○ Management IP: 192.168.13.3
○ Service interface: vBGP service aggregate interface on the physical server of vBGP node 2 (connecting to BAGG5 on the management switch, with member ports XGE1/0/5 and XGE1/0/6)
• Leaf1 (access switch)
○ Management IP: 192.168.11.31
○ Service interfaces: XGE1/0/40 (connecting to ens1f0 on OpenStack compute node 1); XGE1/0/41 (connecting to ens1f0 on OpenStack compute node 2)
• Leaf2 (access switch)
○ Management IP: 192.168.11.32
○ Service interfaces: XGE1/0/40 (connecting to ens1f1 on OpenStack compute node 1); XGE1/0/41 (connecting to ens1f1 on OpenStack compute node 2)
• Leaf3 (access switch)
○ Management IP: 192.168.11.33
○ Service interfaces: XGE1/0/42 (connecting to ens1f0 on OpenStack compute node 3); XGE1/0/43 (connecting to ens1f0 on OpenStack compute node 4)
• Leaf4 (access switch)
○ Management IP: 192.168.11.34
○ Service interfaces: XGE1/0/42 (connecting to ens1f1 on OpenStack compute node 3); XGE1/0/43 (connecting to ens1f1 on OpenStack compute node 4)
• Leaf5 (access switch)
○ Management IP: 192.168.11.35
○ Service interfaces: XGE1/0/44 (connecting to ens1f0 on OpenStack compute node 5); XGE1/0/45 (connecting to ens1f0 on OpenStack compute node 6)
• Leaf6 (access switch)
○ Management IP: 192.168.11.36
○ Service interfaces: XGE1/0/44 (connecting to ens1f1 on OpenStack compute node 5); XGE1/0/45 (connecting to ens1f1 on OpenStack compute node 6)
• OpenStack compute node 1, host name compute-host-vxlan-1 (compute node)
○ Management IP: 192.168.11.164
○ Service interfaces: ens1f0 (connecting to XGE1/0/40 on Leaf1); ens1f1 (connecting to XGE1/0/40 on Leaf2)
• OpenStack compute node 2, host name compute-host-vxlan-2 (compute node)
○ Management IP: 192.168.11.165
○ Service interfaces: ens1f0 (connecting to XGE1/0/41 on Leaf1); ens1f1 (connecting to XGE1/0/41 on Leaf2)
• OpenStack compute node 3, host name compute-host-vxlan-3 (compute node)
○ Management IP: 192.168.11.166
○ Service interfaces: ens1f0 (connecting to XGE1/0/42 on Leaf3); ens1f1 (connecting to XGE1/0/42 on Leaf4)
• OpenStack compute node 4, host name compute-host-vxlan-4 (compute node)
○ Management IP: 192.168.11.167
○ Service interfaces: ens1f0 (connecting to XGE1/0/43 on Leaf3); ens1f1 (connecting to XGE1/0/43 on Leaf4)
• OpenStack compute node 5, host name compute-host-vxlan-5 (compute node)
○ Management IP: 192.168.11.168
○ Service interfaces: ens1f0 (connecting to XGE1/0/44 on Leaf5); ens1f1 (connecting to XGE1/0/44 on Leaf6)
• OpenStack compute node 6, host name compute-host-vxlan-6 (compute node)
○ Management IP: 192.168.11.169
○ Service interfaces: ens1f0 (connecting to XGE1/0/45 on Leaf5); ens1f1 (connecting to XGE1/0/45 on Leaf6)
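To confirm that the physical cabling matches Table 2 before proceeding, one option (a sketch, assuming LLDP is enabled on the leaf switches and on the compute-node NICs) is to list the LLDP neighbors on each leaf switch and compare the local and remote port names against the table:
<Leaf1> display lldp neighbor-information list
On Leaf1, for example, you would expect ens1f0 of compute node 1 to appear on XGE1/0/40 and ens1f0 of compute node 2 on XGE1/0/41.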
Resource plan
Table 3 Resource plan (all data planned for fabric1)

• vBGP service network
○ Subnet: 192.168.13.0/24
○ Gateway address: 192.168.13.1
○ vBGP cluster address: 192.168.13.2
○ vBGP node 1 address: 192.168.13.4
○ vBGP node 2 address: 192.168.13.3
○ Spine1 node address: 192.168.13.5
○ Spine2 node address: 192.168.13.6
• BGP AS number: 100
• Host overlay
○ VXLAN ID range: 20121 to 20221
• Network overlay, non-hierarchical onboarding
○ VLAN ID range: 2000 to 2120
○ VXLAN ID range: 2000 to 2120
• Network overlay, hierarchical onboarding
○ VLAN ID range: 2121 to 2221
○ VXLAN ID range: 20121 to 20221 (shared with the host overlay)
• Tenant network (network overlay + non-hierarchical onboarding)
○ Subnet: 11.1.1.0/24, gateway: 11.1.1.1
○ Network name: network101
○ Subnet name: subnet1
○ VLAN ID: Automatically allocated
• Tenant network (host overlay and network overlay + hierarchical onboarding)
○ Subnet: 11.1.2.0/24, gateway: 11.1.2.1
○ Network name: network102
○ Subnet name: subnet2
○ VXLAN ID: Automatically allocated
• Tenant router (security egress)
○ Router name: router102
○ VXLAN ID: Automatically allocated
• Tenant router (directly connected to the external network)
○ Router name: router101_DMZ
○ VXLAN ID: Automatically allocated
• External network 1 (security egress)
○ Subnet: 21.1.1.0/24, gateway: 21.1.1.1
○ Network name: extnetwork101
○ Subnet name: extsubnet1
○ VLAN ID: 4000
• External network 2 (direct egress)
○ Subnet: 100.0.1.0/24, gateway: 100.0.1.1
○ Network name: extnetwork102
○ Subnet name: extsubnet2
○ VLAN ID: 4001
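For orientation only: the IBGP EVPN peering that a spine acting as route reflector forms toward the vBGP cluster address (192.168.13.2 in AS 100, per the plan above) conceptually resembles the following Comware sketch. In this solution the peering is established through the vBGP configuration procedure and the controller, so treat this as an illustration rather than configuration to enter manually:
bgp 100
 peer 192.168.13.2 as-number 100
 #
 address-family l2vpn evpn
  peer 192.168.13.2 enable
  peer 192.168.13.2 reflect-client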
Deployment workflow
Figure 16 Deployment workflow
Among the vBGP, host overlay, and network overlay configurations, if you only need to configure
OpenStack+network overlay, you can skip the vBGP and host overlay configuration procedures. If
you only need to configure OpenStack+host overlay, you can skip the network overlay configuration
procedure.
When you configure the network with direct egress and the network with security egress, configure
the network according to the actual network conditions.
Procedure
Configure vBGP
You must configure vBGP for the hybrid overlay network. You do not need to configure vBGP for the
network overlay network.
Deploy the vBGP component
For how to deploy the vBGP component, see HPE IMC Orchestrator Installation Guide (IMC PLAT).
Preconfigure the underlay
Preconfigure the management switch
1. Create the management VLAN.
[DC-MGMT] vlan 22
[DC-MGMT-vlan22] quit
2. Configure the aggregate interface that connects to the vBGP service interface on the physical server of vBGP node 1.
[DC-MGMT] interface Bridge-Aggregation4
[DC-MGMT-Bridge-Aggregation4] link-aggregation mode dynamic
[DC-MGMT-Bridge-Aggregation4] quit
[DC-MGMT] interface Ten-GigabitEthernet1/0/3
[DC-MGMT-Ten-GigabitEthernet1/0/3] port link-aggregation group 4
[DC-MGMT-Ten-GigabitEthernet1/0/3] interface Ten-GigabitEthernet1/0/4
[DC-MGMT-Ten-GigabitEthernet1/0/4] port link-aggregation group 4
[DC-MGMT-Ten-GigabitEthernet1/0/4] interface Bridge-Aggregation4
[DC-MGMT-Bridge-Aggregation4] port access vlan 22
[DC-MGMT-Bridge-Aggregation4] quit
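The aggregate interface toward vBGP node 2 is configured in the same way. The following is a sketch based on the plan in Table 2 (BAGG5 with member ports XGE1/0/5 and XGE1/0/6); adjust the aggregation group number and member ports to your actual cabling:
[DC-MGMT] interface Bridge-Aggregation5
[DC-MGMT-Bridge-Aggregation5] link-aggregation mode dynamic
[DC-MGMT-Bridge-Aggregation5] quit
[DC-MGMT] interface Ten-GigabitEthernet1/0/5
[DC-MGMT-Ten-GigabitEthernet1/0/5] port link-aggregation group 5
[DC-MGMT-Ten-GigabitEthernet1/0/5] interface Ten-GigabitEthernet1/0/6
[DC-MGMT-Ten-GigabitEthernet1/0/6] port link-aggregation group 5
[DC-MGMT-Ten-GigabitEthernet1/0/6] interface Bridge-Aggregation5
[DC-MGMT-Bridge-Aggregation5] port access vlan 22
[DC-MGMT-Bridge-Aggregation5] quit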