HPE IMC Orchestrator 6.3 OpenStack Cloud
Scenario Service Configuration Guide
The information in this document is subject to change without notice.
© Copyright 2023 Hewlett Packard Enterprise Development LP
Contents
Overview
Configure basic underlay network settings
Configure basic security device settings
Configure basic controller settings
    Log in to the controller
    Add a fabric
    Configure a VDS
    Configure global parameters
    Add a device group
    Add a tenant
Configure basic OpenStack settings
Configure OpenStack bare metal
    Network planning
        Network topology
        Resource plan
    Deployment workflow
    Procedure
        Configure the compute node
        Configure bonding interfaces on the ironic node
        Configure IMC Orchestrator settings
        Configure OpenStack resources
        Configure settings in the inspection phase for bare metal nodes
        Configure settings in the provisioning phase for bare metal nodes
        Configure settings in the running phase for bare metal nodes
O&M monitoring
Overview
IMC Orchestrator supports a hybrid overlay scenario in which it interoperates with native OpenStack. This scenario mainly implements a hybrid traffic model that contains a weak-control EVPN network overlay.
OpenStack is available in many releases. This configuration guide uses the Rocky release as an example to describe the passthrough and secure incorporation network scenario models.
Configure basic underlay network
settings
Configure and incorporate switching devices on the network. For more information, see IMC
Orchestrator 6.3 Underlay Network Configuration Guide.
Configure basic security device settings
Configure security devices on the network as needed. For more information, see IMC Orchestrator
6.3 Security Service Resource Configuration Guide.
Configure basic controller settings
Log in to the controller
After the controller is deployed, the corresponding menus are loaded in the IMC PLAT. Log in to the
IMC PLAT to use the controller functions.
The IMC PLAT provides a friendly UI. To log in to the IMC PLAT:
1. In the address bar of the browser, enter the login address of the IMC PLAT (the default is http://ip_address:30000/central/index.html), and then press Enter to open the login page.
The ip_address argument is the northbound service VIP of the Installer cluster where the IMC PLAT is installed.
30000 is the port number.
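For example, if the northbound service VIP of the Installer cluster is 192.168.100.10 (a hypothetical address used only for illustration), the login address would be http://192.168.100.10:30000/central/index.html.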
Figure 1 IMC PLAT login page
Add a fabric
1. Navigate to the Automation > Data Center Networks > Fabrics > Fabrics page. Click Add.
On the page that opens, specify the following fabric parameters:
Name: fabric1.
Overlay BGP AS Number: Required. The AS number must be the same as the BGP AS
number on devices in the fabric, for example, 100.
Specify the other parameters as needed. In this example, use the default settings.
Figure 2 Adding a fabric
2. Click OK.
3. After the fabric is created, click the icon in the Actions column for the fabric, and then click
the Settings tab. On this tab, you can configure advanced settings for the fabric according to
the network requirements. As a best practice to reduce packet flooding in the network, select
the Unknown Unicast Suppression, Unknown Multicast Suppression, and Broadcast
Suppression options. Use the default settings for the other parameters.
Figure 3 Advanced configuration for a fabric
Configure a VDS
1. Navigate to the Automation > Data Center Networks > Common Network Settings >
Virtual Distributed Switch page. Click the Edit icon in the Actions column for VDS1 to enter
the page for editing VDS1. Click the Carrier Fabric tab. On this tab, add fabric fabric1.
Figure 4 Adding a fabric to a VDS
2. Click the Advanced Settings tab. On this tab, configure advanced settings for VDS1.
Bridge Name: vds1-br.
VXLAN Tunnel Interface Name: vxlan_vds1-br.
vSwitch Learned Entries Aging Time (seconds): 300.
Specify the other parameters as needed. In this example, use the default settings.
Figure 5 Advanced settings
Configure global parameters
1. If IPv6 services exist on the network, enable global IPv6 configuration on the controller as follows to ensure that IPv6 services operate correctly:
a. Navigate to the Automation > Data Center Networks > Fabrics > Parameters page.
Click the Controller Global Settings tab.
b. Select On for the IPv6 field.
2. Disable deployment of the security policy flow table to switching devices as follows:
a. Navigate to the Automation > Data Center Networks > Fabrics > Parameters page.
Click the Controller Global Settings tab.
b. Select Off for the Deploy Security Policy Flow Table to Switching Devices field.
3. To generate the VRF names according to rules, you must specify the global VRF autonaming
mode on the controller as follows:
a. Navigate to the Automation > Data Center Networks > Fabrics > Parameters page.
Click the Controller Global Settings tab.
b. Select Rule-Based for the VRF Autonaming Mode field. The generated VRF names are in the tenant name_router name_segment ID format, for example, tenant1_router1_3002.
Figure 6 Controller global settings
Add a device group
1. Navigate to the Automation > Data Center Networks > Fabrics > Fabrics page. Click the
icon in the Actions column for fabric1. Click the Device groups tab.
2. Click Add. On the page that opens, configure the following parameters:
Device Group Name: bdgroup1.
Position: Border Gateway.
HA Mode: DRNI.
Connection Mode: IPs from Different Networks.
Address Pools: Tenant Carrier FW Internal Network address pool 1, Tenant Carrier LB
Internal Network address pool 1, Virtual Management Network address pool 1.
VLANs: VLAN 1 of the tenant carrier network type.
Figure 7 Adding a device group
3. In the device group member area, add border devices to the device group.
4. Click Apply in the upper right corner.
Add a tenant
1. Navigate to the Automation > Data Center Networks > Tenant Management > All Tenants
page. Click Add. On the page that opens, configure the following parameters:
Tenant Name: tenant1.
VDS Name: VDS1.
Figure 8 Adding a tenant
2. Click Apply.
Configure basic OpenStack settings
Before performing configuration tasks in this chapter, first install the HPE IMC Orchestrator
OpenStack plug-in in OpenStack to enable the controller to interoperate with OpenStack. For how
to install the HPE IMC Orchestrator OpenStack plug-in, see HPE IMC Orchestrator Converged
OpenStack Plug-Ins Installation Guide.
NOTE:
If the IMC Orchestrator controller has interoperated with a non-converged OpenStack plug-in, see
the chapter about upgrading non-converged plug-ins to converged plug-ins in HPE IMC
Orchestrator Converged OpenStack Plug-Ins Installation Guide.
Perform the following tasks when OpenStack interoperates with the controller for the first time.
Add OpenStack nodes on the controller
1. Navigate to the Automation > Data Center Networks > Virtual Networking > OpenStack
page. Click Add. The page for adding an OpenStack control node opens, as shown in Figure
9.
Configure the parameters as follows:
After a node is added, you cannot modify its name. The node name must be the same as the value of the cloud_region_name field in the plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini.
The VNI range must be the same as the value of the vni_ranges field in the /etc/neutron/plugins/ml2/ml2_conf.ini file. The VNI ranges of different OpenStack nodes cannot overlap. For a sample of these fields, see the configuration snippet after Figure 9.
If the controller nodes operate in HA mode, you must add the addresses of all controller nodes rather than only the cluster VIP. If the keystone service is shared, you must add two OpenStack nodes.
You must install the websocket-client toolkit on the controller nodes.
Figure 9 Adding OpenStack nodes on the controller side
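The following is a minimal sketch of how the two fields referenced above might appear in the plug-in configuration file. The section names and values shown here are illustrative assumptions; keep the sections and values created for your own deployment:
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_vxlan]
# The VNI range reserved for this OpenStack node. It must match the VNI range
# entered for the node on the controller and must not overlap with the ranges
# of other OpenStack nodes.
vni_ranges = 10000:19999
# In the plug-in's own section of this file, cloud_region_name must match the
# node name entered on the controller, for example:
# cloud_region_name = openstack-region1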
2. Click the Parameter Settings tab. Configure parameters. For how to configure these
parameters, see the controller help or HPE IMC Orchestrator Converged OpenStack Plug-Ins
Installation Guide.
Figure 10 OpenStack parameter settings
NOTE:
When the tenant border gateway policy is configured as matching the border gateway
name in the egress settings, you must enter the border gateway name correctly.
In the network fail-open scenario, you must enable the DHCP agent, and as a best
practice, configure the network node access policy as No Access.
When the network node access policy is configured as VLAN, the DHCP-type vPorts will
be activated on the controller (come online in hierarchical mode), and the packets will be
encapsulated as VLAN packets when they are sent out of network nodes. When the
network node access policy is configured as VXLAN, the DHCP-type vPorts are not
activated on the controller, and the packets are encapsulated as VXLAN packets when
they are sent out of the network nodes. When a network node accesses a DR system, only
the VLAN mode is supported.
If a network node can act as a DHCP server in the network, you must disable the function
of sending DHCP packets to the controller in the vNetwork settings on the controller side.
As a best practice, use the controller as the DHCP server (enable the function of sending
DHCP packets to the controller).
Configure the default VDS and VXLAN pool
This task is required only for interoperating with the controller for the first time.
In the current software version, only one VDS is supported.
To configure the default VDS and VXLAN pool:
1. Navigate to the Automation > Data Center Networks > Common Network Settings >
Virtual Distributed Switch page. Click Settings. In the dialog box that opens, select the
system VDS.
Figure 11 Setting a VDS
2. Navigate to the Automation > Data Center Networks > Resource Pools > VNID Pools >
VXLANs page. Click Add. The page for adding VXLANs opens. Add a VXLAN pool.
Figure 12 Configuring VXLAN pools
NOTE:
Create a VXLAN pool for uniformly managing the L3VNI segments of user vRouters.
Configure the default cloud platform
To configure the default cloud platform:
1. Navigate to the Automation > Data Center Networks > Virtual Networking > OpenStack
page. Click Default Cloud Platform. In the dialog box that opens, select the default cloud
platform.
Figure 13 Setting the default cloud platform
Configure OpenStack bare metal
The campus access devices do not support bare metal access.
Network planning
Network topology
Figure 14 Bare metal network diagram
Remarks:
1. The OpenStack controller and Ironic components are deployed on the same node, which is collectively referred to as the controller node.
2. The server where the OpenStack Ironic node resides is connected to a DR system through
bonding interfaces, and provides services in the inspection phase and provisioning phase.
3. Leaf1 and Leaf2 form a DR system. Leaf3 and Leaf4 form a DR system.
4. Server1, Server2, Server3, and Server4 are bare metal servers.
(The diagram shows the Internet connected to Border1 and Border2, Spine1 and Spine2, two leaf DR systems (Leaf1/Leaf2 and Leaf3/Leaf4, each pair connected by a peer link/IPL), bare metal servers Server1 through Server4 attached to the leaf DR systems, and the related OpenStack components: the controller node (Controller and Ironic) and the compute node.)
Resource plan
Physical line plan
Table 1 Physical line plan
Name: Aggregate line connecting the ironic node to the DR system
Remarks:
Accesses related services through the main NIC in the inspection phase.
Accesses related services through the sub-NIC in the provisioning phase.
The interface on the peer switch is a DR AC interface.
VLAN/VXLAN plan
Table 2 VLAN/VXLAN plan
Name                                Remarks
Working VLAN settings               N/A
Inspection VLAN/VXLAN mappings      N/A
Provisioning VLAN/VXLAN mappings    N/A
Address pool plan
Table 3 Address pool plan
Name                        Remarks
Inspection network          VXLAN network, with VXLAN ID 3000
Provisioning network        VXLAN network, with VXLAN ID 3001
Service internal network    VXLAN network, with VXLAN ID 3002
Deployment workflow
Figure 15 Deployment workflow
(Workflow summary: configure M-LAG, basic underlay network settings, basic security device settings, and basic controller settings (add fabrics, configure VDSs, configure global parameters, configure security service resources, add border device groups, add tenants, and add border gateways); configure basic OpenStack settings; configure the compute nodes and the ironic node bonding interfaces; configure IMC Orchestrator settings (the access network and the service network) and OpenStack resources (the inspection network, the provisioning VXLAN network, and the service VXLAN network); then deploy the bare metal servers through the inspection, provisioning, and running phases.)
Procedure
In this document, the ironic node is deployed on the controller node. You can adjust the ironic node location as needed. For how to install and configure OpenStack ironic, see the documents on the OpenStack official website. After the bare metal servers are installed, vPorts are activated by using the common network overlay workflow.
Configure the compute node
1. Log in to the back end of the compute node, and edit the nova.conf file.
# vim /etc/nova/nova.conf
[DEFAULT]
compute_driver = ironic.IronicDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
ram_allocation_ratio = 1.0
reserved_host_memory_mb = 0
[scheduler]
host_manager = ironic_host_manager
[filter_scheduler]
use_baremetal_filters = true
track_instance_changes = false
[ironic]
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
username = ironic
password = 123456
project_domain_name = Default
user_domain_name = Default
2. Restart the openstack-nova-compute service:
# systemctl restart openstack-nova-compute
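To verify that the compute service restarted correctly, you can check the service status and confirm that the nova-compute service is registered (a quick check, assuming admin credentials have been sourced; not part of the required procedure):
# systemctl status openstack-nova-compute
# openstack compute service list --service nova-compute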
Configure bonding interfaces on the ironic node
1. Assign the inspection network gateway address to the main NIC of the bonding interface.
2. Assign the provisioning network gateway address to the VLAN subinterface on the bonding
interface.
3. Configure the NIC as follows:
# cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
BOOTPROTO=none
NM_CONTROLLED=no
IPADDR=10.0.0.2
PREFIX=24
NOTE: The address must be the same as the IP address used in the ipa-inspection-callback-url parameter in /tftpboot/pxelinux.cfg/default.
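For reference, the corresponding entry in /tftpboot/pxelinux.cfg/default typically looks similar to the following sketch. The label, image file names, and the inspector port (5050) are assumptions based on a standard ironic-inspector PXE setup; keep the file generated for your own deployment and only check that the callback IP matches the bond0 address:
# cat /tftpboot/pxelinux.cfg/default
default introspect

label introspect
kernel ironic-python-agent.kernel
append initrd=ironic-python-agent.initramfs ipa-inspection-callback-url=http://10.0.0.2:5050/v1/continue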
4. Create a VLAN subinterface, and assign it a VLAN ID that is the same as the VXLAN ID of the provisioning network.
# cat ifcfg-bond0.3001
DEVICE=bond0.3001
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
IPADDR=20.0.0.1
PREFIX=24
NOTE: This address must be the same as the value of tftp_server in the [pxe] section of /etc/ironic/ironic.conf (20.0.0.1 in this example).
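After restarting the network service, you can verify the bonding configuration with standard Linux tools (a quick check, not part of the required procedure):
# cat /proc/net/bonding/bond0
# ip addr show bond0
# ip addr show bond0.3001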
Configure IMC Orchestrator settings
Configure the access network
1. Navigate to the Automation > Data Center Networks > Fabrics > BM Server > Access
Network page. Configure the default access VLAN ID as 3000, and click Set.
Figure 16 Configuring the access network
2. Click Create Mapping Table. In the dialog box that opens, configure the following parameters:
Mapping Table Name: Inspect.
VLAN ID: 3000.
VXLAN ID: 3000.
3. Click Apply.
4. Click the link in the Applied to Interfaces column. In the dialog box that opens, select the DR
AC interface on the peer switch connected to the bare metal server, and set the mapping type
to inspection VLAN/VXLAN mapping.
Figure 17 Configuring a mapping table
Configure the service network
On the interface page, add the inspection VLAN/VXLAN mapping and the provisioning VLAN/VXLAN mapping, and apply the mappings to the DR AC interface on the leaf device that corresponds to interface bond0.
Figure 18 Configuring VLAN-VXLAN mappings
Configure a DR system
1. Navigate to the Automation > Data Center Networks > Resource Pools > Distributed
Relay > DR Systems page. Click Add. On the page that opens, configure member devices
and the DR group ID range. When bare metal servers come online on a DRNI network, you must configure the DR group ID range. The range can only be expanded later, so plan it in advance. The DRNI configuration can be deployed to aggregate interfaces on DR member devices only when the VM aggregation mode is 802.3ad (mode 4).
Create an inspection network
1. Navigate to the Automation > Data Center Networks > Tenant [default] Network > Virtual
Network. Click Add. On the page that opens, configure the following parameters:
Name: inspect.
Segment ID: 3000.
Specify the other parameters as needed. In this example, use the default settings.
2. Click Add on the Subnets tab. In the dialog box that opens, configure the following
parameters to add an IPv4 subnet:
IP Version: IPv4.
Name: inspect-sub.
Subnet Address: 10.0.0.0/24.
Gateway IP: 10.0.0.1.
Specify the other parameters as needed. In this example, use the default settings.
3. Click the Advanced Configuration tab. On this tab, disable ARP to Controller, RARP to
Controller, and DHCP to Controller, and enable vSwitch Flooding and Learning.
Figure 19 Configuring a vNetwork
NOTE:
You can also create the inspection network on the cloud to ensure that the segment ID can be
recognized by the cloud.
Configure OpenStack resources
Configure provisioning VXLAN network
1. Navigate to the Project > Network > Networks page. Click Create Network. In the dialog box
that opens, configure the following parameters:
Table 4 Configuring a VXLAN network
Parameter          Parameter value              Remarks
Network Name       Provision                    N/A
VXLAN              3001                         N/A
Other parameters   Use the default settings     N/A
2. On the Project > Network > Networks page, a subnet is created by default as part of the network creation process. On the page for creating the subnet, configure the following parameters, and then click Create.
Table 5 Configuring a subnet
Parameter          Parameter value                          Remarks
Subnet Name        ProvisionSub                             N/A
Network Address    20.0.0.0/24                              The gateway address is 20.0.0.1
IP Version         IPv4                                     Use the default settings
Gateway IP         This parameter is optional               By default, the first available address of the subnet is used as the gateway IP
Other parameters   Do not configure or modify the parameters   Use the default settings
Figure 20 Creating the provisioning network
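The same network and subnet can also be created from the OpenStack CLI. The following commands are a sketch that assumes admin credentials have been sourced and that the VXLAN provider network type is enabled; the values match the plan in Table 4 and Table 5:
# openstack network create Provision --provider-network-type vxlan --provider-segment 3001
# openstack subnet create ProvisionSub --network Provision --subnet-range 20.0.0.0/24 --gateway 20.0.0.1 --ip-version 4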
3. Navigate to the Advanced Configuration tab for the provisioning network on the controller.
On this tab, enable ARP to Controller, RARP to Controller, and DHCP to Controller, and
disable vSwitch Flooding and Learning. Then, in the provisioning phase, the DHCP
requests are sent to the controller, and the controller acts as the DHCP server to manage IP
addresses.
Figure 21 Advanced Configuration
4. Update the values for the provisioning_network and cleaning_network parameters of the
[neutron] module in the /etc/ironic/ironic.conf file. Set the value to the UUID of the
corresponding network.
# vim /etc/ironic/ironic.conf
[neutron]
provisioning_network=d3bb5e8a-8264-44cc-93bc-2a1019546c79
cleaning_network=d3bb5e8a-8264-44cc-93bc-2a1019546c79
[pxe]
tftp_server = 20.0.0.1    # The IP address of the NIC on which the TFTP server listens
# systemctl enable openstack-ironic-api openstack-ironic-conductor
# systemctl start openstack-ironic-api openstack-ironic-conductor
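To obtain the UUID used for the provisioning_network and cleaning_network parameters and to confirm that the ironic services are running, you can use commands similar to the following (assuming admin credentials and the bare metal CLI plug-in are available):
# openstack network show Provision -f value -c id
# systemctl status openstack-ironic-api openstack-ironic-conductor
# openstack baremetal node list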