IMC Orchestrator 6.2 Container Computing
Configuration Guide
The information in this document is subject to change without notice.
© Copyright 2022 Hewlett Packard Enterprise Development LP
Contents
Overview
Configure basic controller settings
  Log in to the controller
  Configure fabrics
  Configure a VDS
  Configure global settings
  Add a border device group
  Add a tenant
Configure interoperability between the CNI plug-in and the controller
  Restrictions and guidelines
  Network planning
    Network topology
    Resource plan
  Deployment workflow
  Procedure
    Configure the controller
    Configure a worker node
    Configure the master node
    Install the CNI plug-in
  Verify the configuration
    Create a pod for service verification
    Verify static IP address configuration
    Verify static IP address pool configuration
    Verify communication between pods at Layer 2
    Verify communication between pods at Layer 3
    Verify the security group feature
    Verify the QoS feature
    Verify the NetworkPolicy feature
    Verify access from a pod to another pod in the same cluster
    Service access methods
    Verify access to the DNS service
    Verify the NodePort service
Configure the K8s Calico network
  Restrictions and guidelines
  Network planning
    Network topology
    Resource plan
  Deployment workflow
  Procedure
    Configure basic settings for the underlay network
    Configure basic Calico environment settings
    Configure BGP settings for the Calico network
    Configure basic controller settings
    Add a VLAN-VXLAN mapping
    Add a vNetwork
    Create a vRouter
  Verify the configuration
    View the deployed configuration
    Verify the NIC status on the controller
    View the BGP peer state on the leaf switch
    Verify service access
O&M and monitoring
Overview
This document describes interoperation between the controller and the CNI plug-in, and between
the controller and the Calico CNI plug-in.
After the CNI plug-in is installed, pods can onboard to the controller and communicate with each other at Layer 2 and Layer 3. Pods support the security group, QoS, and network policy features. A container IP address can be a static IP address, an address from a static IP address pool, or an address from a DHCP address pool on the controller. The ClusterIP, NodePort, and DNS services are supported.
After the Calico CNI plug-in is installed, the plug-in establishes BGP peer relationships with the leaf switches in the data center so that pods can communicate through EVPN at Layer 2 and Layer 3.
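After the Calico deployment completes, the BGP sessions towards the leaf switches can be verified from a node, for example (a minimal sketch, assuming the calicoctl CLI is installed on that node):
# List the node's BGP peers and their session states
calicoctl node status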
Configure basic controller settings
Log in to the controller
After you deploy the controller, the related menus are displayed on IMC PLAT. You can use controller features after logging in to IMC PLAT.
To log in to IMC PLAT, enter http://ucenter_ip_address:30000/central/index.html in the address bar, and then press Enter.
ucenter_ip_address is the virtual IP address for northbound services provided by the Installer cluster where IMC PLAT is deployed.
30000 is the port number.
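For example, assuming a hypothetical northbound virtual IP of 192.168.12.100, you can first verify from your workstation that the portal responds before opening a browser:
# 192.168.12.100 is a placeholder; replace it with your cluster northbound VIP
curl -I http://192.168.12.100:30000/central/index.html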
Figure 1 Logging in to IMC PLAT
Configure fabrics
1. Navigate to the Automation > Data Center Networks > Fabrics > Fabrics page. Click Add
to create a fabric.
Specify a name for the fabric. In this example, the name is fabric1.
Specify an AS number. Make sure the AS number is the same as the BGP AS number of
the devices in the fabric. In this example, the AS number is 100.
Multicast network is disabled by default.
End point group (EPG) controller is disabled by default.
Configure other parameters as needed. In this example, the default settings are used.
Figure 2 Configuring fabrics
2. Click OK.
3. Click the icon in the Actions column for the fabric, and then click the Advanced Settings tab. As a best practice to reduce packet flooding, select Suppress Unknown Unicast, Suppress Unknown Multicast, and Suppress Unknown Broadcast.
4. Configure other parameters as needed. In this example, the default settings are used.
Figure 3 Configuring advanced settings
Configure a VDS
1. Navigate to the Automation > Data Center Networks > Common Network Settings >
Virtual Distributed Switches page. Edit the virtual distributed switch VDS1.
2. Click Add Fabric, select fabric1 in the dialog box that opens, and then click Apply.
Figure 4 Adding a fabric
3. Click the Advanced Settings tab to configure advanced settings for VDS1.
Specify the bridge name as vds1-br.
Specify the name of the VXLAN tunnel interface as vxlan_vds1-br.
Set the aging time for vSwitch learned flow entries to 300 seconds.
Configure other parameters as needed. In this example, the default settings are used.
Figure 5 Configuring advanced settings
Configure global settings
If IPv6 services are running on the network, enable IPv6 globally so that these services can run correctly.
To configure global settings:
1. Navigate to the Automation > Data Center Networks > Fabrics > Parameters page, and
then click the Controller Global Settings tab.
2. Enable IPv6.
3. Select Off for Deploy Security Policy Flow Table to Switching Devices.
4. For VRF names to be automatically generated based on rules, select Rule-Based for VRF
Auto-Naming Mode. The generated VRF name is in the format of tenant name_router
name_Segment ID.
Figure 6 Configuring controller global settings
5. For access from a node through a mode 4 (dynamic link aggregation) bond interface, configure the following settings:
a. Navigate to the Automation > Data Center Networks > Resource Pools > Devices
page. Identify the access device that connects to the CNI component and then click the
icon for that device.
b. Click the OpenFlow tab, and then select Yes for the Sent Aggregation Group Member
Interface to the Controller item.
c. Click Apply.
Figure 7 Enabling sending of aggregation group member interfaces to the controller
Add a border device group
1. Navigate to the Automation > Data Center Networks > Fabrics > Fabrics page. Click the icon for fabric1, and then click the Border Device Groups tab.
Figure 8 Adding a border device group
2. Click Add, and configure the parameters as follows:
Specify the device group name as bdgroup1.
Select False for the remote device group parameter. This parameter is not editable once
configured.
Select Border Gateway as the network position. This parameter is not editable once
configured.
Select DRNI as the HA mode.
3. In the Border Gateway Settings area, configure the following parameters:
Use the default setting for the third-party firewall parameter.
Use the default setting for the firewall deployment mode parameter.
Use the default setting for the connection mode parameter. This parameter is not editable
once configured.
Select the default address pool.
Select the default VLAN.
4. Add the border gateway to the group.
5. Click Apply.
Add a tenant
1. Navigate to the Automation > Data Center Networks > Tenant Management > All Tenants
page. Click Add, and then configure the following parameters:
Specify the tenant name as tenant1.
Specify the VDS name as VDS1.
Figure 9 Adding a tenant
2. Click Apply.
3. Click Details to obtain the UUID of the tenant. For interoperation with a Calico network, you
do not need to obtain the UUID.
Figure 10 Obtaining the UUID
Configure interoperability between the
CNI plug-in and the controller
Restrictions and guidelines
The controller can interoperate with multiple Kubernetes container clouds. Follow these restrictions and guidelines:
Nodes in different Kubernetes container clouds cannot use the same hostname.
Multiple Kubernetes container clouds cannot share the same vNetwork if static IP addresses are used.
Before you install the DNS plug-in, change the cluster IP that corresponds to kube-dns in the coredns.yaml file to the cluster IP shipped with K8s, as follows:
a. Display the cluster IP address for kube-dns.
[root@k8s-master140 yml]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
calico-etcd            ClusterIP   10.96.232.136    <none>        6666/TCP                 32d
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   2h
kubernetes-dashboard   NodePort    10.110.95.160    <none>        443:30437/TCP            33d
sdnc-net-master        ClusterIP   10.105.188.157   <none>        9797/TCP                 5d
The modified content in coredns.yaml is as follows:
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
b. Delete the Service (svc) of the kube-dns component. If you do not delete it, DNS access flapping will occur.
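These two steps could be performed from the CLI as in the following minimal sketch, assuming coredns.yaml is in the current directory (the file location depends on your DNS plug-in package):
# Display the cluster IP that K8s assigned to kube-dns
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'
# Set the clusterIP field in coredns.yaml to that value (10.96.0.10 in this example)
sed -i 's/^\(\s*clusterIP:\s*\).*/\110.96.0.10/' coredns.yaml
# Delete the existing kube-dns Service to prevent DNS access flapping
kubectl -n kube-system delete svc kube-dns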
Network planning
Network topology
Figure 11 Network diagram (Spine 1 and Spine 2 at the top, Border 1/Border 2 and Leaf 1/Leaf 2 as IPL-connected pairs, the IMC Orchestrator components, and the K8s master, Node 1, and Node 2 attached to the leaf switches over the management and service networks)
The K8s cluster contains one master node and two worker nodes, without any bare metal servers deployed. Plan the network as follows:
K8s master node—Configure one bond interface for the management network.
K8s worker node 1—Configure two bond interfaces.
One bond interface is used for the management network. A management address is required.
The other bond interface is used for the service network. No IP address is required.
K8s worker node 2—Configure two bond interfaces.
One bond interface is used for the management network. A management address is required.
The other bond interface is used for the service network. No IP address is required.
In the test environment, three nodes are available. One node is the master. It requires only a management address, and its management interface onboards to the controller. The other two nodes are each configured with a management address and a service address. Their management addresses onboard to the controller.
Table 1 IP and interface description

Border 1
Type: EVPN border device
Management IP: 192.168.11.8
Service IP and interfaces:
Loopback0: 10.1.1.8/32
HGE4/0/1 (connects to HGE4/0/1 on Border 2)
HGE4/0/2 (connects to HGE4/0/2 on Border 2)
XGE6/0/48 (connects to XGE6/0/48 on Border 2)
HGE4/0/3 (connects to HGE1/0/3 on Spine 1)
HGE4/0/4 (connects to HGE1/0/3 on Spine 2)

Border 2
Type: EVPN border device
Management IP: 192.168.11.9
Service IP and interfaces:
Loopback0: 10.1.1.9/32
HGE4/0/1 (connects to HGE4/0/1 on Border 1)
HGE4/0/2 (connects to HGE4/0/2 on Border 1)
XGE6/0/48 (connects to XGE6/0/48 on Border 1)
HGE4/0/3 (connects to HGE1/0/4 on Spine 1)
HGE4/0/4 (connects to HGE1/0/4 on Spine 2)

Spine 1
Type: Underlay physical device
Management IP: 192.168.11.2
Service IP and interfaces:
Loopback0: 10.1.1.2/32
HGE1/0/3 (connects to HGE4/0/3 on Border 1)
HGE1/0/4 (connects to HGE4/0/3 on Border 2)
HGE1/0/5 (connects to HGE1/0/25 on Leaf 1)
HGE1/0/6 (connects to HGE1/0/25 on Leaf 2)

Spine 2
Type: Underlay physical device
Management IP: 192.168.11.3
Service IP and interfaces:
Loopback0: 10.1.1.3/32
HGE1/0/3 (connects to HGE4/0/4 on Border 1)
HGE1/0/4 (connects to HGE4/0/4 on Border 2)
HGE1/0/5 (connects to HGE1/0/27 on Leaf 1)
HGE1/0/6 (connects to HGE1/0/27 on Leaf 2)

Leaf 1
Type: EVPN access device
Management IP: 192.168.11.4
Service IP and interfaces:
Loopback0: 10.1.1.4/32
XGE1/0/9 (connects to XGE1/0/9 on Leaf 2)
XGE1/0/10 (connects to XGE1/0/10 on Leaf 2)
HGE1/0/30 (connects to HGE1/0/30 on Leaf 2)
HGE1/0/25 (connects to HGE1/0/5 on Spine 1)
HGE1/0/27 (connects to HGE1/0/5 on Spine 2)
XGE1/0/1 (BAGG1, connects to the master's enp9s0f0)
XGE1/0/2 (BAGG2, connects to Node 1's enp9s0f0)
XGE1/0/3 (BAGG3, connects to Node 1's enp9s0f2)
XGE1/0/4 (BAGG4, connects to Node 2's enp9s0f0)
XGE1/0/5 (BAGG5, connects to Node 2's enp9s0f2)

Leaf 2
Type: EVPN access device
Management IP: 192.168.11.5
Service IP and interfaces:
Loopback0: 10.1.1.5/32
XGE1/0/9 (connects to XGE1/0/9 on Leaf 1)
XGE1/0/10 (connects to XGE1/0/10 on Leaf 1)
HGE1/0/30 (connects to HGE1/0/30 on Leaf 1)
HGE1/0/25 (connects to HGE1/0/6 on Spine 1)
HGE1/0/27 (connects to HGE1/0/6 on Spine 2)
XGE1/0/1 (BAGG1, connects to the master's enp9s0f1)
XGE1/0/2 (BAGG2, connects to Node 1's enp9s0f1)
XGE1/0/3 (BAGG3, connects to Node 1's enp9s0f3)
XGE1/0/4 (BAGG4, connects to Node 2's enp9s0f1)
XGE1/0/5 (BAGG5, connects to Node 2's enp9s0f3)

Master
Type: K8s master node
Management IP: 11.29.2.2
Interfaces: bond0 (enp9s0f0 and enp9s0f1, mode 4, management)

Node 1
Type: K8s worker node
Management IP: 11.29.2.3
Interfaces: bond0 (enp9s0f0 and enp9s0f1, mode 4, management); bond1 (enp9s0f2 and enp9s0f3, mode 4, service)

Node 2
Type: K8s worker node
Management IP: 11.29.2.4
Interfaces: bond0 (enp9s0f0 and enp9s0f1, mode 4, management); bond1 (enp9s0f2 and enp9s0f3, mode 4, service)
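The mode 4 (802.3ad) bond interfaces listed in Table 1 could be created on a node as in the following minimal sketch. It assumes a RHEL/CentOS-style node managed by NetworkManager and uses Node 1's management address as an example; adjust interface names and addresses to your environment:
# Create bond0 (management) in 802.3ad (mode 4) and add its two member NICs
nmcli connection add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad,miimon=100"
nmcli connection add type ethernet ifname enp9s0f0 con-name bond0-port0 master bond0
nmcli connection add type ethernet ifname enp9s0f1 con-name bond0-port1 master bond0
# Assign the management address (Node 1 in this example) and bring the bond up
nmcli connection modify bond0 ipv4.method manual ipv4.addresses 11.29.2.3/24 ipv4.gateway 11.29.2.1
nmcli connection up bond0
# Create bond1 (service) the same way, without an IP address
nmcli connection add type bond ifname bond1 con-name bond1 bond.options "mode=802.3ad,miimon=100"
nmcli connection add type ethernet ifname enp9s0f2 con-name bond1-port0 master bond1
nmcli connection add type ethernet ifname enp9s0f3 con-name bond1-port1 master bond1
nmcli connection modify bond1 ipv4.method disabled ipv6.method ignore
nmcli connection up bond1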
Resource plan
Table 2 Resource plan

Fabric: N/A.
VDS: The VXLAN ID range must contain the VXLAN IDs of all subnets in the VDS. A VXLAN ID is unique within a LAN. You cannot configure the same VXLAN ID for different VDSs.
VLAN-VXLAN mapping: The management interface onboards through static mapping. Make sure VLAN 2114 is not in the VLAN range specified for the plug-in.
Deployment workflow
Figure 12 Deployment workflow
Procedure
Configure the controller
Configure basic settings for the underlay network
Configure and incorporate switching devices in the network. For more information, see IMC
Orchestrator 6.2 Solution Underlay Network Configuration Guide.
Configure basic controller settings
See “Configure basic controller settings.”
Configure a border gateway
1. Navigate to the Automation > Data Center Networks > Public Network > Border Gateways page. Click Add to create a border gateway named gw1 of the composite type.
Figure 13 Creating a border gateway
2. Click Add Gateway Member, and then configure the following parameters:
Specify the name as gw1member.
Select fabric fabric1.
Select device group bdgroup1.
Select priority 1.
3. Click Apply.
Figure 14 Adding a border gateway member
4. Click Apply.
Bind a tenant to a border gateway
1. Navigate to the Automation > Data Center Networks > Tenant Management > All Tenants
page. Click the icon in the Actions column for tenant1.
2. Click Add in the Allocate Gateway Resource area, and then select gw1 in the dialog box
that opens. Click Apply.
Figure 15 Selecting a border gateway for a tenant
Add a vNetwork
Configure the following vNetworks and subnets as required:
network2901 (segment ID 2113): Used for pod services.
IPv4 subnet:
DHCP: Enable DHCP.
Name: subnetv4-2901.
Subnet: 11.29.1.0/24.
Gateway IP: 11.29.1.1.
DHCP address pool: 11.29.1.2 to 11.29.1.100. IP addresses not in the DHCP address pool can be used as static IP addresses and as addresses in static IP address pools.
IPv6 subnet:
DHCP: Enable DHCP.
Name: subnetv6-2901.
Subnet: 2001:11:29:1::/24.
Gateway IP: 2001:11:29:1::1.
IPv6 mode: Select as required. The default setting is used in this example.
DHCP address pool: 2001:11:29:1::2 to 2001:11:29:1::100. IP addresses not in the DHCP address pool can be used as static IP addresses and as addresses in static IP address pools.

network2902 (segment ID 2114): Management network, for the management interfaces on the master and worker nodes to onboard to the controller.
IPv4 subnet:
DHCP: Enable DHCP.
Name: subnetv4-2902.
Subnet: 11.29.2.0/24.
Gateway IP: 11.29.2.1.

network2903 (segment ID 2115): cviOVSNode network. If the node_port_net_id parameter is specified in the plug-in installation file, you must add this network and obtain its UUID.
IPv4 subnet:
DHCP: Enable DHCP.
Name: subnetv4-2903.
Subnet: 11.29.3.0/24.
Gateway IP: 11.29.3.1.

network2904 (segment ID 2116): Default network for the default address pool. If the default_network_id parameter is specified in the plug-in installation file, you must add this network and obtain its UUID.
IPv4 subnet:
DHCP: Enable DHCP.
Name: subnetv4-2904.
Subnet: 11.29.4.0/24.
Gateway IP: 11.29.4.1.
In the following example, a vNetwork named network2901 with a subnet named subnetv4-2901 is created.
1. Navigate to the Automation > Data Center Networks > Tenant Management > All Tenants
page. Click the name of tenant1.
2. Click Add. Specify the vNetwork name as network2901, specify the type as VXLAN, and specify the segment ID as 2113.
Figure 16 Adding a vNetwork
3. On the Subnets tab, click Add, and then perform the following tasks:
Specify the IP version as IPv4.
Enable DHCP.
Specify the subnet name as subnetv4-2901.
Specify the subnet as 11.29.1.0/24 and the gateway IP as 11.29.1.1.
Add the address range 11.29.1.2 to 11.29.1.100 as the DHCP address pool. IP addresses not in the DHCP address pool can be used as static IP addresses and as addresses in static IP address pools.
Figure 17 Adding an IPv4 subnet
4. Click Apply.
5. Click the Advanced Configuration tab. You can configure parameters such as packet
suppression. In this example, the default settings are used.
6. Click Apply.
7. Obtain the UUIDs of the networks required by the plug-in installation file, such as the cviOVSNode network and the default network.
Figure 18 Obtaining the UUID of cviOVSNode
Figure 19 Obtaining the UUID of the default network
Configure a VLAN-VXLAN mapping
1. Navigate to the Automation > Data Center Networks > Resource Pools > VNID Pools >
VLAN-VXLAN Mappings page. On the Mapping Rules tab, perform the following tasks:
Specify the name as map2901.
Click Add Mapping, configure the following parameters, and then click Apply.
Specify both the start VLAN and start VXLAN as 2114. The start VXLAN ID is the
segment ID of the VXLAN.
Specify the mapping range length as 1. The mapping range specifies the VLAN ID
range that can be contained in the packets sent by the VMs or physical devices to be
onboarded to the controller.
Specify the access mode as VLAN.
If you specify the access mode as VLAN, the Ethernet frames received by and sent to
the local site must contain VLAN tags.
If you specify the access mode as Ethernet, the Ethernet frames received by and sent
to the local site are not required to contain VLAN tags.
2. Click Apply.
Onboard a management port
1. Navigate to the Automation > Data Center Networks > Tenant tenant1 Network >
vPorts > vPorts page. Click Add to create three vPorts.
2. Specify the name as master, specify the virtual network/external network as network2902, and specify the IP as 11.29.2.2. Repeat this step to create vPorts for Node 1 (IP 11.29.2.3) and Node 2 (IP 11.29.2.4).
Figure 20 Creating vPorts
After the VMs or physical devices connected to interfaces that are bound to VLAN-VXLAN mappings send ARP packets, they automatically onboard to the controller, as in the sketch below.
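For example, an ARP request can be triggered manually from a node. This is a minimal sketch; it assumes the arping tool is available on the node and uses the management bond and gateway address from the plan above:
# Send ARP requests from bond0 toward the subnet gateway so that the vPort onboards
arping -c 3 -I bond0 11.29.2.1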
3. Navigate to the Automation > Data Center Networks > Tenant tenant1 Network > Virtual
Port > Virtual Port page to view the onboarded vPorts.
Figure 21 Onboarded management ports
Add a vRouter
1. Navigate to the Automation > Data Center Networks > Tenant tenant1 Network > Virtual
Router > Virtual Router page. Click Add.
Specify the name as router2901, and segment ID as 11113. The value for the segment ID
parameter must be within the VXLAN ID range for the VDS.
On the Subnets tab, click Add to add a subnet for the vRouter. Select the configured
subnet.
Configure other parameters as needed. In this example, the default settings are used.
Figure 22 Adding a vRouter
2. Click Apply.
3. On the virtual router list, click the gateway link, and then bind a gateway resource to the
virtual router. If you have set the gateway resource as the default gateway, you do not need
to bind a gateway resource to the virtual router.
Add a security policy
1. Navigate to the Automation > Data Center Networks > Tenant tenant1 Network > Virtual
Port > Security Policy page. Click Add.
Specify the name as securityrule1.
Enable IP-MAC anti-spoofing and set the empty rule action to permit.
Figure 23 Adding a security policy
2. Click the Details icon in the Actions column for securityrule1 to obtain the UUID of the
security policy.
Figure 24 Obtaining the UUID of a security policy