HPE IMC Orchestrator 6.3
IMC PLAT and Components Deployment Guide
The information in this document is subject to change without notice.
© Copyright 2023 Hewlett Packard Enterprise Development LP
Contents
Overview
  Components
  Deployment workflow
Hardware resource requirements
Plan network configuration
  Standalone deployment
    About standalone deployment
    IP address plan
  Cluster separated deployment
    About cluster separated deployment
    IP address plan
  Cluster converged deployment
    About cluster converged deployment
    IP address plan
Deploy the components
  Deploy IMC PLAT
    Partition the drive
    Deploy the IMC PLAT application packages
  Deploy IMC Orchestrator and vBGP
  Deploy IMC Orchestrator Analyzer and collectors
Operations monitoring
Appendix A Obtain documentation
Overview
Components
This document describes the deployment procedures for the following IMC Orchestrator components:
• IMC PLAT—Platform component on which you deploy controllers and analyzers.
• IMC Orchestrator—Controller component that runs SDN applications. It controls resources on the network and is the network management center.
• IMC Orchestrator Analyzer—Analyzer component that provides intelligent analysis for IMC Orchestrator solutions.
• vBGP—Virtual BGP component that provides conversion between host overlay flow tables and network overlay EVPN routes. It is typically used in hybrid overlay scenarios.
• Collector—Collects and sends traffic to IMC Orchestrator Analyzer.
Deployment workflow
Figure 1 Deployment workflow
(Figure: the workflow runs from Start, through Plan network configuration, Deploy IMC PLAT, Deploy IMC Orchestrator and vBGP, and Deploy IMC Orchestrator Analyzer and collectors, to End. Steps are marked as required or optional main processes and sub-processes.)
Hardware resource requirements
For the server hardware resources required to install the components, see HPE Solution Hardware
Configuration Guide.
Plan network configuration
The following information uses examples to describe how to plan network configuration for IMC
PLAT and all its components. You can select components for your deployment scenario as required.
The vBGP component supports both a converged management and service network scheme and a separated management and service network scheme. This document uses the converged management and service network scheme for the controller and the integrated southbound and northbound network scheme (no separate southbound network) for the analyzer as examples.
This section plans only for the management network configuration. For service network
configuration, see the network configuration plan for the specific deployment scenario.
The IP address plan in this section is for illustration only.
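Before deployment, it can help to sanity-check the plan against a few basic rules (node IPs on one segment, gateway inside that segment, VIPs configured as /32 host addresses). The following Python sketch is an informal aid, not part of the official procedure; the addresses are the illustration values used in this section.

```python
# Sanity-check an example management-network plan with the Python standard
# library. The addresses below are the illustration values used in this
# section, not mandatory values.
import ipaddress

node_ips = ["192.168.10.102/24", "192.168.10.103/24", "192.168.10.104/24"]
gateway = "192.168.10.1"
vips = ["192.168.10.101/32", "192.168.10.100/32"]

# All node IPs must be on the same network segment.
networks = {ipaddress.ip_interface(ip).network for ip in node_ips}
assert len(networks) == 1, "node IPs are not on the same segment"

# The gateway must belong to that segment.
segment = networks.pop()
assert ipaddress.ip_address(gateway) in segment, "gateway is outside the segment"

# Cluster internal and northbound service VIPs are host addresses (/32).
for vip in vips:
    assert ipaddress.ip_interface(vip).network.prefixlen == 32, f"{vip} is not a /32"

print("management network plan looks consistent")
```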
Standalone deployment
About standalone deployment
The vBGP component is not supported in the standalone deployment scenario.
In a standalone deployment, you deploy IMC Orchestrator and IMC Orchestrator Analyzer on
separate IMC PLAT nodes, and deploy the collector on a server other than the IMC PLAT nodes.
Connect all the IMC PLAT nodes to the management switch and configure the management switch
to provide gateway services.
IP address plan
IMC PLAT 1 (on which IMC Orchestrator is deployed)
  Master node IP: 192.168.10.102/24
  Cluster internal VIP: 192.168.10.101/32
  Northbound service VIP: 192.168.10.100/32
  Remarks: These IP addresses must be on the same network segment. Specify gateway 192.168.10.1, which is placed on the management switch.
IMC PLAT 2 (on which IMC Orchestrator Analyzer is deployed)
  Master node IP: 192.168.10.202/24
  Cluster internal VIP: 192.168.10.201/32
  Northbound service VIP: 192.168.10.200/32
  Remarks: These IP addresses must be on the same network segment. Specify gateway 192.168.10.1, which is placed on the management switch.
Collector
  Management IP: 192.168.10.50/24
  Remarks: Specify gateway 192.168.10.1, which is placed on the management switch.
  Southbound collection IP: 11.1.1.0/24
  Remarks: You need to configure two IP addresses. Use one to receive mirrored network device packets. Use the other as the SeerCollector float IP for it to be discovered by devices.
IMC Orchestrator
  Management network: subnet 192.168.12.0/24, network address pool 192.168.12.101 to 192.168.12.132
  Remarks: Configure a MACVLAN-type management network. Specify gateway 192.168.12.1, which is placed on the management switch.
IMC Orchestrator Analyzer
  Management network: subnet 192.168.12.0/24, network address pool 192.168.12.141 to 192.168.12.172
  Remarks: Configure a MACVLAN-type management network. Specify gateway 192.168.12.1, which is placed on the management switch.
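The collector's two southbound collection addresses can be assigned to the collection NIC with standard iproute2 commands. The sketch below is purely illustrative: the interface name and the two addresses in the 11.1.1.0/24 subnet are placeholders, and the supported procedure is described in HPE IMC Orchestrator Analyzer Deployment Guide.

```python
# Hypothetical sketch: assign the two southbound collection addresses to the
# collector's collection NIC with iproute2 (requires root privileges).
# The interface name and addresses are placeholders.
import subprocess

COLLECTION_NIC = "ens224"        # placeholder interface name
ADDRESSES = [
    "11.1.1.10/24",              # receives mirrored network device packets
    "11.1.1.11/24",              # SeerCollector float IP discovered by devices
]

for addr in ADDRESSES:
    subprocess.run(["ip", "addr", "add", addr, "dev", COLLECTION_NIC], check=True)

subprocess.run(["ip", "link", "set", COLLECTION_NIC, "up"], check=True)
```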
Cluster separated deployment
About cluster separated deployment
In a cluster separated deployment, you deploy IMC Orchestrator and IMC Orchestrator Analyzer (analyzer) in separate IMC PLAT clusters, and deploy the collector on a separate physical server.
This document uses the following cluster separated deployment as an example:
• IMC PLAT cluster 1—Contains three master nodes and one worker node for IMC Orchestrator deployment. vBGP is deployed on two of the master nodes.
• IMC PLAT cluster 2—Contains three master nodes for IMC Orchestrator Analyzer deployment.
• Separate server—Used for collector deployment.
Connect all these servers to the management switch and configure the management switch to provide gateway services. On each of the two servers where vBGP is deployed, reserve a network interface for the vBGP management network.
The IMC PLAT cluster for IMC Orchestrator can span one subnet or multiple subnets. For high availability, you can deploy the master nodes in 2+1+1 mode. For information about address planning and available deployment modes, see HPE IMC Orchestrator Installation Guide. In this deployment example, the nodes are deployed on the same subnet.
IP address plan
IMC PLAT cluster 1 (on which IMC Orchestrator is deployed)
  Master node IPs: 192.168.10.102/24, 192.168.10.103/24, 192.168.10.104/24
  Cluster internal VIP: 192.168.10.101/32
  Northbound service VIP: 192.168.10.100/32
  Remarks: In this example, all the IP addresses are on the same subnet. Specify gateway 192.168.10.1, which is placed on the management switch.
IMC PLAT cluster 2 (on which IMC Orchestrator Analyzer is deployed)
  Master node IPs: 192.168.10.202/24, 192.168.10.203/24, 192.168.10.204/24
  Cluster internal VIP: 192.168.10.201/32
  Northbound service VIP: 192.168.10.200/32
  Remarks: These IP addresses must be on the same network segment. Specify gateway 192.168.10.1, which is placed on the management switch.
Collector
  Management IP: 192.168.10.50/24
  Remarks: Specify gateway 192.168.10.1, which is placed on the management switch.
  Southbound collection IP: 11.1.1.0/24
  Remarks: You need to configure two IP addresses. Use one to receive mirrored network device packets. Use the other as the SeerCollector float IP for it to be discovered by devices.
IMC Orchestrator
  Management network: subnet 192.168.12.0/24, network address pool 192.168.12.101 to 192.168.12.132
  Remarks: Configure a MACVLAN-type management network. Specify gateway 192.168.12.1, which is placed on the management switch.
vBGP
  Management network: subnet 192.168.13.0/24, network address pool 192.168.13.101 to 192.168.13.132
  Remarks: Configure OVSDPDK networks. Specify gateway 192.168.13.1, which is placed on the management switch.
IMC Orchestrator Analyzer
  Management network: subnet 192.168.12.0/24, network address pool 192.168.12.141 to 192.168.12.172
  Remarks: Configure a MACVLAN-type management network. Specify gateway 192.168.12.1, which is placed on the management switch.
Cluster converged deployment
About cluster converged deployment
In a cluster converged deployment, you deploy IMC Orchestrator and IMC Orchestrator Analyzer on one IMC PLAT cluster.
This document uses the following cluster converged deployment as an example:
• Deploy an IMC PLAT cluster that has three master nodes and four worker nodes.
• Deploy IMC Orchestrator on the three master nodes.
• Deploy IMC Orchestrator Analyzer on three of the worker nodes.
• Deploy vBGP on two of the master nodes.
• Deploy collectors on servers outside of the cluster. In this example, one collector is deployed.
Connect all these servers to the management switch and configure the management switch to provide gateway services. On each of the two servers where vBGP is deployed, reserve a network interface for the vBGP management network.
IP address plan
IMC PLAT
  Master node 1 IP: 192.168.10.102/24
  Master node 2 IP: 192.168.10.103/24
  Master node 3 IP: 192.168.10.104/24
  Worker node 4 IP (IMC Orchestrator Analyzer): 192.168.10.202/24
  Worker node 5 IP (IMC Orchestrator Analyzer): 192.168.10.203/24
  Worker node 6 IP (IMC Orchestrator Analyzer): 192.168.10.204/24
  Cluster internal VIP: 192.168.10.101/32
  Northbound service VIP: 192.168.10.100/32
  Remarks: These IP addresses must be on the same network segment. Specify gateway 192.168.10.1 for the component, which is placed on the management switch.
Collector
  Management IP: 192.168.10.50/24
  Remarks: Specify gateway 192.168.10.1, which is placed on the management switch.
  Southbound collection IP: 11.1.1.0/24
  Remarks: You need to configure two IP addresses. Use one to receive mirrored network device packets. Use the other as the SeerCollector float IP for it to be discovered by devices.
IMC Orchestrator
  Management network: subnet 192.168.12.0/24, network address pool 192.168.12.101 to 192.168.12.132
  Remarks: Configure a MACVLAN-type management network. Specify gateway 192.168.12.1 for the component, which is placed on the management switch.
vBGP
  Management network: subnet 192.168.13.0/24, network address pool 192.168.13.101 to 192.168.13.132
  Remarks: Configure an OVSDPDK-type management network. Specify gateway 192.168.13.1, which is placed on the management switch.
  Node management network: network address pool 192.168.10.110 to 192.168.10.120
  Remarks: N/A
IMC Orchestrator Analyzer
  Management network: subnet 192.168.12.0/24, network address pool 192.168.12.141 to 192.168.12.172
  Remarks: Configure a MACVLAN-type management network. Specify gateway 192.168.12.1, which is placed on the management switch.
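Because the converged example mixes several address pools with the cluster node addresses, it is worth confirming that no pool overlaps another pool or a node IP before deployment. The sketch below is an informal check using the example values from the table above; it is not part of the official procedure.

```python
# Sketch: verify that the example address pools in the converged plan do not
# overlap each other or the cluster node addresses. Values come from the
# table above and are for illustration only.
import ipaddress

def pool(first, last):
    """Expand an inclusive address range into a set of addresses."""
    start = int(ipaddress.ip_address(first))
    end = int(ipaddress.ip_address(last))
    return {ipaddress.ip_address(i) for i in range(start, end + 1)}

pools = {
    "IMC Orchestrator management": pool("192.168.12.101", "192.168.12.132"),
    "vBGP management": pool("192.168.13.101", "192.168.13.132"),
    "vBGP node management": pool("192.168.10.110", "192.168.10.120"),
    "Analyzer management": pool("192.168.12.141", "192.168.12.172"),
}
node_ips = {ipaddress.ip_address(f"192.168.10.{h}")
            for h in (102, 103, 104, 202, 203, 204, 101, 100, 50)}

names = list(pools)
for i, a in enumerate(names):
    assert not pools[a] & node_ips, f"{a} pool overlaps a node IP"
    for b in names[i + 1:]:
        assert not pools[a] & pools[b], f"{a} and {b} pools overlap"

print("address pools do not overlap")
```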
Deploy the components
Deploy IMC PLAT
For the IMC PLAT deployment procedure, see HPE IMC PLAT Deployment Guide. For information
about how to obtain this document, see "Appendix A Obtain documentation."
To run the controller on IMC PLAT, partition the system drive and deploy all application installation packages required by the controller.
Partition the drive
Partition the drive on the node where a controller resides
Use one of the following tables, based on the total drive size, to partition the drive. Do not use automatic partitioning for the drive.
Table 1 Drive partition settings (2400 GB)
RAID 10, with a total drive size of 2400 GB:
  /dev/sda1, mounted at /boot/efi, minimum capacity 200 MiB. EFI system partition, required only in UEFI mode.
  /dev/sda2, mounted at /boot, minimum capacity 1024 MiB.
  /dev/sda3, mounted at /, minimum capacity 900 GiB. Supports capacity expansion when the drive size is sufficient.
  /dev/sda4, mounted at /var/lib/docker, minimum capacity 460 GiB. Supports capacity expansion when the drive size is sufficient.
  /dev/sda6, swap, minimum capacity 1024 MiB. Swap partition.
  /dev/sda7, mounted at /var/lib/ssdata, minimum capacity 520 GiB. Supports capacity expansion when the drive size is sufficient.
  /dev/sda8, no mounting point, minimum capacity 300 GiB. Reserved for GlusterFS. You do not need to configure this partition during operating system installation.
RAID 1, with a total drive size of 50 GB:
  /dev/sdb, mounted at /var/lib/etcd, minimum capacity 50 GiB. For a controller in a version earlier than E6203 deployed on IMC PLAT in a version earlier than E0706 (including E06xx), make sure the etcd partition has exclusive use of a physical drive. For a controller in E6203 or later deployed on IMC PLAT in E0706 or later, the etcd partition can share a physical drive with other partitions. As a best practice for optimal performance, use a separate drive for the etcd partition.
Table 2 Drive partition settings (1920 GB)
RAID 10, with a total drive size of 1920 GB:
  /dev/sda1, mounted at /boot/efi, minimum capacity 200 MiB. EFI system partition, required only in UEFI mode.
  /dev/sda2, mounted at /boot, minimum capacity 1024 MiB.
  /dev/sda3, mounted at /, minimum capacity 650 GiB. Supports capacity expansion when the drive size is sufficient.
  /dev/sda4, mounted at /var/lib/docker, minimum capacity 410 GiB. Supports capacity expansion when the drive size is sufficient.
  /dev/sda6, swap, minimum capacity 1024 MiB. Swap partition.
  /dev/sda7, mounted at /var/lib/ssdata, minimum capacity 450 GiB. Supports capacity expansion when the drive size is sufficient.
  /dev/sda8, no mounting point, minimum capacity 220 GiB. Reserved for GlusterFS. You do not need to configure this partition during operating system installation.
RAID 1, with a total drive size of 50 GB:
  /dev/sdb, mounted at /var/lib/etcd, minimum capacity 50 GiB. For a controller in a version earlier than E6203 deployed on IMC PLAT in a version earlier than E0706 (including E06xx), make sure the etcd partition has exclusive use of a physical drive. For a controller in E6203 or later deployed on IMC PLAT in E0706 or later, the etcd partition can share a physical drive with other partitions. As a best practice for optimal performance, use a separate drive for the etcd partition.
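As an informal aid (not part of the official procedure), the sketch below checks a partitioned controller node against the minimum capacities in Table 1. The thresholds are the Table 1 values and the script treats 1 GiB as 1024^3 bytes; adjust the thresholds for the 1920 GB layout in Table 2 as needed.

```python
# Sketch: check that the mount points planned for a controller node exist and
# meet the minimum capacities from Table 1 (2400 GB layout).
import shutil

GIB = 1024 ** 3
MINIMUMS_GIB = {              # mount point -> minimum capacity in GiB (Table 1)
    "/": 900,
    "/var/lib/docker": 460,
    "/var/lib/ssdata": 520,
    "/var/lib/etcd": 50,
}

for mount, minimum in MINIMUMS_GIB.items():
    total = shutil.disk_usage(mount).total
    status = "OK" if total >= minimum * GIB else "TOO SMALL"
    print(f"{mount}: {total / GIB:.0f} GiB (minimum {minimum} GiB) {status}")
```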
Partition the drives on the node where an analyzer resides
System drive
Table 3 System drive partition settings
2 × 1.92 TB, RAID 1:
  /dev/sda1, mounted at /boot/efi, minimum capacity 200 MB. EFI system partition, required only in UEFI mode.
  /dev/sda2, mounted at /boot, minimum capacity 1024 MB.
  /dev/sda3, mounted at /, minimum capacity 400 GB. Supports capacity expansion when the drive size is sufficient. As a best practice, do not save service data in the root directory.
  /dev/sda4, mounted at /var/lib/docker, minimum capacity 400 GB. Supports capacity expansion when the drive size is sufficient.
  /dev/sda6, swap, minimum capacity 4 GB. Swap partition.
  /dev/sda7, mounted at /var/lib/ssdata, minimum capacity 450 GB. Supports capacity expansion when the drive size is sufficient.
  /dev/sda8, no mounting point, minimum capacity 500 GB. Reserved for GlusterFS. You do not need to configure this partition during operating system installation.
2 × 50 GB, RAID 1:
  /dev/sdb, mounted at /var/lib/etcd, minimum capacity 50 GB. Make sure etcd has exclusive use of a physical disk.
Data drives
Data drives are mainly used to store IMC Orchestrator Analyzer's service data and Kafka data.
The required quantity and capacity of data drives depend on the network scale and amount of
service data.
Table 4 Data drive partition configuration (option 1)
3 × 4 TB, RAID 5:
  /dev/sdc1, mounted at /sa_data, minimum capacity 400 GB, ext4 file system.
  /dev/sdc2, mounted at /sa_data/mpp_data, minimum capacity 4800 GB, ext4 file system.
  /dev/sdc3, mounted at /sa_data/kafka_data, minimum capacity 2400 GB, ext4 file system.
Table 5 Data drive partition configuration (option 2)
5 × 4 TB, RAID 5:
  /dev/sdc1, mounted at /sa_data, minimum capacity 400 GB, ext4 file system.
  /dev/sdc2, mounted at /sa_data/mpp_data, minimum capacity 9600 GB, ext4 file system.
  /dev/sdc3, mounted at /sa_data/kafka_data, minimum capacity 4800 GB, ext4 file system.
Table 6 Data drive partition configuration (option 3)
7 × 4 TB, RAID 5:
  /dev/sdc1, mounted at /sa_data, minimum capacity 400 GB, ext4 file system.
  /dev/sdc2, mounted at /sa_data/mpp_data, minimum capacity 14400 GB, ext4 file system.
  /dev/sdc3, mounted at /sa_data/kafka_data, minimum capacity 7200 GB, ext4 file system.
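After the data drives are partitioned and mounted, a quick check that each planned mount point exists and uses ext4 can catch mistakes before the analyzer is deployed. The following sketch is illustrative only; it reads the standard /proc/mounts file on the analyzer node.

```python
# Sketch: confirm that the analyzer data-drive partitions are mounted at the
# expected paths with an ext4 file system, as planned in Tables 4 to 6.
EXPECTED = {"/sa_data", "/sa_data/mpp_data", "/sa_data/kafka_data"}

mounted = {}
with open("/proc/mounts") as f:
    for line in f:
        device, mount_point, fs_type = line.split()[:3]
        mounted[mount_point] = fs_type

for mount_point in sorted(EXPECTED):
    fs_type = mounted.get(mount_point)
    if fs_type is None:
        print(f"{mount_point}: not mounted")
    elif fs_type != "ext4":
        print(f"{mount_point}: mounted with {fs_type}, expected ext4")
    else:
        print(f"{mount_point}: OK (ext4)")
```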
Deploy the IMC PLAT application packages
IMC PLAT can be installed on x86 servers. Select the installation packages specific to the server type as described in Table 7 and upload the selected packages. For the installation procedures of the packages, see HPE IMC PLAT Deployment Guide.
The common_PLAT_GlusterFS_2.0, general_PLAT_portal_2.0, general_PLAT_kernel_2.0, and
general_PLAT_oneclickcheck_2.0 installation packages are required and must be deployed
during the IMC PLAT deployment process. For the deployment procedure, see "Deploying the
applications" in HPE IMC PLAT Deployment Guide.
The general_PLAT_kernel-base_2.0, general_PLAT_Dashboard_2.0, and
general_PLAT_widget_2.0 installation packages are required. They will be installed automatically
during the controller deployment process. You only need to upload the packages.
Table 7 Installation packages required by the controller
x86: common_PLAT_GlusterFS_2.0_version_x86.zip
  Provides local shared storage functionalities.
x86: general_PLAT_portal_2.0_version_x86.zip
  Provides portal, unified authentication, user management, service gateway, and help center functionalities.
x86: general_PLAT_kernel_2.0_version_x86.zip
  Provides access control, resource identification, license, configuration center, resource group, and log functionalities.
x86: general_PLAT_kernel-base_2.0_version_x86.zip
  Provides alarm, access parameter template, monitoring template, report, email, and SMS forwarding functionalities.
x86: general_PLAT_Dashboard_2.0_version_x86.zip
  Provides the dashboard framework.
x86: general_PLAT_widget_2.0_version_x86.zip
  Provides dashboard widget management.
x86: general_PLAT_oneclickcheck_2.0_version_x86.zip
  Provides one-click check.
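Before uploading, you can confirm that every required controller package is present locally. The sketch below is a hypothetical helper: the download directory is a placeholder, and the file name patterns follow the naming shown in Table 7, where the version string varies by release.

```python
# Sketch: confirm that the controller application packages listed in Table 7
# are present in a local download directory before uploading them to IMC PLAT.
from pathlib import Path

PACKAGE_DIR = Path("/opt/imc-packages")        # placeholder download location
REQUIRED_PREFIXES = [
    "common_PLAT_GlusterFS_2.0",
    "general_PLAT_portal_2.0",
    "general_PLAT_kernel_2.0",
    "general_PLAT_kernel-base_2.0",
    "general_PLAT_Dashboard_2.0",
    "general_PLAT_widget_2.0",
    "general_PLAT_oneclickcheck_2.0",
]

for prefix in REQUIRED_PREFIXES:
    matches = sorted(PACKAGE_DIR.glob(f"{prefix}_*_x86.zip"))
    if matches:
        print(f"{prefix}: found {matches[0].name}")
    else:
        print(f"{prefix}: MISSING")
```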
Application packages required for the analyzer nodes
Multiple component packages are available for IMC Orchestrator Analyzer. Select the component
packages as needed.
Table 8 Installation packages required by analyzer nodes
IMC PLAT
  common_PLAT_GlusterFS_2.0_<version>.zip
    Provides local shared storage functionalities.
  general_PLAT_portal_2.0_<version>.zip
    Provides portal, unified authentication, user management, service gateway, and help center functionalities.
  general_PLAT_kernel_2.0_<version>.zip
    Provides access control, resource identification, license, configuration center, resource group, and log functionalities.
  general_PLAT_kernel-base_2.0_<version>.zip
    Provides alarm, access parameter template, monitoring template, report, email, and SMS forwarding functionalities.
  general_PLAT_websocket_2.0_<version>.zip
    (Optional.) Provides the WebSocket service. This component is required only when the analyzer is deployed on the cloud.
  general_PLAT_Dashboard_<version>.zip
    Dashboard framework.
  general_PLAT_widget_2.0_<version>.zip
    Platform dashboard widget.
  Analyzer-Collector_<version>.zip
    Install this package when deploying the analyzer.
  general_PLAT_kernel-region_2.0_<version>.zip
    (Optional.) To use IOM proxy in the Analyzer-LGA scenario, install this application.
IMC Orchestrator Analyzer
  Analyzer-Platform-<version>.zip
    Platform component package.
  Analyzer-Telemetry-<version>.zip
    Telemetry component package.
  Analyzer-NPA-<version>.zip
    Network performance analysis component package.
  Analyzer-AI-<version>.zip
    AI-driven forecast component package.
  Analyzer-Diagnosis-<version>.zip
    Diagnosis and analysis component package.
  Analyzer-SLA-<version>.zip
    Service quality analysis component package.
  Analyzer-TCP-<version>.zip
    TCP stream analysis component package.
  Analyzer-WAN-<version>.zip
    WAN application analysis component package.
  Analyzer-User-<version>.zip
    User analysis component package.
  Analyzer-AV-<version>.zip
    Audio and video analysis component package.
  Analyzer-LGA-<version>.zip
    Log analysis component package.
  Analyzer-TRA-<version>.zip
    Trace analytics component package.
Deploy IMC Orchestrator and vBGP
Before deploying IMC Orchestrator, make sure the general_PLAT_kernel-base_2.0,
general_PLAT_Dashboard_2.0, and general_PLAT_widget_2.0 installation packages have been
uploaded. These application packages will be automatically deployed during the IMC Orchestrator
deployment process.
For information about the IMC Orchestrator and vBGP deployment procedures and required
application installation packages, see HPE IMC Orchestrator Controller Installation Guide (IMC
PLAT). For information about how to obtain the document, see "Appendix A Obtain documentation".
As a best practice, use the non-RDRS scheme for this solution.
Deploy IMC Orchestrator Analyzer and collectors
For information about the IMC Orchestrator Analyzer deployment procedure and required
application installation packages, see HPE IMC Orchestrator Analyzer Deployment Guide. For
information about how to obtain the document, see "Appendix A Obtain documentation."
Follow these restrictions and guidelines when you deploy IMC Orchestrator Analyzer and collectors:
1. To plan the network configuration, see network planning for single-stack southbound network in
HPE IMC Orchestrator Analyzer Deployment Guide.
2. After switching devices are deployed, use an interface on each collector to connect to a leaf device and change the NIC type to DPDK for traffic collection (see the illustrative sketch that follows this list).
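The exact procedure for switching the collection NIC to DPDK depends on the collector version and is described in HPE IMC Orchestrator Analyzer Deployment Guide. As a purely illustrative sketch, the snippet below shows how a NIC could be rebound to a DPDK-compatible driver with the standard dpdk-devbind.py tool; the PCI address, driver name, and the assumption that dpdk-devbind.py is on the PATH are placeholders.

```python
# Hypothetical sketch: rebind a collector's collection NIC to a DPDK-compatible
# driver with the dpdk-devbind.py tool shipped with DPDK (requires root).
# The PCI address and driver name are placeholders for your hardware.
import subprocess

PCI_ADDRESS = "0000:3b:00.1"     # placeholder PCI address of the collection NIC
DPDK_DRIVER = "vfio-pci"         # placeholder DPDK-compatible kernel driver

# Show the current network device bindings, then rebind the NIC.
subprocess.run(["dpdk-devbind.py", "--status-dev=net"], check=True)
subprocess.run(["dpdk-devbind.py", f"--bind={DPDK_DRIVER}", PCI_ADDRESS], check=True)
```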
Operations monitoring
For information about operations monitoring for this solution, see IMC Orchestrator 6.3 Operations
Monitoring Configuration Guide.
Appendix A Obtain documentation
To obtain documentation for installing IMC PLAT and its components, select the product category and model and then obtain the desired document from the directory listed below.
IMC PLAT
  Manual: HPE IMC PLAT Deployment Guide-E0708
  Directory: SDN/IMC Orchestrator/Install & Upgrade
IMC Orchestrator
  Manual: HPE IMC Orchestrator Installation Guide (IMC PLAT)-E63xx
  Directory: SDN/IMC Orchestrator/Install & Upgrade
IMC Orchestrator Analyzer
  Manual: HPE IMC Orchestrator Analyzer Deployment Guide-E63xx
  Directory: SDN/IMC Orchestrator/Install & Upgrade