HPE IMC Orchestrator 6.3 DCI Configuration Guide
The information in this document is subject to change without notice.
© Copyright 2023 Hewlett Packard Enterprise Development LP
Contents
Overview
Configure controller-deployed DCI
  Deployment workflows
  Network planning
    Network topology
    Resource plan
  Procedure
    Configure underlay network basic settings
    Preconfigure basic multicast settings
    Configure controller basic settings
    Configure overlay Layer 2 interconnection
    Configure overlay Layer 3 interconnection without firewall traversal
    Configure Layer 2 multicast interconnection
    Configure Layer 3 multicast interconnection
  O&M monitoring
Overview
EVPN VXLAN DCI uses a three-hop VXLAN tunnel:
• The first hop is from the local DC leaf device to the local DC ED device.
• The second hop is from the local DC ED device to the remote DC ED device.
• The third hop is from the remote DC ED device to the remote DC leaf device.
To enable overlay Layer 2 VXLAN interconnection, Layer 3 VXLAN interconnection without firewall traversal, and Layer 3 VXLAN interconnection with firewall traversal, establish EBGP neighbor relationships between the EDs of the two DCs, establish IBGP neighbor relationships between the EDs and the devices within each DC, and configure VXLAN mapping, route reorigination, PBR, and other technologies. In different DCs, the same subnet of the same tenant might use different VXLANs. To ensure that traffic of the same tenant and subnet is forwarded at Layer 2 when these DCs are interconnected, perform VXLAN mapping on the EDs. Route reorigination modifies the RD, L3VNI, and RTs in EVPN routes to achieve DC interconnection without exposing the L3 VNIs of the DCs.
Detailed configuration is as follows:
• Overlay Layer 2 VXLAN interconnection—DC 1 EDs and DC 2 EDs map their local tenant VXLANs to the same intermediate VXLAN. The EDs of both DCs exchange type-2 EVPN routes for service interconnection of the same network segment in different DCs.
• Overlay Layer 3 VXLAN interconnection without firewall traversal:
  • With route reduction disabled—VPN mapping is enabled on DC 1 EDs and DC 2 EDs. The import RTs of one VPN are the export RTs of the other VPN, and vice versa. Route replication is performed on local tenant VPNs to enable service interconnection between different network segments in different DCs.
  • With route reduction enabled—VPN mapping is enabled on DC 1 EDs and DC 2 EDs, and static routes are configured. Service interconnection between different network segments in different DCs is implemented by configuring routes to local tenant VPNs without specifying next hop addresses.
Figure 1 Network diagram
[Figure: DC 1 and DC 2 are interconnected through a DCI link and the Internet. In each DC, Spine-Border 1 and Spine-Border 2 are collocated with the EDs and connected by an IPL, and Leaf 1 through Leaf 4 (with IPLs between leaf pairs) connect Server 1 through Server 4.]
Dedicated EDs, border devices collocated with EDs, and spine and border devices collocated with EDs are available. To ensure availability, the two EDs of each DC form a DR system. The figure shows the spine and border devices collocated with EDs.
Controller-deployed DCI—This solution offers DCI Layer 2 interconnection and DCI Layer 3 interconnection with or without firewall traversal. It requires preconfiguration on the EDs and deployment from the controller.
Configure controller-deployed DCI
If route reduction is configured, reserve one interface for the service loopback group feature on each ED. Do not use those interfaces for any other purpose.
Deployment workflows
Overlay Layer 2 interconnect deployment workflow
Figure 2 Deployment workflow
[Workflow: Start → Configure basic underlay network settings → Configure basic controller settings (add a fabric, configure a VDS, add device groups, add a border gateway, configure a VLAN-VXLAN mapping) → Configure the tenant network (add a tenant, add a vNetwork, add a vRouter) → Configure DCI (add a Layer 2 DC interconnect) → End]
Overlay Layer 3 interconnect without firewall traversal deployment workflow
Figure 3 Deployment workflow
[Workflow: Same as Figure 2, except that a Layer 3 DC interconnect is added instead of a Layer 2 DC interconnect.]
Layer 2 multicast interconnection deployment workflow
Figure 4 Deployment workflow
[Workflow: Start → Configure basic underlay network settings → Preconfigure basic multicast settings → Configure basic security service resource settings → Configure basic controller settings (add a fabric, configure a VDS, add a device group, add a border gateway, add a VLAN-VXLAN mapping) → Configure the tenant network (add a tenant and bind it to a border gateway, add a vNetwork, add a vRouter, bind the vRouter to the border gateway) → Configure DCI (add a Layer 2 DC interconnect) → End]
Layer 3 multicast interconnection deployment workflow
Figure 5 Deployment workflow
[Workflow: Same as Figure 4, except that a Layer 3 DC interconnect is added and Layer 3 multicast is configured.]
Note: If firewalls are disabled, services might not be isolated. For example, suppose a vRouter has two subnets: Subnet A is configured with Layer 2 DCI, Subnet B is configured with Layer 3 DCI, and Subnet C in the peer DC is configured with Layer 3 DCI to interconnect with Subnet B. In this scenario, Subnet C can also communicate with Subnet A; the two cannot be isolated.
Network planning
Three network models are supported: dedicated EDs, border devices collocated with EDs, and spine and border devices collocated with EDs. The three network models share the same overlay configuration but differ in underlay configuration.
Network topology
Dedicated EDs network diagram
The dedicated EDs model supports overlay Layer 2 interconnect scenario and overlay Layer 3
interconnect without firewall traversal scenarios, but does not support the overlay Layer 3
interconnect with firewall traversal scenario.
Figure 6 Dedicated EDs
[Figure: In each DC, dedicated EDs ED 1 and ED 2 (connected by a peer link) connect to the remote DC through a DCI switch. Border 1 and Border 2 provide Internet access, Spine 1 and Spine 2 form the spine layer, and Leaf 1 through Leaf 4 (with peer links between leaf pairs) connect Server 1 through Server 4.]
For the connections between switching devices, see IMC Orchestrator 6.3 Underlay Network Configuration Guide. For planning of the management and service IP addresses of the devices, see Table 1.
Table 1 IP assignment
Device             Management IP address
Border 1 (DC 1)    192.168.11.8/24
Border 2 (DC 1)    192.168.11.9/24
ED 1 (DC 1)        192.168.11.10/24
ED 2 (DC 1)        192.168.11.11/24
Spine 1 (DC 1)     192.168.11.2/24
Spine 2 (DC 1)     192.168.11.3/24
Leaf 1 (DC 1)      192.168.11.4/24
Leaf 2 (DC 1)      192.168.11.5/24
Leaf 3 (DC 1)      192.168.11.6/24
Leaf 4 (DC 1)      192.168.11.7/24
Border 1 (DC 2)    192.168.21.8/24
Border 2 (DC 2)    192.168.21.9/24
ED 1 (DC 2)        192.168.21.10/24
ED 2 (DC 2)        192.168.21.11/24
Spine 1 (DC 2)     192.168.21.2/24
Spine 2 (DC 2)     192.168.21.3/24
Leaf 1 (DC 2)      192.168.21.4/24
Leaf 2 (DC 2)      192.168.21.5/24
Leaf 3 (DC 2)      192.168.21.6/24
Leaf 4 (DC 2)      192.168.21.7/24
Resource plan
Table 2 Resource plan

Device management network
  DC 1: Subnet: 192.168.11.0/24; Gateway: 192.168.11.1
  DC 2: Subnet: 192.168.21.0/24; Gateway: 192.168.21.1

IP address pools
  DC interconnection network
    DC 1: Name: DC interconnection network 1; Subnets: 10.70.1.0/24, 2001:10:70:1::/112; Default address pool: Not selected
    DC 2: Name: DC interconnection network 2; Subnets: 10.70.10.0/24, 2001:10:70:10::/112; Default address pool: Not selected
  Tenant carrier LB internal network
    DC 1: Name: Tenant carrier LB internal network 1; Subnets: 10.50.1.0/24, 2001:10:50:1::/112
    DC 2: Name: Tenant carrier LB internal network 2; Subnets: 10.50.10.0/24, 2001:10:50:10::/112
  Tenant carrier FW internal network
    DC 1: Name: Tenant carrier FW internal network 1; Subnets: 10.60.1.0/24, 2001:10:60:1::/112; Default address pool: Not selected
    DC 2: Name: Tenant carrier FW internal network 2; Subnets: 10.60.10.0/24, 2001:10:60:10::/112; Default address pool: Not selected
  Virtual management network
    DC 1: Name: Virtual management network 1; Subnet: 192.168.10.0/24; Gateway address: 192.168.10.1; Default address pool: Not selected
    DC 2: Name: Virtual management network 2; Subnet: 192.168.100.0/24; Gateway address: 192.168.100.1; Default address pool: Not selected

VLAN pool (tenant carrier network)
  DC 1: Name: Tenant carrier VLAN 1; VLAN range: 500 to 999; Default VLAN pool: Not selected
  DC 2: Name: Tenant carrier VLAN 2; VLAN range: 500 to 999; Default VLAN pool: Not selected

Overlay resources
  Fabric
    DC 1: Name: Fabric 1; AS number: 100
    DC 2: Name: Fabric 2; AS number: 1000
  VDS
    DC 1: Name: VDS 1; Carrier fabric: Fabric 1; VXLAN ID range: 1 to 16777215
    DC 2: Name: VDS 2; Carrier fabric: Fabric 2; VXLAN ID range: 1 to 16777215
  vRouter
    DC 1: Layer 2 interconnect: router2801; Layer 3 interconnect: router2802
    DC 2: Layer 2 interconnect: router2804; Layer 3 interconnect: router2805
  Subnet
    DC 1: Layer 2 interconnect: 11.28.1.0/24, 2001:11:28:1::/64; Layer 3 interconnect: 11.28.2.0/24, 2001:11:28:2::/64
    DC 2: Layer 2 interconnect: 11.28.1.0/24, 2001:11:28:1::/64; Layer 3 interconnect: 11.28.3.0/24, 2001:11:28:3::/64
Procedure
Configure underlay network basic settings
Incorporate switching devices
Configure and incorporate the switching devices in the network. For details, see IMC Orchestrator 6.3 Underlay Network Configuration Guide.
The basic underlay network configuration of EDs is the same as that of border devices.
Preconfigure DCI
The loopback interfaces used to establish BGP neighbor relationships between the EDs of the two DCs, and the DRNI virtual addresses, must be mutually reachable. Configure static or dynamic routing according to networking requirements to achieve this interconnection. OSPF is used as an example. The following steps are performed manually.
Preconfigure route reduction
Reserve interfaces to be added to service loopback groups based on actual traffic size.
<ED1> system-view
[ED1] service-loopback group 5 type inter-vpn-fwd
[ED1] interface Twenty-FiveGigE1/0/44
[ED1-Twenty-FiveGigE1/0/44] port link-mode bridge
[ED1-Twenty-FiveGigE1/0/44] port service-loopback group 5
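To confirm that the reserved interface has joined the service loopback group, you can optionally display its running configuration. This is a sanity check, not part of the required procedure; the output should include the port service-loopback group 5 line.
[ED1] display current-configuration interface Twenty-FiveGigE 1/0/44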
Preconfigure DCI for dedicated ED devices
ED 1 of DC 1 is used as an example.
Configure OSPF.
[ED1] ospf 1
[ED1-ospf-1] non-stop-routing
[ED1-ospf-1] area 0.0.0.0
[ED1-ospf-1] quit
Configure the interfaces that connect the ED to the DCI switch to enable Layer 3 interconnection between the local and remote VTEPs.
[ED1] interface Ten-GigabitEthernet1/0/17
[ED1-Ten-GigabitEthernet1/0/17] port link-mode route
[ED1-Ten-GigabitEthernet1/0/17] ip address 12.1.1.1 255.255.255.252
[ED1-Ten-GigabitEthernet1/0/17] ospf network-type p2p
[ED1-Ten-GigabitEthernet1/0/17] ospf 1 area 0.0.0.0
[ED1-Ten-GigabitEthernet1/0/17] quit
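After the DCI switch side is also configured, you can optionally verify that the OSPF adjacency toward the DCI switch is established. This check assumes the DCI switch runs OSPF in area 0 as planned above.
[ED1] display ospf 1 peer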
Configure BGP:
The command descriptions are as follows:
• peer ebgp as-number 1000: 1000 is the AS number of the remote DC.
• peer 10.1.2.10 group ebgp: 10.1.2.10 is the VTEP IP address of ED 1 (DC 2), which is the real IP address of the device as a DRNI member device.
• peer 10.1.2.11 group ebgp: 10.1.2.11 is the VTEP IP address of ED 2 (DC 2), which is the real IP address of the device as a DRNI member device.
• peer ebgp route-policy SDN_POLICY_DCI_L3CONNECT export: The SDN_POLICY_DCI_L3CONNECT name is fixed. When the Layer 3 DC interconnect is created, the controller deploys the route policy. Verify the configuration by executing the display current-configuration configuration route-policy command on the device.
• peer { group-name } re-originated [ imet | mac-ip ] replace-rt: Re-originate the EVPN routes.
[ED1] bgp 100
[ED1-bgp-default] group ebgp external
[ED1-bgp-default] peer ebgp as-number 1000
[ED1-bgp-default] peer ebgp connect-interface LoopBack0
[ED1-bgp-default] peer ebgp ebgp-max-hop 64
[ED1-bgp-default] peer 10.1.2.10 group ebgp
[ED1-bgp-default] peer 10.1.2.11 group ebgp
[ED1-bgp-default] address-family l2vpn evpn
[ED1-bgp-default-evpn] peer ebgp enable
[ED1-bgp-default-evpn] peer ebgp route-policy SDN_POLICY_DCI_L3CONNECT export
[ED1-bgp-default-evpn] peer ebgp router-mac-local dci
[ED1-bgp-default-evpn] peer ebgp re-originated replace-rt
[ED1-bgp-default-evpn] peer ebgp re-originated mac-ip replace-rt
[ED1-bgp-default-evpn] peer ebgp re-originated imet replace-rt
[ED1-bgp-default-evpn] peer ebgp re-originated smet replace-rt
[ED1-bgp-default-evpn] peer ebgp re-originated s-pmsi replace-rt
[ED1-bgp-default-evpn] peer evpn re-originated replace-rt
[ED1-bgp-default-evpn] peer evpn re-originated mac-ip replace-rt
[ED1-bgp-default-evpn] peer evpn re-originated imet replace-rt
[ED1-bgp-default-evpn] peer evpn re-originated smet replace-rt
[ED1-bgp-default-evpn] peer evpn re-originated s-pmsi replace-rt
[ED1-bgp-default-evpn] quit
[ED1-bgp-default] quit
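After the EDs in both DCs complete this configuration, you can optionally verify that the EBGP EVPN sessions to the remote EDs are in Established state. The peer addresses follow this example.
[ED1] display bgp peer l2vpn evpn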
Preconfigure DCI on border devices collocated with EDs
The Border 1 device in DC 1 is used as an example.
Configure OSPF.
[border1] ospf 1
[border1-ospf-1] non-stop-routing
[border1-ospf-1] area 0.0.0.0
[border1-ospf-1] quit
Configure the interfaces that connect Border 1 to the DCI switch to enable Layer 3 interconnection between the local and remote VTEPs.
[border1] interface Ten-GigabitEthernet1/0/17
[border1-Ten-GigabitEthernet1/0/17] port link-mode route
[border1-Ten-GigabitEthernet1/0/17] ip address 12.1.1.1 255.255.255.252
[border1-Ten-GigabitEthernet1/0/17] ospf network-type p2p
[border1-Ten-GigabitEthernet1/0/17] ospf 1 area 0.0.0.0
[border1-Ten-GigabitEthernet1/0/17] quit
Configure BGP:
The command descriptions are as follows:
• peer ebgp as-number 1000: 1000 is the AS number of the remote DC.
• peer 10.1.2.8 group ebgp: 10.1.2.8 is the VTEP IP address of Border 1 (DC 2), which is the real IP address of the device as a DRNI member device.
• peer 10.1.2.9 group ebgp: 10.1.2.9 is the VTEP IP address of Border 2 (DC 2), which is the real IP address of the device as a DRNI member device.
• peer ebgp route-policy SDN_POLICY_DCI_L3CONNECT export: The SDN_POLICY_DCI_L3CONNECT name is fixed. When the Layer 3 DC interconnect is created, the controller deploys the route policy. Verify the configuration by executing the display current-configuration configuration route-policy command on the device.
• peer { group-name } re-originated [ imet | mac-ip ] replace-rt: Re-originate the EVPN routes.
[border1] bgp 100
[border1-bgp-default] group ebgp external
[border1-bgp-default] peer ebgp as-number 1000
[border1-bgp-default] peer ebgp connect-interface LoopBack0
[border1-bgp-default] peer ebgp ebgp-max-hop 64
[border1-bgp-default] peer 10.1.2.8 group ebgp
[border1-bgp-default] peer 10.1.2.9 group ebgp
[border1-bgp-default] address-family l2vpn evpn
[border1-bgp-default-evpn] nexthop evpn-drni group-address
[border1-bgp-default-evpn] peer ebgp enable
[border1-bgp-default-evpn] peer ebgp route-policy SDN_POLICY_DCI_L3CONNECT export
[border1-bgp-default-evpn] peer ebgp router-mac-local dci
[border1-bgp-default-evpn] peer ebgp re-originated replace-rt
[border1-bgp-default-evpn] peer ebgp re-originated mac-ip replace-rt
[border1-bgp-default-evpn] peer ebgp re-originated imet replace-rt
[border1-bgp-default-evpn] peer evpn re-originated replace-rt
[border1-bgp-default-evpn] peer evpn re-originated mac-ip replace-rt
[border1-bgp-default-evpn] peer evpn re-originated imet replace-rt
[border1-bgp-default-evpn] quit
[border1-bgp-default] quit
Configure a route policy:
Execute the route-policy SDN_PREDEF_deny_default commands on the ED device to filter default routes and prevent loops. This route policy will not be restored by the controller. The controller uses this route policy when it creates a VPN on a border device at the creation of an overlay Layer 3 DC interconnect (with firewall traversal) with route reduction disabled.
[border1] ip prefix-list SDN_PREDEF_default index 10 permit 0.0.0.0 0
[border1] ipv6 prefix-list SDN_PREDEF_default index 10 permit :: 0
[border1] route-policy SDN_PREDEF_deny_default deny node 0
[border1-route-policy-SDN_PREDEF_deny_default-0] if-match ip address prefix-list SDN_PREDEF_default
[border1-route-policy-SDN_PREDEF_deny_default-0] if-match ipv6 address prefix-list SDN_PREDEF_default
[border1-route-policy-SDN_PREDEF_deny_default-0] quit
[border1] route-policy SDN_PREDEF_deny_default permit node 1000
[border1-route-policy-SDN_PREDEF_deny_default-1000] quit
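You can optionally verify the predefined route policy with the display command mentioned earlier:
[border1] display current-configuration configuration route-policy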
Assign the links between Border 1 and the firewalls to VLANs.
[border1] interface Bridge-Aggregation1
[border1-Bridge-Aggregation1] port trunk permit vlan 1 500 to 999
[border1-Bridge-Aggregation1] quit
[border1] interface Bridge-Aggregation2
[border1-Bridge-Aggregation2] port trunk permit vlan 1 500 to 999
[border1-Bridge-Aggregation2] quit
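To confirm the VLAN assignments, you can optionally display the running configuration of each aggregate interface; the output should show the permitted VLAN list.
[border1] display current-configuration interface Bridge-Aggregation 1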
Preconfigure DCI on spine and border devices collocated with EDs
Spine-Border 1 in DC 1 is used as an example.
Configure OSPF.
[spine-border1] ospf 1
[spine-border1-ospf-1] non-stop-routing
[spine-border1-ospf-1] area 0.0.0.0
[spine-border1-ospf-1] quit
Configure the interfaces that connect the device to the DCI switch to enable Layer 3 interconnection between the local and remote VTEPs.
[spine-border1] interface Ten-GigabitEthernet1/0/17
[spine-border1-Ten-GigabitEthernet1/0/17] port link-mode route
[spine-border1-Ten-GigabitEthernet1/0/17] ip address 12.1.1.1 255.255.255.252
[spine-border1-Ten-GigabitEthernet1/0/17] ospf network-type p2p
[spine-border1-Ten-GigabitEthernet1/0/17] ospf 1 area 0.0.0.0
[spine-border1-Ten-GigabitEthernet1/0/17] quit
Configure BGP:
The command descriptions are as follows:
• peer ebgp as-number 1000: 1000 is the AS number of the remote DC.
• peer 10.1.2.2 group ebgp: 10.1.2.2 is the VTEP IP address of Spine-Border 1 (DC 2), which is the real IP address of the device as a DRNI member device.
• peer 10.1.2.3 group ebgp: 10.1.2.3 is the VTEP IP address of Spine-Border 2 (DC 2), which is the real IP address of the device as a DRNI member device.
• peer ebgp route-policy SDN_POLICY_DCI_L3CONNECT export: The SDN_POLICY_DCI_L3CONNECT name is fixed. When the Layer 3 DC interconnect is created, the controller deploys the route policy. Verify the configuration by executing the display current-configuration configuration route-policy command on the device.
• peer { group-name } re-originated [ imet | mac-ip ] replace-rt: Re-originate the EVPN routes.
• peer evpn advertise original-route: This command is required on spine and border devices collocated with EDs.
[spine-border1] bgp 100
[spine-border1-bgp-default] group ebgp external
[spine-border1-bgp-default] peer ebgp as-number 1000
[spine-border1-bgp-default] peer ebgp connect-interface LoopBack0
[spine-border1-bgp-default] peer ebgp ebgp-max-hop 64
[spine-border1-bgp-default] peer 10.1.2.2 group ebgp
[spine-border1-bgp-default] peer 10.1.2.3 group ebgp
[spine-border1-bgp-default] address-family l2vpn evpn
[spine-border1-bgp-default-evpn] peer ebgp enable
[spine-border1-bgp-default-evpn] peer ebgp route-policy SDN_POLICY_DCI_L3CONNECT
export
[spine-border1-bgp-default-evpn] peer ebgp router-mac-local dci
[spine-border1-bgp-default-evpn] peer ebgp re-originated replace-rt
[spine-border1-bgp-default-evpn] peer ebgp re-originated mac-ip replace-rt
[spine-border1-bgp-default-evpn] peer ebgp re-originated imet replace-rt
[spine-border1-bgp-default-evpn] peer evpn re-originated replace-rt
[spine-border1-bgp-default-evpn] peer evpn re-originated mac-ip replace-rt
[spine-border1-bgp-default-evpn] peer evpn re-originated imet replace-rt
[spine-border1-bgp-default-evpn] peer evpn advertise original-route
[spine-border1-bgp-default-evpn] quit
[spine-border1-bgp-default] quit
Configure a route policy:
Execute the route-policy SDN_PREDEF_deny_default commands on the ED device to filter default routes and prevent loops. This route policy will not be restored by the controller. The controller uses this route policy when it creates a VPN on a border device at the creation of an overlay Layer 3 DC interconnect (with firewall traversal) with route reduction disabled.
[spine-border1] ip prefix-list SDN_PREDEF_default index 10 permit 0.0.0.0 0
[spine-border1] ipv6 prefix-list SDN_PREDEF_default index 10 permit :: 0
[spine-border1] route-policy SDN_PREDEF_deny_default deny node 0
[spine-border1-route-policy-SDN_PREDEF_deny_default-0] if-match ip address prefix-list SDN_PREDEF_default
[spine-border1-route-policy-SDN_PREDEF_deny_default-0] if-match ipv6 address prefix-list SDN_PREDEF_default
[spine-border1-route-policy-SDN_PREDEF_deny_default-0] quit
[spine-border1] route-policy SDN_PREDEF_deny_default permit node 1000
[spine-border1-route-policy-SDN_PREDEF_deny_default-1000] quit
Assign the links between Spine-Border 1 and the firewalls to VLANs.
[spine-border1] interface Bridge-Aggregation1
[spine-border1-Bridge-Aggregation1] port trunk permit vlan 1 500 to 999
[spine-border1-Bridge-Aggregation1] quit
[spine-border1] interface Bridge-Aggregation2
[spine-border1-Bridge-Aggregation2] port trunk permit vlan 1 500 to 999
[spine-border1-Bridge-Aggregation2] quit
Preconfigure basic multicast settings
Preconfigure the EDs
[ED1] interface Ten-GigabitEthernet 1/0/17
[ED1-Ten-GigabitEthernet 1/0/17] pim sm
[ED2] interface Ten-GigabitEthernet 1/0/17
[ED2-Ten-GigabitEthernet 1/0/17] pim sm
Preconfigure the spine devices
[spine] multicast routing
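You can optionally verify that PIM-SM is enabled on the DCI-facing interfaces of the EDs:
[ED1] display pim interface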
Configure controller basic settings
This section introduces only the basic configuration procedures. For specific configuration data, see "Configure controller basic settings" in the chapter for each scenario.
Log in to the controller
After the controller is deployed, the corresponding menus are loaded in IMC PLAT. You can use the controller functions after logging in to IMC PLAT.
To log in to IMC PLAT:
Enter the IMC PLAT login address (default: http://ucenter_ip_address:30000/central/index.html) in the browser, and then press Enter to open the login page shown in Figure 7.
ucenter_ip_address: Northbound service virtual IP address of the Matrix cluster where IMC PLAT resides.
In the login address, 30000 is the port number.
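For example, if the northbound service virtual IP address were 192.168.12.100 (a hypothetical address for illustration), the login address would be http://192.168.12.100:30000/central/index.html.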
Figure 7 IMC PLAT login page
Add a fabric
DC 1:
  Basic configuration: Name: fabric1; Overlay BGP AS number: 100
  Advanced configuration: Suppress unknown unicast: Selected; Suppress unknown multicast: Selected; Suppress broadcast: Selected; Multicast network: On (only in multicast scenarios)
DC 2:
  Basic configuration: Name: fabric2; Overlay BGP AS number: 1000
  Advanced configuration: Suppress unknown unicast: Selected; Suppress unknown multicast: Selected; Suppress broadcast: Selected; Multicast network: On (only in multicast scenarios)
Fabric 1 of DC 1 is used as an example.
1. Navigate to the Automation > Data Center Networks > Fabrics > Fabrics page, click Add to
add a fabric. Configure the following parameters:
Name: fabric1
Overlay BGP AS Number: 100
Configure other parameters according to network requirements. This step uses the default
settings.
Figure 8 Adding a fabric
2. Click OK to complete the fabric creation.
3. Click , click the Settings tab, and then select Suppress Unknown Unicast, Suppress
Unknown Multicast and Suppress Broadcast. Configure other advanced parameters
according to the network requirements. This step uses the default settings.
Figure 9 Configuring advanced fabric settings
Configure a VDS
DC 1:
  Carrier fabric: fabric1
  Advanced configuration: Bridge name: vds1-br; VXLAN tunnel interface name: vxlan_vds1-br; vSwitch learned flow entries aging time (seconds): 300
DC 2:
  Carrier fabric: fabric2
  Advanced configuration: Bridge name: vds2-br; VXLAN tunnel interface name: vxlan_vds2-br; vSwitch learned flow entries aging time (seconds): 300
The VDS of DC 1 is used as an example.
1. Navigate to the Automation > Common Network Settings > Virtual Distributed Switch page. Click the edit icon to modify VDS 1. Add the created fabric named fabric1 on the Carrier Fabric tab.
Figure 10 Adding a fabric for the VDS
2. Click Advanced Settings to configure advanced settings for VDS 1:
Bridge Name: vds1-br
VXLAN Tunnel Interface Name: vxlan_vds1-br
vSwitch Learned Flow Entries Aging Time (seconds): 300
Configure other parameters according to network requirements. This step uses the default
settings.
Figure 11 Advanced configuration
Add a device group
DC 1:
  Device group name: bdgroup1
  Fabric: fabric1
  Position: For dedicated EDs, select only DC Interconnection. For border devices collocated with EDs and spine and border devices collocated with EDs, select Border Gateway and DC Interconnection.
  HA mode: DRNI
  Connection mode: IPs from different networks
  Address pool list: Select the planned custom address pools: the DC interconnection network 1, the tenant carrier LB internal network 1, the tenant carrier FW internal network 1, and the virtual management network 1.
  VLAN pool list: Tenant carrier VLAN 1
DC 2:
  Device group name: bdgroup3
  Fabric: fabric2
  Position: For dedicated EDs, select only DC Interconnection. For border devices collocated with EDs and spine and border devices collocated with EDs, select Border Gateway and DC Interconnection.
  HA mode: DRNI
  Connection mode: IPs from different networks
  Address pool list: Select the planned custom address pools: the DC interconnection network 2, the tenant carrier LB internal network 2, the tenant carrier FW internal network 2, and the virtual management network 2.
  VLAN pool list: Tenant carrier VLAN 2
The device group of DC 1 is used as an example.
1. Navigate to the Automation > Data Center Networks > Fabrics > Fabrics page. Click
for fabric1. Click the Device groups tab.
2. Click Add and configure the following parameters:
Basic Info
Device Group Name: bdgroup1.
Fabric: fabric1.
Position: For dedicated EDs, only select DC Interconnection. For border devices
collocated with EDs and spine and border devices collocated with EDs, select Border
Gateway and DC Interconnection.
HA Mode: DRNI.
Connection Mode: IPs from Different Networks.
Border Gateway Settings
Firewall Deployment Mode: Hairpin.
IP Address Pool List: Select the planned custom addresses: the DC interconnection
network 1, the tenant carrier LB internal network 1, the tenant carrier FW internal
network 1, and the virtual management network 1.
VLAN Pool List: Select the planned custom VLAN: Tenant carrier VLAN 1.
Bind device group members: Click Add Device, and select the created devices Spine-Border 1 and Spine-Border 2 (this example uses the spine and border devices collocated with EDs).
Figure 12 Adding a device group (spine and border devices collocated with EDs)
3. Click Apply in the top right corner to add the device group.
Add a border gateway
DC 1:
  Name: gw1
  Gateway type: Composite gateway
  Border gateway member: Name: gw1member; Fabric: Fabric 1; Device group: bdgroup1; Priority: 1
DC 2:
  Name: gw2
  Gateway type: Composite gateway
  Border gateway member: Name: gw2member; Fabric: Fabric 2; Device group: bdgroup3; Priority: 1
The border gateway of DC 1 is used as an example.
1. Navigate to the Automation > Data Center Networks > Common Network Settings >
Border Gateways page. Click Add and configure the following settings:
Name: gw1
Border Gateway Type: Composite gateway.
2. Click Add Border Gateway Member to configure the following settings:
Name: gw1member
Fabric: fabric1.
Device Group: bdgroup1.
Priority: 1
3. Click Apply to add the gateway member.
4. Click Apply in the top right corner to add the border gateway.
Configure a VLAN-VXLAN mapping
DC 1:
  Name: map1
  VLAN-VXLAN mapping: Name: map2801; Start VLAN ID: 2109; Start VXLAN ID: 2109; Mapping range length: 4; Access mode: VLAN
DC 2:
  Name: map2
  VLAN-VXLAN mapping: Name: map2802; Start VLAN ID: 2209; Start VXLAN ID: 2209; Mapping range length: 4; Access mode: VLAN
The VLAN-VXLAN mapping at DC 1 is used as an example.
1. Navigate to the Automation > Data Center Networks > Resource Pools > VNID Pools > VLAN-VXLAN Mappings page. Click Add and select VLAN-VXLAN Mapping. Enter the name map1.
2. Click Add Mapping to configure the following parameters:
Name: map2801
Start VLAN ID: 2109.
Start VXLAN ID: 2109.
Mapping Range Length: 4.
Access Mode: VLAN.
Figure 13 Adding a VLAN-VXLAN mapping
3. Click Apply to add the mapping. On the Apply to Interface tab, apply the mapping to all aggregate interfaces that connect the leaf devices to the servers.
4. Click Apply to add the VLAN-VXLAN mapping table.
Add a tenant
DC 1: Tenant name: tenant1; VDS name: VDS1
DC 2: Tenant name: tenant2; VDS name: VDS2
The tenant of DC 1 is used as an example.
1. Navigate to the Automation > Data Center Networks > Tenant Management > All Tenants
page. Click Add to configure the following parameters:
Tenant Name: tenant1.
VDS Name: VDS1.
2. Click Apply to add the tenant.
Configure overlay Layer 2 interconnection
Configure the tenant network
DC 1:
  Name: network2801
  Type: VXLAN
  Segment ID: 2109
  Network sharing: Off
  IPv4 subnet: IP version: IPv4; Name: subnetv4-2801; Subnet: 11.28.1.0/24; Gateway IP: 11.28.1.1
  IPv6 subnet: IP version: IPv6; DHCP: Off; Name: subnetv6-2801; Subnet: 2001:11:28:1::/64; Gateway IP: 2001:11:28:1::1