Juniper Contrail Networking User Guide

Data Center: Contrail Enterprise Multicloud for Fabric Management

Published
2023-07-31

Juniper Networks, Inc.
1133 Innovation Way
Sunnyvale, California 94089
USA
408-745-2000
www.juniper.net
Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc.
in the United States and other countries. All other trademarks, service marks, registered marks, or registered service
marks are the property of their respective owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right
to change, modify, transfer, or otherwise revise this publication without notice.
Data Center: Contrail Enterprise Multicloud for Fabric Management
Copyright © 2023 Juniper Networks, Inc. All rights reserved.
The information in this document is current as of the date on the title page.
YEAR 2000 NOTICE
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related
limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
END USER LICENSE AGREEMENT
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use
with) Juniper Networks software. Use of such software is subject to the terms and conditions of the End User License
Agreement ("EULA") posted at https://support.juniper.net/support/eula/. By downloading, installing or using such
software, you agree to the terms and conditions of that EULA.
Table of Contents
About This Guide | v
1 Data Center Fabric Management-Overview
About This Solutions Guide | 2
Terminology | 2
Contrail Enterprise Multicloud Architecture | 6
Contrail Enterprise Multicloud Components | 10
Contrail Command | 11
IP Fabrics | 12
Device Roles | 14
Virtual Networks | 16
Virtual Port Groups | 17
Logical Routers | 19
2 Data Center Fabric Management-Tested Implementation
Data Center Fabric Design Overview and Validated Topology | 21
Create a Greenfield Deployment For a New Data Center Network | 23
Before You Begin | 24
Creating the New Fabric and Discovering Devices | 27
Assigning Roles to Devices | 31
Applying the Role-Based Configurations to the New Fabric Devices | 31
Create a Brownfield Deployment to Add an Overlay to an Existing IP Fabric | 34
Before You Begin | 35
Adding an Existing Underlay into a Fabric and Discovering Devices | 35
Selective Onboarding of a Device | 39
Configure Virtual Networks for Multi-tenant Service Operations | 43
Create Virtual Networks | 44
Assign Interfaces to VLANs with Virtual Port Groups | 46
Enable Layer 3 Routing on Virtual Networks Using Logical Routers | 48
Verify Your Virtual Network Configuration | 50
Configure Service Chaining With PNF | 55
Service Chaining Using a PNF | 55
Service Chaining Configuration Overview | 56
Onboard an SRX Services Gateway as the PNF Device | 58
Assign Device Roles for the PNF Device | 59
Create a PNF Service Template | 60
Configure Data Center Interconnect (DCI) | 64
Data Center Interconnect Overview | 64
Data Center Interconnect Configuration Overview | 65
Assign Device Roles for Border Spine and Border Leaf Devices | 67
Manually Configure BGP Peering | 68
Configure Virtual Networks | 71
Create Virtual Port Groups | 72
Create Logical Routers | 73
Create Data Center Interconnect | 74
Verify Data Center Interconnect | 75
About This Guide
Experience the power of automation by building and running your data center with Contrail Enterprise
Multicloud, which gives network operators the flexibility they need to build a state-of-the-art data
center fabric from a single point of management. The Data Center: Contrail Enterprise Multicloud for
Fabric Management Guide shows how to build a private data center with an EVPN/VXLAN
infrastructure.
CHAPTER 1
Data Center Fabric Management-Overview
About This Solutions Guide | 2
Terminology | 2
Contrail Enterprise Multicloud Architecture | 6
Contrail Enterprise Multicloud Components | 10
About This Soluons Guide
The purpose of this guide is to provide networking professionals with the concepts and tools needed to
build data center fabrics and mulcloud networks on an EVPN/VXLAN infrastructure by using Contrail
Enterprise Mulcloud and QFX Series switches.
The intended audience for this guide includes system integrators, infrastructure professionals, partners,
and customers.
Terminology
IN THIS SECTION
Glossary Terms | 2
This section provides a summary of commonly used terms, protocols, and building block technologies
used in creating and maintaining data center networks.
Glossary Terms
ARP-Address Resolution Protocol. A protocol defined in RFC 826 for mapping a logical IP address to
a physical MAC address.
Backbone Device-A device in the WAN cloud that is directly connected to a spine device or devices
in a data center. Backbone devices are required in this reference topology to provide physical
connectivity between data centers that are interconnected using a data center interconnect (DCI).
Border Leaf-A device that typically has the sole purpose of providing a connection to one or more
external devices. The external devices, for example, multicast gateways or data center gateways,
provide additional functionality to the IP fabric.
Border Spine-A device that typically has two roles-a network underlay device and a border device
that provides a connection to one or more external devices. The external devices, for example,
multicast gateways or data center gateways, provide additional functionality to the IP fabric.
Bridged Overlay-An Ethernet-based overlay service designed for data center environments that do
not require routing within an EVPN/VXLAN fabric. IP routing can be provided externally to the fabric as
needed.
BUM-Broadcast, Unknown Unicast, and Multicast. The BUM acronym collectively identifies the three
traffic types.
Centrally-Routed Bridging Overlay-A form of IRB overlay that provides routing at a central gateway
and bridging at the edge of the overlay network. In an IRB overlay, a routed overlay and one or more
bridged overlays connect at one or more locations through the use of IRB interfaces.
Clos Network-A multistage network topology first developed by Charles Clos for telephone
networks that provides multiple paths to a destination at each stage of the topology. Non-blocking
networks are possible in a Clos-based topology.
Collapsed Spine fabric-An EVPN fabric design in which the overlay functions are collapsed onto the
spine layer rather than distributed between spine and leaf devices. This type of fabric has no leaf
layer in the EVPN core; the spine devices connect directly to the access layer.
Contrail Command-The Contrail Enterprise Multicloud user interface. Provides consolidated, easy-to-
use software designed to automate the creation and management of data center networks.
Contrail Enterprise Multicloud-A suite of products and software that combines Contrail Command as
a single point of management for private and public clouds, QFX Series switches running Junos OS as
an infrastructure for data center networking, and Contrail Insights (formerly known as AppFormix) for
telemetry and network visualization.
DCI-Data Center Interconnect. The technology used to interconnect separate data centers.
Default instance-A global instance in a Juniper Networks device that hosts the primary routing table,
such as inet.0 (default routing instance), and the primary MAC address table (default switching
instance).
DHCP relay-A function that allows a DHCP server and client to exchange DHCP messages over the
network when they are not in the same Ethernet broadcast domain. DHCP relay is typically
implemented at a default gateway.
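To illustrate, the following minimal Junos OS sketch enables DHCP relay on a gateway IRB interface; the server-group name, group name, server address, and interface are hypothetical placeholders:

    # Illustrative example: relay DHCP requests arriving on irb.100
    # to a DHCP server at 10.1.1.10 (names and addresses are placeholders).
    set forwarding-options dhcp-relay server-group DHCP-SERVERS 10.1.1.10
    set forwarding-options dhcp-relay active-server-group DHCP-SERVERS
    set forwarding-options dhcp-relay group TENANT-A interface irb.100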
EBGP-External BGP. A routing protocol used to exchange routing information between autonomous
networks. It has also been used more recently in place of traditional interior gateway protocols,
such as IS-IS and OSPF, for routing within an IP fabric.
Edge-Routed Bridging Overlay-A form of IRB overlay that provides routing and bridging at the edge
of the overlay network.
End System-An endpoint device that connects into the data center. An end system can be a wide
range of equipment but is often a server, a router, or another networking device in the data center.
ESI-Ethernet segment identifier. An ESI is a 10-octet integer that identifies a unique Ethernet
segment in EVPN. In this blueprint architecture, LAGs with member links on different access devices
are assigned a unique ESI to enable Ethernet multihoming.
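As a minimal sketch, an ESI is typically configured on the same aggregated Ethernet interface on each multihoming leaf device; the ESI value and LACP system ID below are illustrative placeholders:

    # Illustrative example: configure the same ESI and LACP system ID on
    # each leaf that hosts a member link of the multihomed LAG.
    set interfaces ae0 esi 00:11:22:33:44:55:66:77:88:99
    set interfaces ae0 esi all-active
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:11:22:33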
Ethernet-connected Multihoming-An Ethernet-connected end system that connects to the network
using Ethernet access interfaces on two or more devices.
EVPN-Ethernet Virtual Private Network. A VPN technology that supports bridged, routed, and
hybrid network overlay services. EVPN is defined in RFC 7432 with extensions defined in a number
of IETF draft standards.
EVPN Type 2 Route-Advertises MAC addresses and the associated IP addresses from end systems to
devices participating in EVPN.
IBGP-Internal BGP. In this blueprint architecture, IBGP with Multiprotocol BGP (MP-IBGP) is used for
EVPN signaling between the devices in the overlay.
IP Fabric-An all-IP fabric network infrastructure that provides multiple symmetric paths between all
devices in the fabric. We support an IP Fabric with IPv4 (an IPv4 Fabric) with all architectures
described in this guide, and an IP Fabric with IPv6 (an IPv6 Fabric) with some architectures and on
some platforms.
IP-connected Multihoming-An IP-connected end system that connects to the network using IP
access interfaces on two or more devices.
IRB-Integrated Routing and Bridging. A technique that enables routing between VLANs and allows
traffic to be routed or bridged based on whether the destination is outside or inside of a bridging
domain. To activate IRB, you associate a logical interface (IRB interface) with a VLAN and configure
the IRB interface with an IP address for the VLAN subnet.
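For example, the following minimal Junos OS sketch associates an IRB interface with a VLAN and assigns it a gateway address for the VLAN subnet; the VLAN ID and address are hypothetical:

    # Illustrative example: VLAN 100 with IRB gateway 10.0.100.1/24.
    set vlans VLAN-100 vlan-id 100
    set vlans VLAN-100 l3-interface irb.100
    set interfaces irb unit 100 family inet address 10.0.100.1/24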
Leaf Device-An access level network device in an IP fabric topology. End systems connect to the leaf
devices in this blueprint architecture.
MAC-VRF instance-A routing instance type that enables you to configure multiple customer-specific
EVPN instances instead of only one instance (the default instance). MAC-VRF instances use a
consistent configuration style across platforms. Different MAC-VRF instances can support different
Ethernet service types (VLAN-aware and VLAN-based services) on the same device in a data center.
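As a hedged sketch, a VLAN-based MAC-VRF instance on a Junos OS switch might look like the following; the instance name, route distinguisher, route target, and VNI are illustrative assumptions:

    # Illustrative example: a VLAN-based MAC-VRF EVPN instance.
    set routing-instances MACVRF-1 instance-type mac-vrf
    set routing-instances MACVRF-1 service-type vlan-based
    set routing-instances MACVRF-1 protocols evpn encapsulation vxlan
    set routing-instances MACVRF-1 protocols evpn extended-vni-list all
    set routing-instances MACVRF-1 vtep-source-interface lo0.0
    set routing-instances MACVRF-1 route-distinguisher 10.0.0.11:500
    set routing-instances MACVRF-1 vrf-target target:65000:500
    set routing-instances MACVRF-1 vlans VLAN-100 vlan-id 100
    set routing-instances MACVRF-1 vlans VLAN-100 vxlan vni 10100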
Mulservice Cloud Data Center Network-A data center network that opmizes the use of available
compute, storage, and network access interfaces by allowing them to be shared exibly across
diverse applicaons, tenants, and use cases.
NDP-Neighbor Discovery Protocol. An IPv6 protocol dened in RFC 4861 that combines the
funconality of ARP and ICMP, and adds other enhanced capabilies.
NVE-As dened in RFC 8365, A Network Virtualizaon Overlay Soluon Using Ethernet VPN
(EVPN), a network virtualizaon edge is a device that terminates a VXLAN tunnel in a network
virtualizaon overlay. For example, in a centrally routed bridging overlay, we consider spine and leaf
devices to be NVE devices.
NVO-A network virtualizaon overlay is a fabric in which we use EVPN as a control plane and
VXLAN as a data plane.
Routed Overlay-An IP-based overlay service where no Ethernet bridging is required. Also referred to
as an IP VPN. In this blueprint architecture, the routed overlay is based on EVPN Type 5 routes and
their associated procedures, and supported by VXLAN tunneling.
Spine Device-A centrally-located device in an IP fabric topology that has a connecon to each leaf
device.
Storm Control-Feature that prevents BUM trac storms by monitoring BUM trac levels and taking
a specied acon to limit BUM trac forwarding when a specied trac level is exceeded.
Underlay Network-A network that provides basic network connectivity between devices. In this
blueprint architecture, the underlay network is an IP Fabric that provides the basic IP connectivity,
usually with IPv4. We also support an IP Fabric with an IPv6 underlay with some architecture designs
on certain platforms.
VLAN trunking-The ability for one interface to support multiple VLANs.
VNI-VXLAN Network Identifier. Uniquely identifies a VXLAN virtual network. A VNI encoded in a
VXLAN header can support 16 million virtual networks.
VTEP-VXLAN Tunnel Endpoint. A loopback or virtual interface where traffic enters and exits a
VXLAN tunnel. Tenant traffic is encapsulated into VXLAN packets at a source VTEP, and de-
encapsulated when the traffic leaves the VXLAN tunnel at a remote VTEP.
VXLAN-Virtual Extensible LAN. A network virtualization tunneling protocol defined in RFC 7348 used
to build virtual networks over an IP-routed infrastructure. VXLAN is used to tunnel tenant traffic over
the IP fabric underlay from a source endpoint at an ingress device to a destination endpoint at the
egress device. These tunnels are established dynamically by EVPN. Each VTEP device advertises its
loopback address in the underlay network for VXLAN tunnel reachability between VTEP devices.
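To make the VTEP and VNI terms concrete, a minimal sketch of a VLAN-to-VNI mapping on a Junos OS switch looks like the following; the VLAN, the VNI value, and the use of the default switch instance are illustrative assumptions:

    # Illustrative example: use lo0.0 as the VTEP source interface and
    # map VLAN 100 to VXLAN network identifier 10100.
    set switch-options vtep-source-interface lo0.0
    set vlans VLAN-100 vlan-id 100
    set vlans VLAN-100 vxlan vni 10100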
VXLAN Stching-A Data Center Interconnect (DCI) feature that supports Layer 2 interconnecon
EVPN-VXLAN fabric on a per-VXLAN VNI basis.
RELATED DOCUMENTATION
Contrail Enterprise Multicloud Components | 10
Contrail Enterprise Mulcloud Architecture
IN THIS SECTION
Contrail Enterprise Mulcloud Introducon | 6
Benets of the Contrail Enterprise Mulcloud Soluon | 8
Benets of Using an SDN Controller for an EVPN/VXLAN Fabric | 9
Contrail Enterprise Mulcloud Introducon
Contrail Enterprise Mulcloud (CEM) is an intent-based networking system that automates the lifecycle
of the infrastructure and applicaon networking services on mulcloud infrastructures that are
composed of private clouds (data centers, virtualized computes) and public clouds.
For physical data center IP fabrics CEM supports the whole scope of network operaons by:
Automang Day 0 conguraon of the infrastructure. This involves conguring a set of factory reset
devices into a completely funconal fabric or the onboarding of an exisng fabric. It can also enable
multenant applicaon networking services and perform hitless OS upgrades on devices that run
Junos OS.
Automang Day 1 service operaons such as provisioning VLANs on access ports, inter-subnet
roung services, advanced inter-tenant services, across the fabric, and where needed extending
those services across physical and virtualized endpoints.
Automang Day 2 maintenance and scale out procedures of the fabric (adding devices to the fabric,
replacing failed devices, diverng trac from devices).
Collecng ow usage counters, streaming telemetry, alarms, counters from devices.
Poviding a view on how the infrastructure and multenant services are performing, idenfying
capacity bolenecks or trac anomalies with respect to the baseline
Providing workows and methods to proacvely idenfy issues, faults, correlate them and take the
relevant correcve acons.
This guide focuses on Contrail Enterprise Multicloud as the central point of management of EVPN/
VXLAN data center fabrics, as shown in Figure 1 on page 7.
Figure 1: Contrail Enterprise Multicloud
The components focused on in this guide are shown in Figure 2 on page 8, namely Contrail Command
as a central point of management and EVPN/VXLAN as the key data center technology.
Figure 2: Contrail Enterprise Multicloud—Data Center Fabric Management
Benets of the Contrail Enterprise Mulcloud Soluon
The following points explain why Juniper Networks Contrail Enterprise Mulcloud soluon can meet
your needs for a mulcloud environment:
Intent based automaon and lifecycle management of the fabric using Contrail Command removes
requirement to manually congure the fabric using the JunosOS CLI.
Cloud ready architecture—ready to take you from private data centers all the way to the mul-cloud.
An architecture that provides seamless integraon of physical and virtual VXLAN tunnel endpoints
(VTEP) by using vRouter technology.
Built-in management security that provides end–to-end security and a uniform security posture by
using Contrail Security and BMS policy management.
Takes the dream of a self-healing, self-driven network closer to reality with help from ght
integraon of Contrail Command, AppFormix and Junos OS components.
Support for Red Hat Enterprise Linux (RHEL), VMware integraon, and integraon with Nutanix.
• Open-standards based.
Benets of Using an SDN Controller for an EVPN/VXLAN Fabric
When you combine an IP fabric with an EVPN/VXLAN environment, there is a clear demarcaon
between
underlay
operaons (related to the infrastructure, devices, and soware images) and
overlay
networking services (VLANs carried over VXLAN tunnels using MP-BGP EVPN protocol). Unlike
proprietary fabrics, an EVPN/VXLAN fabric uses a role-denion for every device based on capabilies
and locaon of roung between virtual networks.
EVPN is a powerful technology that can be used in a range of use cases. It brings multenant,
mulservice capabilies to producon data centers, corporate branches and campuses, enterprises, and
service provider wide-area networks (WANs). EVPN can be deployed in network hardware, servers, and
even the public cloud. It supports networking funcons from Layer 1 to Layer 4 in a relavely simple,
integrated, and scalable manner. Last but not least, EVPN has been widely adopted across the
networking industry.
With the context provided above, let's look at the reasons an enterprise should deploy a controller for
an EVPN/VXLAN environment and why Contrail Enterprise Multicloud is the best tool for the job:
• Provides a fast and error-free infrastructure on which to build operations with minimal human effort.
Contrail Enterprise Multicloud brings simple abstractions of composable service building blocks that it
translates into underlying network building blocks, including EVPN. Service layer building blocks
include constructs such as tenants, subnets, policies, gateways, and service chains with associated
service-level controls.
• Brings Day 1 plug-and-play baselining of the physical network fabric and enables an operator to
perform Day 2 functions at an intent/service level rather than at a technology level. For example,
Contrail Enterprise Multicloud helps create the fabric, create tenants, add subnets to tenants, attach
endpoints to subnets, build and apply policies to tenants, and so on.
• Adds smart life cycle management (LCM) capabilities to the fabric. Contrail Enterprise Multicloud
offers life cycle management of the fabric.
• Opens endless positive business outcomes. For example, fabric-wide visibility and control of critical
operations means a very short lead time for daily operations, such as VLAN tickets.
• Simplifies network management using normalized configurations/operations for all tenant services.
Contrail Enterprise Multicloud eliminates the manual procedures and complexity of performing those
operations, while leaving network operators in full control of which operations should be performed
in the data center fabric.
• Adds openness at all layers. This quality is unique to Contrail Enterprise Multicloud and makes it
stand out from all other controllers in the industry. Through its use of open-standards technology in
the network (such as EVPN/VXLAN, Ansible and Jinja2 templates, and APIs for everything), Contrail
frees operators from the pitfalls of closed proprietary systems.
• Achieves real-time visibility into the usage of the infrastructure by each tenant service to minimize
outages. Contrail Enterprise Multicloud provides a single pane of glass to display the health and
status of the infrastructure. Being able to use a single tool to analyze which services and tenants are
impacted provides operators invaluable insights, which minimizes the impact of outages.
RELATED DOCUMENTATION
Contrail Enterprise Mulcloud Components | 10
Data Center Fabric Design Overview and Validated Topology | 21
Contrail Enterprise Mulcloud Components
IN THIS SECTION
Contrail Command | 11
IP Fabrics | 12
Device Roles | 14
Virtual Networks | 16
Virtual Port Groups | 17
Logical Routers | 19
This secon provides an overview of the components used in this soluon. The implementaon of each
component is covered in later secons of this guide.
Contrail Command
Contrail Command is the CEM user interface. It provides a user-friendly interface that lets you
provision your private data center network and extend it to the public cloud. It automates the process of
building a new data center from scratch, adding an overlay to an existing IP fabric, and providing
switching, routing, and security services. As a result, Contrail Command is a powerful component of the
CEM solution.
Contrail Command provides an intuitive user interface for the user—whether a network operations
administrator, a cloud administrator, or a tenant administrator—to perform any infrastructure and service
operations on the managed multicloud infrastructure. Some of the infrastructure and troubleshooting
operations that you can perform with Contrail Command include:
• Performing Junos OS upgrades on devices in the fabric.
• Automating service operations, such as provisioning VLANs on access ports, inter-subnet routing
services, and advanced inter-tenant services across the fabric, and extending those services across
physical and virtualized endpoints.
• Automating maintenance and scale-out procedures of the fabric (adding devices to the fabric,
replacing failed devices, diverting traffic from devices).
• Collecting flow usage counters, streaming telemetry, alarms, and counters from the devices, and
providing a view on how the infrastructure and the multitenant services are performing, identifying
capacity bottlenecks or traffic anomalies with respect to the baseline.
• Providing workflows and methods to proactively identify issues and faults, correlate them, and take
corrective actions.
SEE ALSO
Create a Greeneld Deployment For a New Data Center Network | 23
IP Fabrics
IN THIS SECTION
New IP Fabric | 12
Exisng IP Fabric | 13
An IP fabric is a set of devices and physical network funcons (PNFs) that fall under the same data
center administrator responsibility area. You can provision new or exisng fabrics using CEM.
New IP Fabric
When you build a brand new data center network, you can use CEM to automate the provisioning and
configuration of your fabric. The new fabric process (also known as a greenfield deployment) provisions
new devices to form an IP Clos data center topology with leaf (TOR) and spine devices. Leaf switches
are typically QFX5K devices and spine devices are typically QFX10K devices, but there are many
variants.
CEM creates an IP fabric underlay network built on EBGP, combined with an EVPN/VXLAN overlay
supported by IBGP. It discovers the data center topology, assigns IP addresses to the loopback
interfaces, creates fabric subnets, loopback subnets, and management subnets, and configures the EBGP
underlay and the IBGP overlay.
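The following sketch illustrates the kind of per-device configuration this process produces on a leaf switch; the AS numbers, addresses, and group names are hypothetical, and the configuration that CEM actually pushes may differ:

    # Illustrative EBGP underlay peering toward a spine device.
    set protocols bgp group UNDERLAY type external
    set protocols bgp group UNDERLAY local-as 65011
    set protocols bgp group UNDERLAY neighbor 172.16.0.1 peer-as 65001
    # Illustrative IBGP overlay session carrying EVPN routes,
    # established between loopback addresses.
    set protocols bgp group OVERLAY type internal
    set protocols bgp group OVERLAY local-address 10.0.0.11
    set protocols bgp group OVERLAY family evpn signaling
    set protocols bgp group OVERLAY neighbor 10.0.0.1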
For IP fabric greenfield onboarding, CEM uses Zero-Touch Provisioning (ZTP) to initialize and configure
factory-default devices. The shaded area of Figure 3 on page 13 shows the scope of ZTP operations.
These devices are connected to an out-of-band management network that is provisioned before the
new fabric process is run. All of the devices must be reachable on the management network.
After you have run the initial ZTP process on your fabric, you can easily assign device roles, create
device-specific configurations, and push the configurations to each device.
Figure 3: Scope of Zero Touch Provisioning
Exisng IP Fabric
If you already have a deployed data center infrastructure for which you have congured the underlay of
the data center fabric., your network is considered a browneld network. To migrate to an overlay, you
only need to let Contrail Command discover the exisng topology of pre-connected and pre-congured
spine and leaf devices, assign device roles, create device–specic conguraons, and push the
conguraons to each device.
SEE ALSO
Create a Greeneld Deployment For a New Data Center Network | 23
Create a Browneld Deployment to Add an Overlay to an Exisng IP Fabric | 34
IP Fabric Underlay Network Design and Implementaon
Data Center EVPN-VXLAN Fabric Architecture Guide
Device Roles
Assigning roles to devices tailors the data center design for distributed or centralized routing. There are
two types of roles that you assign to devices in your fabric:
• Physical roles
• Routing bridging (overlay) roles
The roles define the routing and bridging responsibilities of the device. A device can have one physical
role and one or more routing bridging roles.
In a centrally-routed bridging (CRB) design, when you configure the logical router to allow traffic to flow
between Ethernet virtual network instances, routing occurs at the spine device. Traffic is routed from
the leaf to the spine and back. IRB interfaces are configured in the overlay at each spine device to route
traffic between virtual networks.
In edge-routed bridging (ERB), you can configure the ERB-UCAST-Gateway role on a border leaf device,
which causes routing to occur at the leaf switch. The IRB interfaces are configured at the leaf switch to
enable unicast traffic routing at the leaf switch.
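As a rough sketch of what ERB looks like on a leaf switch, the tenant IRB interface is placed in a Layer 3 VRF on the leaf itself; the instance name, route distinguisher, route target, and addresses below are hypothetical:

    # Illustrative ERB leaf configuration: IRB gateway for VLAN 100
    # placed in a tenant VRF so unicast routing happens at the leaf.
    set interfaces irb unit 100 family inet address 10.0.100.1/24
    set routing-instances TENANT-A instance-type vrf
    set routing-instances TENANT-A interface irb.100
    set routing-instances TENANT-A route-distinguisher 10.0.0.11:100
    set routing-instances TENANT-A vrf-target target:65000:100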
Table 1 on page 14 shows the routing and bridging roles that are available for each of the physical
roles.
Table 1: Physical and Routing Bridging Roles in Contrail

Physical role: Spine
Route-Reflector-Specifies that the device acts as a route reflector for IBGP to satisfy
the full-mesh requirement and enable scalability for the fabric. Usually all spine
devices or gateways are given this role.
Null-Specifies that the device is used in an IP routing environment that provides only
underlay routing; the device does not participate in VXLAN tunnels. Applies only to
spine devices when edge routing and bridging is used.
CRB-Gateway-Provides Layer 3 unicast gateway functionality for routing between
VNIs using an IRB interface. Routing is done centrally on a spine device.
CRB-MCAST-Gateway-Provides a Layer 3 multicast gateway for routing between VNIs
using an IRB interface. Multicast routing is done centrally. This role is not supported
on the QFX5100 and QFX5200 hardware families.
DC-Gateway-Provides a routing connection for traffic to exit the data center. Assign
this role to devices that provide connectivity to external networks or between fabrics.
DCI-Gateway-Interconnects a logical router in one data center to a logical router in a
different data center using EVPN VXLAN Type 5 routes.
PNF-Servicechain-Specifies the devices where the Layer 3 PNF is attached.
AR-Replicator-Specifies that the device replicates BUM traffic from one overlay tunnel
to others.
AR-Client-Specifies that the device sends BUM traffic to an AR replicator.

Physical role: Leaf
CRB-Access-Specifies devices in a CRB architecture that perform only Layer 2 VXLAN
functionality (bridging).
ERB-UCAST-Gateway-Specifies that routing occurs at the leaf switch. IRB interfaces
are configured at the leaf switch to enable unicast traffic routing. While unicast traffic
can be routed at the leaf switches, multicast traffic routing still occurs at the spine
devices. In contrast, the CRB-Gateway role at a spine is capable of routing both unicast
and multicast traffic. With ERB, leaf switches route the unicast traffic, which makes the
configuration of the CRB-Gateway role on the spine unnecessary since unicast traffic
will never be routed to a spine device. Instead, you must configure the spine devices
with the CRB-MCAST-Gateway role to route multicast traffic when required.
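To relate the Route-Reflector role to device configuration, a spine acting as an overlay route reflector typically carries IBGP settings like the following; the cluster ID, addresses, and group name are hypothetical:

    # Illustrative spine overlay group acting as a route reflector:
    # the cluster statement makes the leaf peers route-reflector clients.
    set protocols bgp group OVERLAY type internal
    set protocols bgp group OVERLAY local-address 10.0.0.1
    set protocols bgp group OVERLAY family evpn signaling
    set protocols bgp group OVERLAY cluster 10.0.0.1
    set protocols bgp group OVERLAY neighbor 10.0.0.11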