A Dell Technical White Paper
DR4100 Best Practice Guide
Part 1: Setup, Replication and Networking
Dell Data Protection Group
April 2014
Revisions

Date          Description
April 2014    Initial release
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS
AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED
WARRANTIES OF ANY KIND.
© 2014 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission
of Dell Inc. is strictly forbidden. For more information, contact Dell.
PRODUCT WARRANTIES APPLICABLE TO THE DELL PRODUCTS DESCRIBED IN THIS DOCUMENT MAY BE FOUND
AT: http://www.dell.com/learn/us/en/19/terms-of-sale-commercial-and-public-sector

Performance of network reference architectures
discussed in this document may vary with differing deployment conditions, network loads, and the like. Third party products may be
included in reference architectures for the convenience of the reader. Inclusion of such third party products does not necessarily
constitute Dell’s recommendation of those products. Please consult your Dell representative for additional information.
Trademarks used in this text:
Dell™, the Dell logo, Dell Boomi™, Dell Precision™, OptiPlex™, Latitude™, PowerEdge™, PowerVault™, PowerConnect™,
OpenManage™, EqualLogic™, Compellent™, KACE™, FlexAddress™, Force10™ and Vostro™ are trademarks of Dell Inc. Other
Dell trademarks may be used in this document. Cisco Nexus®, Cisco MDS®, Cisco NX-OS®, and Cisco Catalyst® are registered
trademarks of Cisco Systems Inc. EMC VNX® and EMC Unisphere® are registered trademarks of EMC Corporation. Intel®, Pentium®,
Xeon®, Core® and Celeron® are registered trademarks of Intel Corporation in the U.S. and other countries. AMD® is a registered
trademark and AMD Opteron™, AMD Phenom™ and AMD Sempron™ are trademarks of Advanced Micro Devices, Inc. Microsoft®,
Windows®, Windows Server®, Internet Explorer®, MS-DOS®, Windows Vista® and Active Directory® are either trademarks or
registered trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat® and Red Hat® Enterprise Linux®
are registered trademarks of Red Hat, Inc. in the United States and/or other countries. Novell® and SUSE® are registered trademarks of
Novell Inc. in the United States and other countries. Oracle® is a registered trademark of Oracle Corporation and/or its affiliates.
Citrix®, Xen®, XenServer® and XenMotion® are either registered trademarks or trademarks of Citrix Systems, Inc. in the United States
and/or other countries. VMware®, Virtual SMP®, vMotion®, vCenter® and vSphere® are registered trademarks or trademarks of
VMware, Inc. in the United States or other countries. IBM® is a registered trademark of International Business Machines Corporation.
Broadcom® and NetXtreme® are registered trademarks of Broadcom Corporation. QLogic is a registered trademark of QLogic
Corporation. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and/or
names or their products and are the property of their respective owners. Dell disclaims proprietary interest in the marks and names of
others.
Table of contents
Revisions
Executive summary
1 Administration guides
2 Best practice guides
3 Case studies
4 System setup
4.1 Hardware
4.2 Expansion Shelves
4.3 Initial out of the box setup
4.4 Container creation
4.4.1 Container limit
4.4.2 Access Protocols
4.4.3 Marker Support
5 Replication setup and planning
5.1 Replication considerations
5.2 Replicating Containers
5.3 Calculate Replication Interval
5.4 Bandwidth Optimization
5.5 Replication Encryption
5.6 Addressing Packet Loss
5.7 Replication Tuning
5.7.1 High Bandwidth Conditions
5.7.2 Reducing Bandwidth Concerns
5.8 Domain Access
6 Networking
6.1 Supported Network Cards
6.2 Network Interface Card Bonding
6.3 Secure Network Separation
6.4 Troubleshooting
A Resources
Executive summary
This document contains some of the best practices for deploying, configuring, and maintaining a Dell DR4X00 backup
and deduplication appliance in a production environment. Following these best practices can help ensure the product is
optimally configured for a given environment.
Note: This guide is not a replacement for the administrator's guide. In most cases, this guide provides only a high-level
description of the problem and solution. Please refer to the administrator's guide for the specific options and steps
required to implement the recommended actions.
Please consider the administrator's guide a prerequisite for this paper, or at least a reference for how to
complete the tasks listed in this guide. The administrator's guide for release 2.1 can be found in the list of links below.
In addition to this document, we recommend the following documents for the advanced reader.
1 Administration guides
Dell DR Series System Administrator Guide
Dell DR4100 Systems Owner’s Manual
Dell DR Series System Command Line Reference Guide
2 Best practice guides
Best Practices for Setting up NetVault Backup Native Virtual Tape Library (nVTL)
Best Practices for Setting up NetVault SmartDisk
DR4X00 Disk Backup Appliance Setup Guide for CommVault Simpana 10
Setting up EMC Networker on the Dell DR4X00 Disk Backup Appliance Through CIFS
Setting up EMC Networker on the Dell DR4X00 Disk Backup Appliance Through NFS
Setting up Veeam on the Dell DR4X00 Disk Backup Appliance
Setting up vRanger on the Dell DR4X00 Disk Backup Appliance
Setup Guide for Symantec Backup Exec 2010 R3
3 Case studies
Haggar Case Study
CGD Case Study
TAGAL Steel Case Study
DCIG DR4X00 and EqualLogic Better Together
ESG DR4X00 Lab Validation
Pacific BioSciences Case Study
4 System setup
4.1 Hardware
When the DR ships to a customer's site and is powered on for the first time, the system still needs to complete a
background initialization (init) of the RAID. During this background init, the system may seem sluggish or write slower
than expected. This will resolve itself within 24 hours.
4.2 Expansion Shelves
The proper boot procedure for the DR appliance is to first power on the DR expansion shelf (or shelves) and then
connect it to the DR appliance. Once the shelf is connected, power on the DR appliance and ensure the expansion
license(s) are available or ready to add.
4.3 Initial out of the box setup
This section outlines the various items that must be configured when the DR appliance is first powered on.
Registration: The DR product has the ability to notify administrators of recent updates. Simply register the DR
appliance and request updates. As new releases are posted on the web, update notifications are emailed
alerting the administrator to the new upgrades that are available. It is recommended that administrators enable this
option during registration.
Setting up alerts: The DR product has built-in monitoring through Dell OpenManage, which is installed on the
appliance. All critical software and hardware alerts are sent via email. All hardware-related events are
also sent as a trap over SNMP to the configured monitoring server.
It is highly recommended that alerts are configured on the DR appliance and sent to a global distribution list
so they can be monitored and resolved quickly.
Password reset: The DR appliance allows the administrator to reset the administrator
password. To ensure security, administrators should set up password reset immediately. This ensures that if the
admin password is ever forgotten, it can be securely reset.
Joining the domain: If the DR is to be joined to an Active Directory domain, it is recommended that this action is
performed from the start. Doing so allows ACLs to be applied and domain users to be used to access the
data from the backup application.
Adding the DR to Active Directory: Log on to the DR GUI, click Active Directory in the side panel, click
Join in the upper right-hand corner, and enter the name of the domain and the credentials to add the DR to the
domain.
It will now be possible to log on to the DR appliance through the GUI\CLI via Domain\user for any users that are
in the global group.
To allow multiple groups to logon to the DR appliance using Active Directory do the following:
- Create a new global group in Active Directory
- Add each group to be allowed to access DR product to this global group.
- Add the new global group to the DR using the following command from the CLI:
authenticate --add --login_group "domain\group"
Users that are part of the selected AD group will be able to logon to the CLI and GUI to administer the device.
Setting ACLs and inheritance: It is best to set ACLs and inheritance when the system is first set up. By
default, every user has access to all data. Attempting to change permissions and inheritance later is very time-
consuming.
Set time: If the DR is not joined to an Active Directory domain, it is best to configure the DR to use NTP. If
the DR is joined to an Active Directory domain, it will automatically synchronize its time.
4.4 Container creation
The DR appliance uses containers to store backup data. These containers are segmented folders that have individual
protocols, security permissions, marker types and connection types.
Below are considerations to take into account when creating containers:
1. A maximum of 32 containers is allowed on a single appliance
2. Access protocols are set at a container level (NFS, CIFS, OST, RDS)
3. Security is set at a container level (locking down via IP, Unix/Windows ACLs, etc.)
4. Markers are set at the container level (None, Auto, CommVault, Networker, etc.)
5. OST quotas are set at a container level
6. Container names cannot contain spaces
4.4.1 Container limit
There are several different strategies for approaching container creation. In general, it is better to have as few
containers as possible to maintain ease of management. This section is designed to assist in developing an optimal
container strategy based on your organization’s needs.
With most configurations where replication is not required, it is common to have a single container if the administrators
have a single Data Management Application (DMA). When replication is required, container creation can become more
complex, so it is best to choose what containers should and should not be replicated as well as prioritizing what data
should be replicated first. Below are four scenarios to help with picking the proper container creation strategy:
Scenario 1: Separate data to be replicated vs. data not to be replicated
Scenario 2: Separate data that has a higher value to replicate vs. lower value
Scenario 3: Separate different types of data into different containers
Scenario 4: Separate different DMA types into different containers
4.4.1.1 Scenario 1: Separate data to be replicated vs. data not to be replicated
Robert has Exchange data that is required to have 2 copies, with 1 copy maintained offsite. Robert also has VM data,
which is NOT required to have 2 copies.
Recommendation: Robert should have the following two containers:
- Container 1. For the Exchange data so that it can be replicated each week off site.
- Container 2. For the local VM data so that it is not replicated and does not take up valuable WAN
bandwidth.
4.4.1.2 Scenario 2: Separate data that has a higher value to replicate vs. lower value
Robert also has intellectual property (IP) that he would like to have replicated offsite each day.
Recommendation: Robert should create a third container and enable replication schedules on all containers
(giving more time to the third container) to ensure replication of the IP container completes each day.
4.4.1.3 Scenario 3: Separate different types of data into different containers
Assume that Robert also has SQL data that bypasses the DMA and is written directly to the DR.
Recommendation: Robert should create a fourth container to allow the container to be locked down to just that
SQL server, as well as to allow independent access that does not interfere with the DMA.
4.4.1.4 Scenario 4: Separate different DMA types into different containers
Robert also has a VM infrastructure that he wishes to protect using Dell vRanger. This is in addition to the other DMA,
which is used to protect his physical servers.
Recommendation: Create a fifth container for vRanger data. This allows the most flexibility in terms of
locking down the NFS/CIFS share for vRanger, and it also allows for the most flexibility in terms of replicating the
data offsite in the future.
4.4.2 Access Protocols
Access protocols are set on a per-container basis when a container is created. In some cases these cannot be changed
later. The protocol options are as follows:
CIFS or NFS only
CIFS and NFS together (although cross protocol support is not supported)
OST Only
RDS Only
It is possible to add or remove CIFS/NFS access. However, once RDS/OST has been added to a container, it cannot be changed.
Security is set at a container level (locking down via IP, Unix/Windows ACLs, etc.).
For CIFS shares it is recommended that the shares are set with the most restrictive ACLs, and further locked down by the
IP or DNS name of machines that are allowed to connect to that container.
For NFS shares, it is recommended that root user be set to nobody and that NFS shares are further locked down by the IP
or DNS name of the machines that are allowed to connect to that container.
4.4.3 Marker Support
Many DMAs add metadata into the backup stream to enable them to find, validate and restore data they wrote into the
file. This metadata makes the data appear unique to dedupe-enabled storage. In order to properly dedupe the data, the
markers need to be removed before the stream is processed.
In the DR, markers are set on a container level and are set to auto by default. This allows all known detected markers to
be stripped before the data is processed. In situations where the DMA does not have markers, or the DMA is known, a
slight performance increase can be had by setting the markers correctly.
If the DMA in use is Backup Exec or NetBackup, the marker should be set to None. For other DMAs such as CommVault,
Networker, Arcserve, or any of the other supported markers, set the marker to the type that corresponds to the
appropriate DMA.
Note: Changing the marker type after data is ingested could make the data appear unique, causing dedupe savings to be
adversely impacted until all ingested data with the previous marker is removed. To avoid this problem, the proper
marker should be set on containers from the start.
5 Replication setup and planning
The DR appliance provides robust replication capabilities to provide a complete backup solution for multi-site
environments. With WAN optimized replication, only unique data is transferred to reduce network traffic and improve
recovery times. Replication can also be scheduled to occur during non-peak periods, and prioritizes ingest data over
replication to ensure optimal backup windows. The following sections cover the various considerations and planning that
should be taken into account for replication with the DR4100 backup appliance. As always, the information provided
below consists of guidelines and best practices meant to supplement the information provided in the DR
administration guide.
5.1 Replication considerations
Replication can support up to 32 concurrent replications to a single appliance.
Replicated data is already compressed and deduplicated prior to its transfer to the destination DR appliance.
This results in approximately 85%- 90% reduction in data being transferred from the source to the target device.
Bandwidth throttling works between pairs of devices.
Replication supports three encryption options: none, 128-bit, and 256-bit. 128-bit encryption is recommended.
Replication uses a 10MB TCP window by default. Contact support if this needs to be adjusted higher for
high-latency/low-bandwidth links.
Replication can be scheduled on a per container basis.
Container names should match on each DR to simplify disaster recovery.
Replication also replicates CIFS and NFS security bits. For CIFS shares the target DR also needs to be in the
same domain or forest as the source DR for ACLs to be applied correctly.
5.2 Replicating Containers
The maximum number of containers supported is 32 per appliance. For optimum replication performance, it is
recommended that the number of replicated containers be kept to a minimum.
When planning larger deployments, the following recommendations should be considered to maintain an acceptable
level of performance:
1. In larger environments it can be easy to quickly approach the 32 container limit. A simple method to reduce the
number of replicated containers is to leverage container directories.
2. In some scenarios it may become necessary to replicate more than 32 containers to a single physical site. In
such a situation it is required to utilize two separate head units and fewer expansion shelves in order to provide
the necessary resources for improved performance.
5.3 Calculate Replication Interval
Calculating the required bandwidth for replication will assist in properly sizing the infrastructure for maximum
performance. In order to calculate the time required to replicate a given container the following two points of data are
required:
1. Identify the amount of data that will be replicated. A common method is to view current backup jobs and log
files to determine the amount of data being backed up each day. The more precise this number is, the more
accurate the bandwidth calculation will be.
Since any data transferred during replication has already been compressed and deduplicated, roughly 85%-90% of
the original size is eliminated before transfer. Start by multiplying the original data size by 15% to determine the
amount of data to be replicated. For example, to transfer two terabytes of data, break it down into megabytes by
multiplying the value by 1048576. To convert 2TB to MB the formula would be: 2TB * 1048576 = 2097152 MB.
Now, reduce this number by 85% and assign it to the variable replica_data:
replica_data = 2097152 MB * .15 = 314572 MB
2. Determine the effective bandwidth. A common method to determine bandwidth between two sites utilizes a
freeware product called iPerf (https://code.google.com/p/iperf/). The following steps provide the command line
instructions to calculate bandwidth:
a. Run iperf -s -w 5M on a server at the central site.
b. Run iperf -c <IP of server at central site> -w 5M on a server at the remote site.
c. Capture the resulting bandwidth reported from the server at the central site, showing the calculated
bandwidth between sites.
Assign the acquired bandwidth value to the variable effective_bandwidth. This value may be obtained through
other methods of your choosing, but should be specified in MB/s for further calculations. A bandwidth value of
10 MB/s will be used for the following example.
3. Determine the acceptable amount of time allowed for replication. Convert the allowed replication time to
seconds (i.e. 10 hours converted is 10*60*60 = 36000 seconds).
With this information, the following formula can be used to calculate the time required to replicate the data.
replication_interval = replica_data / effective_bandwidth
Using the example of 2TB and a 10 MB/s effective bandwidth, the time can be calculated as follows:
replication_interval = 314572 MB / 10 MB/s = 31457 seconds (8.7 hours)
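For convenience, this arithmetic can be scripted. The following Python sketch simply restates the formula above; the
2TB data size, the 85% reduction factor, and the 10 MB/s bandwidth are the assumed example values from this section,
not measured figures.

# Estimate the replication interval from the example figures above (assumed values).
def replication_interval(data_tb, effective_bandwidth_mb_s, reduction=0.85):
    """Return the estimated replication time in seconds."""
    data_mb = data_tb * 1048576                 # convert TB to MB
    replica_data = data_mb * (1 - reduction)    # only ~15% is actually transferred
    return replica_data / effective_bandwidth_mb_s

seconds = replication_interval(2, 10)           # 2TB at 10 MB/s
print("%d seconds (%.1f hours)" % (seconds, seconds / 3600))
# Prints: 31457 seconds (8.7 hours)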
5.4 Bandwidth Optimization
When utilizing replication it may be necessary to enforce bandwidth limitations. This can lower the impact of replication
traffic on the network. For example, if the WAN link also is required to support Video Conferencing, limits can be
applied to the DR traffic to allow that application the required bandwidth.
Keep in mind, when setting bandwidth throttling policies on a container replicating between two physical DR appliances,
the policy will be applied for all containers between the replicated devices. For example, containers 1 and 2 on a DR
appliance are set to replicate to a secondary DR appliance. When bandwidth throttling is set on container 1, the
bandwidth on the second container will also be throttled.
Replication schedules can be set to ensure one container gets more time replicating (for high priority data) or to schedule
replication to occur in off-peak hours, although in most situations it is recommended to leave the default settings and let
replication transfer data as needed over the network.
When multiple containers are being replicated between the same DR appliances, the replication engine round-robins the
requests across the containers. In this situation, the containers may not stay in sync if there are large
amounts of data waiting to be transferred.
5.5 Replication Encryption
For the best balance of performance and security, encryption should be set to 128-bit, which is a good fit for
most environments. For situations where the replicated data is being transferred across the open Internet, it is strongly
recommended to tunnel the replication traffic through an encrypted VPN tunnel.
5.6 Addressing Packet Loss
Occasionally a given link between two sites may introduce packet loss, resulting in slow data transfer or replication
processes that terminate due to errors or a simple timeout. When packet loss arises on slow links, high-speed links, or
links with jitter or high latency, the TCP window size may need to be adjusted to compensate. In such instances please
contact the Dell Support department, and they will assist in custom tuning the window size for the particular link.
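The window size needed to keep a link busy is governed by its bandwidth-delay product. As a rough illustration (this is
general TCP tuning arithmetic, not a DR-specific command; the bandwidth and latency figures are assumptions), the
Python sketch below estimates how much data must be in flight for a given link:

# Bandwidth-delay product: the amount of unacknowledged data (MB) needed
# to keep a link fully utilized. Example figures below are assumptions.
def bdp_window_mb(bandwidth_mb_s, rtt_ms):
    return bandwidth_mb_s * (rtt_ms / 1000.0)

print(bdp_window_mb(10, 80))    # 0.8 MB -- well under the 10MB default window
print(bdp_window_mb(100, 200))  # 20.0 MB -- exceeds the default; tuning may help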
5.7 Replication Tuning
In some instances it may be beneficial to prevent replication from running all the time. Consider the following scenarios
when deciding whether to replicate on a per-container basis.
5.7.1 High Bandwidth Conditions
In situations where bandwidth is very high between a source and destination DR appliance, or many sources are
replicating to a single target appliance, the backup window may be negatively impacted when backups occur during the
replication process. If the backup window is unacceptable in such a scenario, the replication should be rescheduled to
occur outside the backup window, or the available bandwidth should be limited to < 50MB/s.
A high-bandwidth scenario is one where the WAN bandwidth between the source and destination DR units is greater
than 100MB/s (800Mbit/s).
5.7.2 Reducing Bandwidth Concerns
A successful replication configuration interacts with its environment positively. In situations where replication traffic is
negatively impacting other services, fine tuning becomes necessary to limit the impact. The following suggestions will
help make sure replication is not placing a burden on other services.
1. Make certain to schedule the replication process outside the hours where it could impact business. If video
conferencing or other business critical applications are experiencing the effect of a low-bandwidth situation,
verify the replication schedule and make sure it happens during non-business hours. Also, make sure to account
for the backup window when the daily backups are being stored to the DR appliance.
2. In situations where the bandwidth of the impacted services is well known, the bandwidth available to the
replication process can be reduced. Throttling the replication bandwidth requires more time for the replication
process to complete, but leaves room for the impacted services to operate normally. For example, if video
conferencing requires 1MB/s on a 10MB/s link, scale back the available replication bandwidth to 9MB/s,
leaving sufficient bandwidth for both.
5.8 Domain Access
In addition to data, NFS and CIFS security information is also replicated between DR appliances. This allows access to
each DR appliance joined to the same domain / forest from user or group accounts with appropriate permissions. Access
to a given DR appliance will be denied when not joined to the same domain/forest. Make certain that all DR appliances
are joined to the same domain, which provides access to all devices without concern for entering different permissions
for each appliance.
Note: CIFS and NFS protocol access settings are not synced between source and target DRs. The source cannot know
how the remote site is configured, so these settings are not transferred. Before failing over, make sure the target DR is
set up with the appropriate protocol access as required.
6 Networking
The DR appliance provides many networking capabilities, designed to further improve the ingest and recovery speeds in
any environment. One such feature is secure separation, allowing network optimization by preventing unnecessary traffic
on the production network via routing the backup, management, and replication traffic to separate network interfaces.
The DR appliance also supports a multiple network interface cards, including 10GbE, providing features such as
adaptive load balancing and dynamic link aggregation. The following sections define the various options and
configurations associated with the DR4100, including steps to optimize the appliance to a given environment. As always,
the information provided below are guidelines and best practices that are meant to be supplemental to the information
provided in the DR administration guide.
6.1 Supported Network Cards
The DR4100 supports various network card configurations. Table 1 outlines the different combinations of network cards
with which it is possible to configure the DR appliance. Both the 1GB and 10GB BASE-T network cards support auto-
negotiation, which is enabled by default. The Broadcom 10GB SFP interface has the ability to auto-negotiate when using
the R8H2F transceiver.
When using the Broadcom 10GB SFP network interface card, ensure that the 10GB SR SFP+ transceiver (R8H2F) is
used. Failure to use the proper transceiver will result in errors on the DR4100's login screen.
Table 1   Supported network card configurations

Combo   Internal (NDC)                                Add-On (NIC)
1       Broadcom (FM487) 4 x 1GB BASE-T               Intel Dual (XP0NY) 2 x 1GB BASE-T
                                                      Broadcom (57M9) 2 x 1GB BASE-T
2       Intel Quad (R1XFC) 4 x 1GB BASE-T             Intel Dual (XP0NY) 2 x 1GB BASE-T
                                                      Broadcom (57M9) 2 x 1GB BASE-T
3       Intel Dual (P71JP) 2 x 10GB BASE-T +          Intel Dual (XP0NY) 2 x 1GB BASE-T
        2 x 1GB BASE-T                                Broadcom (57M9) 2 x 1GB BASE-T
4       Broadcom (MT09V) 2 x 10GB BASE-T +            Intel Dual (XP0NY) 2 x 1GB BASE-T
        2 x 1GB BASE-T                                Broadcom (57M9) 2 x 1GB BASE-T
5       Broadcom (Y36FR) 2 x 10GB SFP+ +              Intel Dual (XP0NY) 2 x 1GB BASE-T
        2 x 1GB BASE-T                                Broadcom (57M9) 2 x 1GB BASE-T
6.2 Network Interface Card Bonding
Network interface card (NIC) bonding provides additional throughput and/or failover functionality in the event a link is
lost. The DR4100 supports two bonding modes: dynamic link aggregation and adaptive load balancing (802.3ad and
ALB). Each of these modes has its own advantages and disadvantages that should be considered before choosing a
mode.
Dynamic link aggregation (Mode 4 or 802.3ad) creates aggregation groups that utilize the same speed and duplex (i.e.
10GB and 10GB full-duplex links). Mode 4 (See Figure 1) is highly beneficial in increasing speed and bandwidth
capability for multiple data streams, but will not increase the speed or bandwidth capability of a single data stream.
Slave selection for outgoing traffic is executed according to a simple XOR policy. When utilizing mode 4 it is important
to note that the maximum bandwidth available is not always equal to the sum of each link in the bond. Also, always
ensure that the switch(es) being used support 802.3ad dynamic link aggregation.
Figure 1 Dynamic link aggregation
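To see why a single stream does not benefit from mode 4, consider how the XOR policy selects a slave. The following
Python sketch models the common layer-2 transmit hash used by Linux bonding (a simplified illustration, not
DR-specific code; the MAC values are hypothetical): every frame for a given source/destination MAC pair hashes to
the same slave, so one stream always travels over one physical link.

# Simplified 802.3ad layer-2 transmit hash: (src XOR dst) modulo slave count.
# The MAC byte values below are hypothetical examples.
def xor_slave(src_mac_last_byte, dst_mac_last_byte, num_slaves):
    return (src_mac_last_byte ^ dst_mac_last_byte) % num_slaves

print(xor_slave(0x0A, 0x1F, 2))  # one media server: always the same slave
print(xor_slave(0x0B, 0x1F, 2))  # a second source may hash to the other slave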
Adaptive load balancing (Mode 6 or ALB), the default load balancing mode, is transmit load balancing with the addition
of receive load balancing (see Figure 2). The receive load balancing uses address resolution protocol (ARP) to intercept
packets and reassign destination MAC addresses. This means that traffic is distributed across slave NICs according to
current load. In this mode it is not possible to mix interfaces of different speeds (i.e. 10GB and 1GB links), but no
specialized switch support is required. When utilizing mode 6, the total available bandwidth of the bond is equal to the
bandwidth of a single physical connection.
Note: Always ensure that data source systems (i.e. systems that send data to the DR4100) are located on the same
subnet. Failure to do so will result in all traffic being sent to the first interface in the bond, because adaptive load
balancing cannot properly balance the load when data sources are located on a remote subnet. This is a result of ALB's
use of ARP, which is subnet-specific, and of routers' inability to forward ARP broadcasts and updates.
Figure 2 Adaptive load balancing
By default, the DR4100 sets up bonding on its fastest interfaces. If the appliance is configured with NIC combo 3 (see
Table 1) the 2x 10GB interfaces will become the new bond0. If the DR4100 is configured with combo 1, the 4x 1GB
interfaces will become bond0. As of firmware 2.1 the default bond behavior can be changed by utilizing the
auto_bonding_speed parameter. See below for usage example.
To reset the default bond, run the following command:

network --factory_reset [--auto_bonding_speed <1G|10G>]

After resetting the interface, the DR4100 will prompt you to enter a reboot command.

Note: This command has to be issued from iDRAC, KVM or a local interface because it will disconnect any existing
network connections.

Example:

administrator@DR1 > network --factory_reset --auto_bonding_speed 1G
Warning: This will stop all system operation and will reset the network configuration to
factory settings and will require a system reboot. Existing configuration will be lost.
Password required to proceed. Please enter the administrator password:
administrator@DR1 > system --reboot
Note: When creating a bond ensure that the interfaces to bond have active links. This will ensure that the system will
be operational after the bonding of the interfaces is complete.
6.3 Secure Network Separation
Secure network separation is advanced networking functionality that enables administrators to segment traffic to
different NICs and subnets (see Table 2). Advanced networking makes it possible to assign backup traffic to one bond,
management traffic to a second bond, and replication traffic to a third bond. This section will cover the following
advanced networking scenarios in detail:
Scenario 1: Leverage separate interfaces for management, replication and backup traffic
Scenario 2: Leverage one bonded interface for management, replication and OST traffic and another for backup
traffic
Scenario 3: Replication between sites with dedicated interfaces
Scenario 4: Multiple appliance replication
Scenario 5: Backup to different IPs on a single DR appliance
Table 2   Traffic segmentation

Segments: Management, Replication, Backup
6.3.1.1 Scenario 1: Leverage separate interfaces for management, replication and backup traffic
Sarah, a network administrator, has an office located in Boston and another located in New York (See Figure 3). At each
office she has a DR appliance and wishes to configure separate interfaces for management, backup and replication
traffic. Her desired configuration is as follows:
Use bond0 for management.
Use dedicated 1GB interface for replication traffic.
Use dedicated 2x 10GB bonded interfaces for backup traffic.
Figure 3 Scenario 1 topology
Note: Bond0 should be used for management traffic as it contains the default route and will allow for management of
the device from all accessible subnets. For backup and replication traffic, static routes should be added to allow
communication between the devices.
Hardware configuration:
DR1: 2x 1GB interfaces and bond0 as 2x 10GB interfaces
DR2: 2x 1GB interfaces and 2x 10GB interfaces