Dell EMC Integrated System for Microsoft
Azure Stack HCI: Creating an Azure Stack
HCI cluster using Windows Admin Center
Deployment Guide
Abstract
This deployment guide provides an overview of Dell EMC Integrated System solutions
for creating a Microsoft Azure Stack hyperconverged infrastructure (HCI) cluster using
Microsoft Windows Admin Center.
Part Number: H18581
September 2021
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Contents

Chapter 1: Introduction
    Document overview
    Audience and scope
    Known issues
Chapter 2: Solution Overview
    Solution introduction
    Azure Stack HCI deployment models
    Solution integration and network configurations
Chapter 3: Solution deployment
    Introduction
    Deployment prerequisites
    Predeployment configuration
    Deploying and configuring a host cluster using Windows Admin Center
        Get started
        Networking
        Clustering
        Storage
    Postdeployment configuration
Chapter 4: References
    Dell Technologies documentation
    Microsoft documentation
Introduction
This chapter presents the following topics:
Topics:
Document overview
Audience and scope
Known issues
Document overview
This deployment guide provides an overview of Dell EMC Integrated System solutions for creating a Microsoft Azure Stack
hyperconverged infrastructure (HCI) cluster using Microsoft Windows Admin Center, guidance on how to integrate solution
components, and instructions for preparing and deploying the solution infrastructure.
For end-to-end deployment steps, use the information in this guide along with Network Integration and Host Network
Configuration Options.
Audience and scope
The audience for this document includes systems engineers, field consultants, partner engineering team members, and
customers with knowledge about deploying HCIs with the Azure Stack HCI operating system, Hyper-V, and Storage Spaces
Direct.
NOTE: The instructions in this deployment guide are applicable to the Azure Stack HCI operating system only.
Known issues
For a list of known issues, see the Known Issues page.
Solution Overview
This chapter presents the following topics:
Topics:
Solution introduction
Azure Stack HCI deployment models
Solution integration and network configurations
Solution introduction
Dell EMC Integrated System for Microsoft Azure Stack HCI includes various configurations of AX nodes from Dell Technologies.
Azure Stack HCI uses a flexible solution architecture with software-defined storage, compute, and networking rather than a
fixed component design. The AX nodes offer a high-performance, scalable, and secure foundation on which to build a
software-defined storage infrastructure.
For information about the supported AX nodes and the operating systems that each AX node supports, see the Support Matrix
for Microsoft HCI Solutions.
The solutions are available in both hybrid and all-flash configurations. For more information about the available configurations,
see the AX nodes specification sheet.
Azure Stack HCI deployment models
Dell EMC Integrated System for Microsoft Azure Stack HCI offers the following types of cluster infrastructure deployments:
Switchless storage networking
Stretched cluster infrastructure
Scalable infrastructure
Switchless storage networking
This variant of Dell EMC Integrated System for Microsoft Azure Stack HCI offers two to four nodes in a switchless
configuration for storage traffic. This infrastructure can be implemented using any of the validated and supported AX nodes.
However, the number of nodes in a cluster varies between the AX node models, based on the number of network adapters that
each model supports.
Switchless storage networking offers two full-mesh configurations:
Single-link
Dual-link
For more information about switchless storage networking deployments, see Dell EMC Microsoft HCI Solutions Deployment
Guide.
Stretched cluster infrastructure
The Microsoft Azure Stack HCI operating system adds support for disaster recovery between two sites using Azure Stack HCI
clusters. With Storage Replica as the foundation, stretched clusters support both synchronous and asynchronous replication of
data between the two sites. The replication direction (unidirectional or bidirectional) can be configured for either an active/
passive or an active/active stretched cluster configuration.
NOTE: Stretched clustering infrastructure is only validated with manual deployment of Azure Stack HCI clusters. For more
information about stretched cluster infrastructure deployments, see Dell EMC Integrated System for Microsoft Azure Stack
HCI: Stretched Cluster Deployment Reference Architecture Guide.
Scalable infrastructure
The scalable offering within Dell EMC Integrated System for Microsoft Azure Stack HCI encompasses various configurations of
AX nodes. In this Azure Stack HCI solution, as many as 16 AX nodes power the primary compute cluster. It includes the Azure
Stack HCI cluster, redundant top-of-rack (ToR) switches, a separate out-of-band (OOB) network, and an existing management
infrastructure in the data center.
Solution integration and network configurations
Dell Technologies recommends the following network configurations for management, storage, and compute/VM traffic when
creating clusters using Windows Admin Center:
Table 1. Recommended network configurations

Options                                               Minimum number of physical network ports   Number of virtual switches
Management (a) SET + Compute SET + Physical Storage   6                                          2
Management SET + Storage (b) SET + Compute SET        6                                          3
Management SET + Storage & Compute (c) SET            4                                          2

a. Dell Technologies recommends using 1 GbE, 10 GbE, or 25 GbE rNDC/OCP network ports for management traffic.
b. Dell Technologies recommends using 25 GbE RDMA or 100 GbE RDMA for storage traffic.
c. Dell Technologies recommends using 10 GbE or 25 GbE rNDC/OCP/RDMA network ports for compute traffic.
Table 2. Data Center Bridging (DCB) settings

Network card on node   Fully converged switch topology                   Nonconverged switch topology
Mellanox (RoCE)        DCB (required)                                    DCB (required for storage adapters only)
QLogic (iWARP)         DCB (required for All-NVMe configurations only)   No DCB
For more information about network configurations for Azure Stack HCI clusters using Windows Admin Center, see Network
Integration and Host Network Configuration Options.
NOTE: A Management SET can be achieved only with two physical network adapter ports.
NOTE: Dell Technologies recommends that you use more than a single physical adapter port for management.
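As an illustration of what the DCB requirement for RoCE storage adapters typically involves on the host side, the following
PowerShell sketch tags SMB Direct traffic with a priority and reserves bandwidth for it. This is a minimal sketch only; the
priority value, bandwidth percentage, and adapter names are assumptions and must match the switch configuration.

# Host-side DCB sketch for RoCE storage adapters (priority, bandwidth, and adapter names are assumptions).
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "SLOT 3 Port 1", "SLOT 3 Port 2"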
Solution deployment
This chapter presents the following topics:
Topics:
Introduction
Deployment prerequisites
Predeployment configuration
Deploying and configuring a host cluster using Windows Admin Center
Postdeployment configuration
Introduction
Dell EMC Integrated System for Microsoft Azure Stack HCI can be deployed in the following ways:
Manual operating system deployment—Begin by manually installing the operating system on AX nodes and proceed with the
solution deployment as described in the Dell EMC Microsoft HCI Solutions Deployment Guide.
By using Windows Admin Center with the Dell EMC OpenManage Integration with Microsoft Windows Admin Center
(OMIMSWAC) extension—Create a cluster with the Azure Stack HCI operating system using the Windows Admin Center
cluster creation wizard.
NOTE: Instructions in this deployment guide are applicable only to the Microsoft Azure Stack HCI operating system.
NOTE: This deployment guide covers Azure Stack HCI cluster creation with all servers in one site. To create a stretched
cluster, see Dell EMC Integrated System for Microsoft Azure Stack HCI: Stretched Cluster Deployment Reference
Architecture Guide.
NOTE: Some of the post-deployment tasks in this guide require running one or more PowerShell commands. Run these
commands to complete the deployment tasks for a fully functional Azure Stack HCI cluster.
Deployment prerequisites
Dell Technologies assumes that the management services that are required for the operating system deployment and cluster
configuration are in the existing infrastructure where the Azure Stack HCI cluster is being deployed.
The following table describes the management services:
Table 3. Management services

Management service                           Purpose                 Required/optional
Active Directory                             User authentication     Required
Domain Name System                           Name resolution         Required
Dynamic Host Configuration Protocol (DHCP)   IP address assignment   Optional
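Before deployment, it can be useful to confirm from the management station or a node that these services are reachable. The
following PowerShell sketch is illustrative only; the domain and domain controller names are assumptions.

# Illustrative reachability checks for the management services (names are assumptions).
Resolve-DnsName -Name "mydomain.com"                             # DNS name resolution
Test-NetConnection -ComputerName "dc01.mydomain.com" -Port 389   # Active Directory (LDAP)
Get-NetIPAddress -AddressFamily IPv4 |
    Select-Object InterfaceAlias, IPAddress, PrefixOrigin        # DHCP-assigned vs. static addresses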
Predeployment configuration
Complete the following predeployment configurations before deploying the Azure Stack HCI solution:
Installing Windows Admin Center and OMIMSWAC
Windows Admin Center build 2103 or later is available from the Microsoft Download Center and can be installed on any
computer or VM running Windows 10, Windows Server 2016, Windows Server 2019, or Windows Server version 1709. Windows
Admin Center can also be installed directly on a managed node to manage itself. You can implement high availability for
Windows Admin Center by using failover clustering. When Windows Admin Center is deployed on nodes in a failover cluster, it
acts as an active/passive cluster, providing a highly available Windows Admin Center instance.
NOTE: The Windows Admin Center management computer should be on the same Active Directory domain where you
create the cluster or alternatively, on a fully trusted domain.
The Windows Admin Center installer wizard performs the configuration tasks that are required for Windows Admin Center
functionality. These tasks include creating a self-signed certificate and configuring trusted hosts for remote node access.
Optionally, you can supply the certificate thumbprint that is already present in the target node local certificate store. By default,
Windows Admin Center listens on port 443—you can change this port during the installation process.
NOTE: The automatically generated self-signed certificate expires in 60 days. Ensure that you use a certificate authority
(CA)-provided SSL certificate if you intend to use Windows Admin Center in a production environment.
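The installer can also be run unattended from an elevated prompt. The following is a minimal sketch; the installer file name,
log file name, and port are assumptions, and the generated self-signed certificate shown here is appropriate only for
evaluation environments.

# Unattended Windows Admin Center installation (file name, log name, and port are assumptions).
msiexec /i WindowsAdminCenter.msi /qn /L*v wac-install.log SME_PORT=443 SSL_CERTIFICATE_OPTION=generate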
For complete guidance on installing Windows Admin Center on an Azure Stack HCI operating system with desktop experience or
Server Core, see Install Windows Admin Center.
After the installation is complete, you can access Windows Admin Center at https://managementstationname:<PortNumber>
and install the Dell EMC OpenManage Integration with Microsoft Windows Admin Center (OMIMSWAC) extension. For more
information about the installation procedure, see the "Installing Dell EMC OpenManage Integration with Microsoft Windows
Admin Center" section in the Dell EMC OpenManage Integration with Microsoft Windows Admin Center Installation Guide.
Azure subscription
The Azure Stack HCI solution requires registration with an Azure subscription within 30 days of deployment. After the Azure
Stack HCI cluster is registered, all the hybrid services that are supported as part of the integration can be used along with the
hyperconverged infrastructure services. To complete Azure registration, you need Azure authentication credentials, a
subscription ID, and, optionally, a tenant ID.
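Registration is typically completed after the cluster is created, either from Windows Admin Center or with the Az.StackHCI
PowerShell module. The following sketch shows the PowerShell path; the subscription ID, tenant ID, and node name are
placeholders.

# Illustrative Azure registration using the Az.StackHCI module (IDs and the node name are placeholders).
Install-Module -Name Az.StackHCI
Register-AzStackHCI -SubscriptionId "<SubscriptionId>" -TenantId "<TenantId>" -ComputerName "<ClusterNodeName>"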
Configuring network switches
Based on the selected network topology from the recommended configurations, configure the top-of-rack (ToR) network
switches to enable storage and VM/management traffic. A standard Storage Spaces Direct deployment requires three basic
types of networks: out-of-band (OOB) management, host management, and storage.
For sample switch configurations, see Sample Network Switch Configuration Files.
For configuration choices and instructions about different network topologies and host network configurations, see Network
Integration and Host Network Configuration Options.
Deploying the Azure Stack HCI operating system
These instructions are for manual deployment of the Azure Stack HCI operating system on AX nodes. Unless specified
otherwise, perform the steps on each physical node in the infrastructure that will be a part of Azure Stack HCI.
Manual operating system deployment
Dell Lifecycle Controller and integrated Dell Remote Access Controller (iDRAC) provide options for operating system
deployment. Options include manual installation or unattended installation by using virtual media and the operating system
deployment feature in Dell Lifecycle Controller.
The step-by-step procedure for deploying the operating system is not within the scope of this guide. The remainder of this
guide assumes that:
The Microsoft Azure Stack HCI operating system installation on the physical server is complete.
You have access to the iDRAC virtual console of the physical server.
NOTE: For information about installing the operating system using the iDRAC virtual media feature, see the "Using the
Virtual Media function on iDRAC 6, 7, 8 and 9" Knowledge Base article.
NOTE: The Azure Stack HCI operating system is based on Server Core and does not have the full user interface. For
information about using sconfig in the Azure Stack HCI operating system, see Deploy the Azure Stack HCI operating
system.
Enabling the firewall rule
If the nodes do not have a fully qualified domain name, configure the firewall rule on each node so that the Windows Remote
Management (WinRM) port allows inbound traffic. Setting this rule allows Windows Admin Center to reach the nodes and add
them in the cluster creation wizard.
Run the following PowerShell command as an administrator on each node:
Set-NetFirewallRule -Name WINRM-HTTP-In-TCP-PUBLIC -RemoteAddress Any
NOTE: The best practice is to specify the Windows Admin Center IP address as the value of the -RemoteAddress parameter.
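For example, a sketch of the more restrictive form, assuming the Windows Admin Center management station uses the
address 192.168.100.10:

# Allow WinRM inbound traffic only from the Windows Admin Center station (address is an assumption).
Set-NetFirewallRule -Name WINRM-HTTP-In-TCP-PUBLIC -RemoteAddress 192.168.100.10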
Enabling CredSSP
Perform the following steps to enable CredSSP on the management station where Windows Admin Center is installed.
1. From a PowerShell window, run gpedit.
2. In the Group Policy Editor window, go to Computer Configuration > Administrative Templates > System >
Credentials Delegation.
3. Select Allow delegating fresh credentials with NTLM-only server authentication and enable it.
4. Under Options, add servers to the list by clicking Show...
5. Add a fully qualified domain name entry that begins with wsman/hostname.mydomain.com (single host entry) or
wsman/*.mydomain.com (wild card entry for all the hosts in this domain).
6. Apply the settings.
7. Run gpupdate /force in the PowerShell window.
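A similar client-side delegation can also be configured with PowerShell. Note that Enable-WSManCredSSP sets the standard
fresh-credentials policy rather than the NTLM-only variant described above, so treat the following as an illustrative
alternative; the domain wildcard is an assumption.

# Enable client-side CredSSP delegation to all hosts in the domain (wildcard is an assumption).
Enable-WSManCredSSP -Role Client -DelegateComputer "*.mydomain.com" -Force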
Deploying and configuring a host cluster using
Windows Admin Center
After fulfilling the prerequisites to create an Azure Stack HCI cluster that uses Storage Spaces Direct, you can start creating a
host cluster using the Windows Admin Center cluster creation wizard. For more information about this procedure, see Create an
Azure Stack HCI cluster using Windows Admin Center.
Cluster Creation wizard
The Cluster Creation wizard guides you through the rest of the process of creating an Azure Stack HCI cluster. To create a
cluster with all the servers primarily in one site, complete the following pages in the wizard:
1. Get started
2. Networking
3. Clustering
4. Storage
Get started
About this task
Complete the following categories under the Get started page:
Steps
1. Check the prerequisites—Lists the prerequisites on the Windows Admin Center system, servers, and network. If you
comply with them all, click Next.
2. Add servers—Enter the username and password for your administrator account, then enter the computer name, IPv4
address, or fully qualified domain name (FQDN) of each server and click Add. Click Next once the servers are shown to have
been successfully added.
Ensure that either your servers have an FQDN or that you have set the firewall rule to allow inbound traffic to the servers.
3. Join a domain—Specify the Active Directory domain name, domain username, and password to join the nodes to the
domain. To change a server name after it joins the domain, enter the new name in the New name field and click Apply
changes. Click Next to move to the next category.
NOTE: If you want to change the node name, ensure that you enter the new node name before performing a
consolidated restart in the subsequent steps.
4. Install features—This step installs features such as Hyper-V, Failover Clustering, Data Center Bridging, Data Deduplication,
BitLocker Drive Encryption, and the Active Directory and Hyper-V modules for Windows PowerShell. Click Install features
and click Next once the installation is complete.
5. Install updates—This optional step installs the latest security and quality updates for the operating system. Click Install
updates to install the updates shown, then click Next once the updates are completed.
6. Install hardware updates—Uses OMIMSWAC to update the target nodes with validated firmware, BIOS, and drivers. This
step is recommended to achieve the required performance and support. Click Get updates to start the process.
The Integrated Cluster Deploy and Update (IDU) feature in OMIMSWAC allows you to perform hardware symmetry checks
and update the target nodes while creating the cluster. Because this feature is integrated with the Windows Admin Center
cluster creation workflow, the nodes restart only once, if a restart is required, after both the operating system and hardware
updates are complete.
NOTE: This feature is only supported on the Azure Stack HCI operating system.
NOTE: Completing the Install hardware updates step is mandatory for enabling RDMA in the later steps.
To complete the hardware updates, perform the following steps:
a. Prerequisites—Review these to ensure that all nodes are ready to perform hardware symmetry checks and updates.
When finished, click HCI configuration profile.
NOTE: If any node is not a valid model or the node does not contain a valid Dell EMC OpenManage Premium License,
you cannot proceed further with hardware symmetry checks. For more information about hardware symmetry, see
Support for Dell EMC OpenManage Integration with Microsoft Windows Admin Center.
b. HCI configuration profile—Review the hardware configuration listed under each category to ensure configuration
across the nodes is the same and that they are supported for an Azure Stack HCI cluster. For more information, see the
Hardware symmetry configurations examples. When finished, click Next: Update Source.
c. Update Source—Generate the compliance report on firmware and drivers by selecting online or offline catalog from
the dropdown list. The Online - Dell EMC Azure Stack HCI Solution Catalog is selected by default. Click Next:
Compliance Report to generate the compliance report. For more information about selecting the offline catalog
and configuring DSU and IC tools in the settings tab, see the "Viewing update compliance and updating the cluster"
section in the Operations Guide (Microsoft HCI Solutions from Dell Technologies: Managing and Monitoring the Solution
Infrastructure Life Cycle Operations Guide).
d. Compliance Report—By default, all the noncompliant upgradable components are selected for an update. Click Next:
Summary.
e. Summary—Review the component selections and click Next: Update. A message prompts you to enable Credential
Security Service Provider (CredSSP).
f. Update—Select Update to start upgrading BIOS, firmware, and driver components.
7. Restart servers—This action is combined for all the updates and features installations that require a restart to be applied.
Ensure you have made the changes that you want to apply and then click Restart servers. When the process is complete,
click Next: Networking.
Networking
About this task
This step allows you to configure network adapters and virtual switches.
Steps
1. Check network adapters—In this step you can enable, disable, include, and exclude network adapters based on your
network topology for management, storage, and compute. Click Refresh if you make changes to the network adapters
outside Windows Admin Center. Click Next to move to the next page.
NOTE: Ensure that you disable or exclude network adapters that are not used in this deployment. The USB NIC is
excluded automatically to avoid interruptions in the subsequent steps.
2. Select the management adapters—It is mandatory to have at least one dedicated physical NIC for management. Any
1 Gb, 10 Gb, or 25 Gb physical adapter is acceptable for management traffic. Select at least two network ports that will
be teamed and dedicated for management traffic. Only static IP address assignments are supported for teamed network
adapters. If one or both adapters have DHCP IP addressing enabled, DHCP IP is converted to a static IP address before
creating a virtual switch. Select from the list of servers and then click Apply and test. Click Yes to any subsequent popup
windows. Once the process is complete, click Next.
NOTE: It is not recommended to use just one physical network adapter for management.
NOTE: The recommended network configurations are shown in the "Solution integration and network configurations"
section.
3. Virtual switch—Use this step to select the virtual switch configuration option for storage and compute/VM traffic. You
can choose to skip this step by ticking the Skip Virtual switch creation box. Choose your preferred configuration from the
list (only the options compatible with your network are shown for selection):
a. Create one virtual switch for storage and compute/VM traffic together—Allows you to choose the adapters
to create a SET for storage and compute/VM traffic. This SET requires at least one RDMA capable physical network
adapter for storage traffic.
b. Create one virtual switch for compute/VM traffic only—Allows you to choose adapters to create a SET for
compute/VM traffic only. Storage traffic will require a separate RDMA-capable physical network adapter.
c. Create two virtual switches—Allows you to choose adapters to create a SET for compute/VM traffic and a SET for
storage traffic. The Compute/VM SET can use any available 10 Gb/25 Gb network ports. The Storage SET requires at
least one RDMA-capable physical network adapter.
Under the Advanced dropdown, choose the virtual switch name and the load-balancing algorithm. It is recommended that
you use the default Hyper-V Port load-balancing algorithm. Click Apply and test. Once the changes have been successfully
applied, click Next: Clustering.
4. RDMA—The wizard verifies whether RDMA is supported in your setup. If it is supported, enable RDMA. You can also
configure Data Center Bridging in this step. A PowerShell sketch of an equivalent SET and RDMA configuration appears after
these steps.
NOTE: Ensure that you have completed the Install hardware updates step on the Get started page (step 6); otherwise,
RDMA provisioning is not available.
5. Define networks—Use this step to configure network adapters for storage and VM traffic. Define static IPs, subnet masks,
valid VLAN IDs, and the default gateway for the network adapters. Also verify that the network adapters of each server
have a unique static IP address, subnet mask, and a valid VLAN ID. This step checks the network connectivity between all
adapters with the same subnet masks and VLAN IDs. Once the changes have been successfully applied, click Next.
NOTE: The status column may show errors that require you to reapply and retest the connections.
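For reference, the following is a minimal PowerShell sketch of a comparable converged configuration to what the wizard
produces: a Switch Embedded Teaming (SET) virtual switch with the Hyper-V Port load-balancing algorithm and RDMA
enabled on the storage virtual adapters. The physical adapter and vNIC names are assumptions.

# Create a SET virtual switch over two physical ports (adapter names are assumptions).
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "SLOT 3 Port 1", "SLOT 3 Port 2" -EnableEmbeddedTeaming $true -AllowManagementOS $true
Set-VMSwitchTeam -Name "ConvergedSwitch" -LoadBalancingAlgorithm HyperVPort

# Add host virtual adapters for storage traffic and enable RDMA on them (names are assumptions).
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "vSMB1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "vSMB2"
Enable-NetAdapterRDMA -Name "vEthernet (vSMB1)", "vEthernet (vSMB2)"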
Clustering
About this task
Complete the following categories on the Clustering page:
Steps
1. Validate cluster—This step validates whether the servers are suitable for clustering. Click Validate to generate an HTML
report of all the validations performed, including a summary. Review this report to check for warnings and for passed and
failed validations. Make the required changes and validate again until all the checks pass. When ready, click Next. A
PowerShell sketch of the equivalent validation and cluster creation commands appears after these steps.
NOTE: If the CredSSP prompt displays, select Yes to enable CredSSP for the wizard temporarily. You can disable it
after your cluster is created to increase security.
2. Create cluster—This step creates a cluster and makes it ready for enabling Storage Spaces Direct.
a. Add a cluster name in the Cluster name field (required).
b. Under the Advanced dropdown, select either Use all networks (recommended) or Specify one or more networks
not to use. If you choose the latter, type in the network name and click Add.
c. Under the Advanced dropdown, select either Assign dynamically with DHCP (recommended) or Specify one or
more static addresses. If you choose the latter, type in the IP address and click Add.
d. Click Create cluster. This process may take a few minutes.
3. Click Next: Storage.
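For reference, the following is a minimal PowerShell sketch of the validation and cluster creation that the wizard performs;
the node names, cluster name, and static address are placeholders.

# Validate the candidate nodes, including the Storage Spaces Direct tests (names are placeholders).
Test-Cluster -Node "Node1", "Node2", "Node3", "Node4" -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
# Create the cluster without adding eligible storage; Storage Spaces Direct is enabled later.
New-Cluster -Name "HCICluster" -Node "Node1", "Node2", "Node3", "Node4" -StaticAddress 192.168.100.20 -NoStorage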
Storage
About this task
Complete the following categories on the Storage page:
Steps
1. Clean drives—Cleans the drives in the cluster. Ensure that you clean the drives if you are redeploying Storage Spaces
Direct using these drives. When finished, click Next.
2. Verify drives—Verifies that all the drives are connected and working. Expand the rows to confirm that all the drives
appear. If any drives are missing or not listed, ensure that they are properly connected and working. When finished,
click Next.
3. Validate Storage—Validates whether the storage is suitable for Storage Spaces Direct. It produces a report containing the
validation results of each test. Ensure that all the tests have passed successfully. When finished, click Next.
4. Enable Storage Spaces Direct—Keep the cache drive selection at the default setting, and then click Enable. After Storage
Spaces Direct has been enabled on the cluster, a report is generated. Review this report and verify that all the validations
completed without errors. Click Next: SDN and then click Skip. A PowerShell sketch of the equivalent enablement command
appears at the end of this section.
Results
The wizard completion page is displayed.
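For reference, the following is a minimal PowerShell sketch of the Storage Spaces Direct enablement that the wizard
performs; the node name is a placeholder.

# Enable and then inspect Storage Spaces Direct on the new cluster (node name is a placeholder).
Enable-ClusterStorageSpacesDirect -CimSession "Node1"
Get-ClusterStorageSpacesDirect -CimSession "Node1"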
Postdeployment configuration
Perform the following postdeployment configurations on your nodes:
Enable RDMA for the storage adapters. Run the following command with the relevant parameter:
Enable-NetAdapterRDMA -Name "vSMB1", "vSMB2"
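To confirm that RDMA is enabled and operational on the storage adapters, a quick check such as the following can help;
the adapter names follow the example above.

# Verify that RDMA is enabled on the storage virtual adapters and visible to SMB.
Get-NetAdapterRdma -Name "vSMB1", "vSMB2"
Get-SmbClientNetworkInterface | Where-Object RdmaCapable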
To help ensure that the active memory dump is captured if a fatal system error occurs, allocate enough space for the
pagefile. It is recommended that you allocate at least 40 GB plus the size of the CSV block cache. Follow these steps to do
so:
1. Determine the cluster CSV block cache size value by running the following command:
$blockCacheMB = (Get-Cluster).BlockCacheSize
2. Run the following commands to turn off automatic page file management and update the page file settings:

$blockCacheMB = (Get-Cluster).BlockCacheSize
$pageFilePath = "C:\pagefile.sys"
# Target size: 40 GB (40960 MB) plus the CSV block cache size (in MB).
$initialSize = [Math]::Round(40960 + $blockCacheMB)
$maximumSize = [Math]::Round(40960 + $blockCacheMB)

# Disable automatic page file management so that the custom size takes effect.
$system = Get-WmiObject -Class Win32_ComputerSystem -EnableAllPrivileges
if ($system.AutomaticManagedPagefile) {
    $system.AutomaticManagedPagefile = $false
    $system.Put()
}

# Update the existing page file setting if it already points to the target path;
# otherwise, remove any other page file setting and create one at the target path.
$currentPageFile = Get-WmiObject -Class Win32_PageFileSetting
if ($currentPageFile -and ($currentPageFile.Name -eq $pageFilePath)) {
    $currentPageFile.InitialSize = $initialSize
    $currentPageFile.MaximumSize = $maximumSize
    $currentPageFile.Put()
}
else {
    if ($currentPageFile) { $currentPageFile.Delete() }
    Set-WmiInstance -Class Win32_PageFileSetting -Arguments @{
        Name = $pageFilePath; InitialSize = $initialSize; MaximumSize = $maximumSize }
}
Set the NetworkDirect Technology advanced property of the QLogic network adapters to 'iWarp' and of the Mellanox
network adapters to 'host-in-charge'. Run the following PowerShell commands with the relevant parameter values based on
your network adapter configuration:
For QLogic network adapters:
Get-NetAdapter -InterfaceDescription '*Qlogic*' | Set-NetAdapterAdvancedProperty -DisplayName 'NetworkDirect Technology' -DisplayValue 'iWarp'
For Mellanox network adapters:
Get-NetAdapter -InterfaceDescription '*Mellanox*' | Set-NetAdapterAdvancedProperty -DisplayName 'NetworkDirect Technology' -DisplayValue 'host-in-charge'
Configure a cluster witness for your cluster. It can be a file share or a cloud-based witness.
NOTE: If you choose to configure a file share witness, it should exist outside the two-node cluster. For information
about configuring a cloud-based witness, see "Deploy a Cloud Witness for a Failover Cluster".
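The following PowerShell sketch shows both witness options; the file share path, storage account name, and access key are
placeholders.

# File share witness (the share must exist outside the cluster; path is a placeholder).
Set-ClusterQuorum -FileShareWitness "\\fileserver\HCIWitness"
# Or a cloud witness (storage account name and access key are placeholders).
Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<AccessKey>"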
Clusters that are deployed using the Azure Stack HCI operating system must be onboarded to Microsoft Azure for full
functionality and support. For more information, see Connect Azure Stack HCI to Azure.
For management and operations guidance, see Operations Guide—Managing and Monitoring the Solution Infrastructure Life
Cycle.
References
This chapter presents the following topics:
Topics:
Dell Technologies documentation
Microsoft documentation
Dell Technologies documentation
The following links provide additional information from Dell Technologies:
Support Matrix for Microsoft HCI Solutions
Managing and Monitoring the Solution Infrastructure Life Cycle Operations Guide
Microsoft documentation
The following links provide additional information about Azure Stack HCI clusters and Storage Spaces Direct:
Azure Stack HCI deployment overview
Storage Spaces Direct overview