Syncro® CS 9361-8i and Syncro CS 9380-8e Solution
User Guide
Version 2.0
October 2014
55411-00, Rev. B
For a comprehensive list of changes to this document, see the Revision History.
Avago Technologies, the A logo, LSI, and Storage by LSI, Syncro, MegaRAID, MegaRAID Storage Manager, CacheCade,
and CacheVault are trademarks of Avago Technologies in the United States and other countries. All other brand and
product names may be trademarks of their respective companies.
Data subject to change. Copyright © 2014 Avago Technologies. All Rights Reserved.
Corporate Headquarters: San Jose, CA
Phone: 800-372-2447
Email: globalsuppor[email protected]
Website: www.lsi.com
Table of Contents
Chapter 1: Introduction
  1.1 Concepts of High-Availability DAS
  1.2 HA-DAS Terminology
  1.3 Syncro CS 9361-8i and Syncro CS 9380-8e Solution Features
  1.4 Hardware Compatibility
  1.5 Overview of Cluster Setup, Planned Failovers, and Firmware Updates
  1.6 Performance Considerations
  1.7 Known Third-Party Issues
    1.7.1 Non-shared VD is Pulled into Windows Operating System Cluster During Cluster Creation
    1.7.2 Delayed Write Failed Error During IO Stress Test
    1.7.3 Remote IO Failure Observed in SLES11 SP2 While Removing the SAS Cables of the Owner Node
Chapter 2: Creating the Cluster
  2.1 Creating Virtual Drives on the Controller Nodes
    2.1.1 Creating Shared or Exclusive VDs with the CTRL-R Utility
    2.1.2 Selecting Additional Virtual Drive Properties
    2.1.3 Creating Shared or Exclusive VDs with StorCLI
    2.1.4 Creating Shared or Exclusive VDs with MSM
  2.2 Creating the Cluster in Windows
    2.2.1 Prerequisites for Cluster Setup
    2.2.2 Creating the Failover Cluster
    2.2.3 Validating the Failover Cluster Configuration
  2.3 Creating the Cluster in Red Hat Enterprise Linux (RHEL) and CentOS
    2.3.1 Prerequisites for Cluster Setup
    2.3.2 Creating the Cluster
    2.3.3 Configure the Logical Volumes and Apply GFS2 File System
    2.3.4 Add a Fence Device
    2.3.5 Create a Failover Domain
    2.3.6 Add Resources to the Cluster
    2.3.7 Create a Quorum Disk
    2.3.8 Create Service Groups
    2.3.9 Mount the NFS Resource from the Remote Client
  2.4 Creating the Cluster in SuSE Linux Enterprise Server (SLES)
    2.4.1 Prerequisites for Cluster Setup
    2.4.2 Creating the Cluster
    2.4.3 Bringing the Cluster Online
    2.4.4 Configuring the NFS Resource with STONITH SBD Fencing
    2.4.5 Adding NFS Cluster Resources
    2.4.6 Mounting NFS in the Remote Client
Chapter 3: System Administration
  3.1 High Availability Properties
  3.2 Understanding Failover Operations
    3.2.1 Understanding and Using Planned Failover
    3.2.2 Understanding Unplanned Failover
  3.3 Updating the Syncro CS Controller Firmware
  3.4 Updating the MegaRAID Driver
    3.4.1 Updating the MegaRAID Driver in Windows Server 2008 R2
    3.4.2 Updating the MegaRAID Driver in Windows Server 2012
    3.4.3 Updating the Red Hat Linux System Driver
    3.4.4 Updating the SuSE Linux Enterprise Server 11 Driver
  3.5 Performing Preventive Measures on Disk Drives and VDs
Chapter 4: Troubleshooting
  4.1 Verifying HA-DAS Support in Tools and the OS Driver
  4.2 Confirming SAS Connections
    4.2.1 Using Ctrl-R to View Connections for Controllers, Expanders, and Drives
    4.2.2 Using StorCLI to Verify Dual-Ported SAS Addresses to Disk Drives
    4.2.3 Using MSM to Verify Dual-Ported SAS Addresses to Disk Drives
  4.3 Handling Pinned Cache on Both Nodes
  4.4 Error Situations and Solutions
  4.5 Event Messages and Error Messages
Chapter 1: Introduction
This document explains how to set up high-availability direct-attached storage (HA-DAS) clustering on a Syncro CS
9361-8i and Syncro CS 9380-8e configuration after you configure the hardware and install the operating system.
The Syncro CS solution provides fault tolerance capabilities as a key part of a high-availability data storage system. The
Syncro CS solution combines redundant servers, Avago HA-DAS RAID controllers, computer nodes, cable connections,
common SAS JBOD enclosures, and dual-ported SAS storage devices.
The redundant components and software technologies provide a high-availability system with ongoing service that is
not interrupted by the following events:
- The failure of a single internal node does not interrupt service because the solution has multiple nodes with cluster failover.
- An expander failure does not interrupt service because the dual expanders in every enclosure provide redundant data paths.
- A drive failure does not interrupt service because RAID fault tolerance is part of the configuration.
- A system storage expansion or maintenance activity can be completed without requiring an interruption of service because of redundant components, management software, and maintenance procedures.
1.1 Concepts of High-Availability DAS
In terms of data storage and processing, High Availability (HA) means a computer system design that ensures a high
level of operational continuity and data access reliability over a long period of time. High-availability systems are
critical to the success and business needs of small and medium-sized business (SMB) customers, such as retail outlets
and health care offices, who cannot afford to have their computer systems go down. An HA-DAS solution enables
customers to maintain continuous access to and use of their computer system. Shared direct-attached drives are
accessible to multiple servers, thereby maintaining ease of use and reducing storage costs.
A cluster is a group of computers working together to run a common set of applications and to present a single logical
system to the client and application. Failover clustering provides redundancy to the cluster group to maximize up-time
by utilizing fault-tolerant components. In the example of two servers with shared storage that comprise a failover
cluster, when a server fails, the failover cluster automatically moves control of the shared resources to the surviving
server with no interruption of processing. This configuration allows seamless failover capabilities in the event of
planned failover (maintenance mode) for maintenance or upgrade, or in the event of a failure of the CPU, memory, or
other server failures.
The Syncro CS solution is specifically designed to provide HA-DAS capabilities for a class of server chassis that include
two server motherboards in one chassis. This chassis architecture is often called a cluster in a box (CiB).
Because multiple initiators exist in a clustered pair of servers (nodes) with a common shared storage domain, there is a
concept of device reservations in which physical drives, drive groups, and virtual drives (VDs) are managed by a
selected single initiator. For HA-DAS, I/O transactions and RAID management operations are normally processed by a
single Syncro CS 9361-8i controller or Syncro CS 9380-8e controller, and the associated physical drives, drive groups,
and VDs are only visible to that controller. To assure continued operation, all other physical drives, drive groups, and
VDs are also visible to, though not normally controlled by, the Syncro CS controller. This key functionality allows the
Syncro CS 9361-8i and Syncro CS 9380-8e solution to share VDs among multiple initiators as well as exclusively
constrain VD access to a particular initiator without the need for SAS zoning.
Node downtime in an HA system can be either planned or unplanned. Planned node downtime is the result of
management-initiated events, such as upgrades and maintenance. Unplanned node downtime results from events
that are not within the direct control of IT administrators, such as failed software, drivers, or hardware. The Syncro CS
9361-8i and Syncro CS 9380-8e solution protects your data and maintains system up-time from both planned and
unplanned node downtime. Also, it enables you to schedule node downtime to update hardware or firmware, and so
on. When you bring one controller node down for scheduled maintenance, the other node takes over with no
interruption of service.
1.2 HA-DAS Terminology
This section defines some additional important HA-DAS terms.
- Cache Mirror: A cache coherency term describing the duplication of write-back cached data across two controllers.
- Exclusive Access: A host access policy in which a VD is only exposed to, and accessed by, a single specified server.
- Failover: The process in which the management of drive groups and VDs transitions from one controller to the peer controller to maintain data access and availability.
- HA Domain: A type of storage domain that consists of a set of HA controllers, cables, shared disk resources, and storage media.
- Peer Controller: A relative term to describe the HA controller in the HA domain that acts as the failover controller.
- Server/Controller Node: A processing entity composed of a single host processor unit or multiple host processor units that is characterized by having a single instance of a host operating system.
- Server Storage Cluster: An HA storage topology in which a common pool of storage devices is shared by two computer nodes through dedicated Syncro CS 9361-8i and Syncro CS 9380-8e controllers.
- Shared Access: A host access policy in which a VD is exposed to, and can be accessed by, all servers in the HA domain.
- Virtual Drive (VD): A storage unit created by a RAID controller from one or more physical drives. Although a virtual drive can consist of multiple drives, it is seen by the operating system as a single drive. Depending on the RAID level used, the virtual drive might retain redundant data in case of a drive failure.
1.3 Syncro CS 9361-8i and Syncro CS 9380-8e Solution Features
The Syncro CS 9361-8i and Syncro CS 9380-8e solution supports the following HA features.
- Server storage cluster topology, enabled by the following supported operating systems:
  - Microsoft® Windows Server® 2008 R2
  - Microsoft Windows Server 2008 R2 SP1
  - Microsoft Windows Server 2012
  - Microsoft Windows Server 2012 R2
  - Microsoft Windows Storage Server 2012
  - Microsoft Windows Storage Server 2012 R2
  - Red Hat® Enterprise Linux® 6.3
  - Red Hat Enterprise Linux 6.4
  - CentOS® 6.5
  - SuSE® Linux Enterprise Server 11 SP3
  - SuSE Linux Enterprise Server 11 SP2
- Clustering/HA services support:
  - Microsoft failover clustering
  - Red Hat High Availability Add-on
  - SuSE High Availability Extensions
- Dual-active HA with shared storage
- Controller-to-controller intercommunication over SAS
- Write-back cache coherency
- Shared and exclusive VD I/O access policies
- Operating system boot from the controller (exclusive access)
- Controller hardware and property mismatch detection, handling, and reporting
- Global hot spare support for all volumes in the HA domain
- Planned and unplanned failover modes
- CacheVault® protection of cached data in case of host power loss or server failure
- The Auto Enhanced Import feature, which is enabled by default and automatically imports foreign configurations
- Full MegaRAID® features, with the following exceptions:
  - T10 Data Integrity Field (DIF) is not supported.
  - CacheCade® is not supported.
  - Dimmer switch functionality is not supported.
  - SGPIO sideband signaling for enclosure management is not supported.
  - SATA drives are not supported.
  - SAS drives that do not support SCSI-3 persistent reservations (PR) for the VDs are not supported.
  - System/JBOD physical drives are not supported (that is, the individual physical drives are not exposed to the operating system).
  - Drives that are directly attached to the controller (not through an expander device) are not supported.
  - Cluster-active reconstruction operations (RAID-Level Migration or Online Capacity Expansion) are not supported.
  - Patrol Read operations that were in progress do not resume after failover.
  - Firmware-level node incompatibility details are not reported for non-premium features.
  - The Maintain Pd Fail History feature is not supported. This feature, which is available in the WebBIOS utility and the MegaRAID Command Tool, maintains the history of all drive failures.
  - Cache memory recovery is not supported for I/O-shipped commands. I/O shipping occurs when a cluster node has a problem in the I/O path, and the I/O from that cluster node is shipped to the other cluster node.
  - Battery backup units are not supported.
  - HA-DAS does not support configuration of a global hot spare (GHS) when no VDs exist on the two nodes. Configuring a GHS when no VDs exist on the two nodes and then rebooting both nodes can cause problems.
1.4 Hardware Compatibility
The servers, disk drives, and optional JBOD enclosures you use in the Syncro CS 9361-8i and Syncro CS 9380-8e
solution must be selected from the list of approved components that Avago has tested for compatibility. Refer to the
web page for the compatibility lists at http://www.lsi.com/channel/support/pages/interoperability.aspx.
1.5 Overview of Cluster Setup, Planned Failovers, and Firmware Updates
Chapter 2 explains how to set up HA-DAS clustering on a Syncro CS 9361-8i configuration or on a Syncro CS 9380-8e
configuration after you configure the hardware and install the operating system.
Chapter 3 explains how to perform system administration tasks, such as planned failovers and updates of the Syncro
CS 9361-8i and Syncro CS 9380-8e controller firmware.
Chapter 4 has information about troubleshooting a Syncro CS system.
Refer to the Syncro CS 9361-8i and Syncro CS 9380-8e Controllers User Guide on the Syncro CS Resource CD for
instructions on how to install the Syncro CS controllers and connect them by cable to the CiB enclosure.
1.6 Performance Considerations
SAS technology offers throughput-intensive data transfers and low latency times. Throughput is crucial during
failover periods where the system needs to process reconfiguration activity in a fast, efficient manner. SAS offers a
throughput rate of 124 Gb/s over a single lane. SAS controllers and enclosures typically aggregate 4 lanes into an x4
wide link, giving an available bandwidth of 48 Gb/s across a single connector, which makes SAS ideal for HA
environments.
Syncro CS controllers work together across a shared SAS fabric to achieve sharing, cache coherency, heartbeat
monitoring, and redundancy by using a set of protocols to carry out these functions. At any point in time, a particular
VD is accessed or owned by a single controller. This owned VD is termed a local VD. The second controller is aware of
the VD on the first controller, but it has only indirect access to the VD. The VD is a remote VD for the second controller.
In a configuration with multiple VDs, the workload is typically balanced across controllers to provide a higher degree
of efficiency.
When a controller requires access to a remote VD, the I/Os are shipped to the remote controller, which processes the
I/O locally. I/O requests that are handled by local VDs are much faster than those handled by remote VDs.
The preferred configuration is for the controller to own the VD that hosts the clustered resource (the MegaRAID
Storage Manager™ utility shows which controller owns this VD). If the controller does not own this VD, it must issue a
request to the peer controller to ship the data to it, which affects performance. This situation can occur if the cluster
has been configured incorrectly or if the system is in a failover situation.
NOTE Performance tip: You can reduce the impact of I/O shipping by
locating the VD or drive groups with the server node that is primarily
driving the I/O load. Avoid drive group configurations with multiple
VDs whose I/O load is split between the server nodes.
MSM has no visibility to remote VDs, so all VD management operations must be performed locally. A controller that
has no direct access to a VD must use I/O shipping to access the data if it receives a client data request. Accessing the
remote VD affects performance because of the I/O shipping overhead.
Performance tip: Use the MSM utility to verify correct resource ownership and load balancing. Load balancing is a
method of spreading work between two or more computers, network links, CPUs, drives, or other resources. Load
balancing is used to maximize resource use, throughput, or response time. Load balancing is the key to ensuring that
client requests are handled in a timely, efficient manner.
1.7 Known Third-Party Issues
The following subsections describe known third-party issues and where to find the information needed to solve these
issues.
1.7.1 Non-shared VD is Pulled into Windows Operating System Cluster During Cluster Creation
Refer to the Microsoft Knowledge Base article at http://support.microsoft.com/kb/2813005.
1.7.2 Delayed Write Failed Error During IO Stress Test
Install the Microsoft fix if a Delayed Write Failed error occurs when an I/O stress test runs against a Windows Server
2012 failover cluster from a Windows 8-based client or from a Windows Server 2012-based client.
Refer to the Microsoft Knowledge Base article at http://support.microsoft.com/kb/2842111.
1.7.3 Remote IO Failure Observed in SLES11 SP2 While Removing the SAS Cables of the Owner
Node
I/O activity fails, and the resources take more time to migrate to the other node. The solution is to restart the I/O
from the client.
Chapter 2: Creating the Cluster
This chapter explains how to set up HA-DAS clustering on a Syncro CS 9361-8i configuration or on a Syncro CS 9380-8e
configuration after you configure the hardware and install the operating system.
2.1 Creating Virtual Drives on the Controller Nodes
The next step is creating VDs on the disk drives.
The HA-DAS cluster configuration requires a minimum of one shared VD to be used as a quorum disk to enable
operating system support for clusters. Refer to the MegaRAID SAS Software User Guide for information about the
available RAID levels and the advantages of each one.
As explained in the instructions in the following sections, VDs created for storage in an HA-DAS configuration must be
shared. If you do not designate them as shared, the VDs are visible only from the controller node from which they
were created.
You can use the Ctrl-R pre-boot utility to create the VDs. You can also use the Avago MegaRAID Storage Manager
(MSM) utility or the StorCLI utility to create VDs after the OS has booted. Refer to the MegaRAID SAS Software User
Guide for complete instructions on using these utilities.
2.1.1 Creating Shared or Exclusive VDs with the CTRL-R Utility
To coordinate the configuration of the two controller nodes, both nodes must be booted into the Ctrl-R pre-boot
utility. The two nodes in the cluster system boot simultaneously after power on, so you must rapidly access both
consoles. One of the systems is used to create the VDs; the other system simply remains in the pre-boot utility. This
approach keeps the second system in a state that does not fail over while the VDs are being created on the first
system.
NOTE The CTRL-R utility cannot see boot sectors on the disks. Therefore, be
careful not to select the boot disk for a VD. Preferably, unshare the boot
disk before doing any configuration with the pre-boot utility. To do
this, select Logical Drive Properties and deselect the Shared Virtual
Disk property.
You can use the Ctrl-R Utility to configure RAID drive groups and virtual drives to create storage configurations on
systems with Avago SAS controllers.
NOTE You cannot create blocked VDs. If you try to create a blocked VD, the
operation is rejected with a generic message that the operation is not
supported.
1. When prompted during the POST on the two systems, press and hold the Ctrl key, and press the R key to access
the Ctrl-R pre-boot BIOS utility (on both systems) when the following text appears:
Copyright© LSI Corporation
Press <Ctrl><R> for Ctrl-R
Respond quickly, because the system boot times are very similar and the time-out period is short. When both
controller nodes are running the Ctrl-R utility, follow these steps to create RAID drive groups.
The VD Mgmt menu is the first menu screen that appears when you start the Ctrl-R Utility, as shown in the
following figure.
This screen shows information on the configuration of controllers, drive groups, and virtual drives. The right panel
of the screen shows attributes of the selected device.
Figure 1 VD Mgmt Screen
2. In the VD Mgmt screen, navigate to the controller and press the F2 key.
3. Select Create Virtual Drive, and press Enter.
The Create Virtual Drive screen appears, as shown in the following figure.
NOTE You can use the Create Virtual Drive dialog to create virtual drives for
Unconfigured Good drives. To create virtual drives for existing drive
groups, navigate to a drive group and press the F2 key to view the Add
New VD dialog. The fields in the Add New VD dialog are the same as
in the Create Virtual Drive dialog.
Figure 2 Create a New Virtual Drive
4. Select a RAID level for the drive group from the RAID Level field.
5. Enable the Data Protection field if you want to use the data protection feature on the newly created virtual drive.
The Data Protection field is enabled only if the controller has data protection physical drives connected to it.
NOTE If you use more than 32 Full Disk Encryption (FDE) drives when you
create secure VDs, failover might not function for some VDs. Hence, it
is best to use a maximum of 32 FDE drives when you create secure
configurations.
6. You can change the sequence of the physical drives in the Drives box. All of the available unconfigured good
drives appear in the Drives box. Press the spacebar to select the physical drives in the sequence that you prefer.
Based on your selection, the sequence number appears in the # column.
7. You can enter a size less than the maximum size of the drive group if you want to create other virtual drives on
the same drive group. The maximum size of the drive group appears in the Size field. The size can be entered in
MB, GB, or TB, and the unit must be entered in uppercase. Before entering a size, ensure that you have deleted
the previous default value by using the Backspace key.
8. Enter a name for the virtual drive in the Name field. The name given to the virtual drive cannot exceed 15
characters.
You may press the Advanced button to set additional properties for the newly created virtual drive. For more
information, see Section 2.1.2, Selecting Additional Virtual Drive Properties.
9. Press OK.
A dialog appears, asking you whether you want to initialize the virtual drive you just created.
10. Select the ID for the virtual drive, and press F2.
The Virtual Drive- Properties menu appears, as shown in the following screen.
Figure 3 Virtual Drive – Properties Menu
11. Click Properties on the menu.
The Virtual Drive - Properties dialog box appears, as shown in the following figure.
Figure 4 Virtual Drive - Properties Dialog Box
12. Use the arrow keys to select Advanced and press Enter.
The Advanced Properties dialog box appears, as shown in the following figure.
Figure 5 Advanced Features Dialog Box
13. Make sure the Provide shared access check box is checked to enable High Availability DAS.
The Provide shared access option enables a shared VD that both controller nodes can access. If you uncheck this
box, the VD has a status of Exclusive, and only the controller node that created this VD can access it. You can use
the exclusive VD as a boot volume for this cluster node.
14. Repeat the previous steps to create the other VDs.
As the VDs are configured on the first controller node, the drive listing on the other controller node is updated to
reflect the use of the drives.
15. Select Initialize, and press OK.
The new virtual drive is created and initialized.
16. Define hot spare disks for the VDs to maximize the level of data protection.
NOTE The Syncro CS 9361-8i and Syncro CS 9380-8e solution supports global
hot spares and dedicated hot spares. Global hot spares are global for
the cluster, not for a controller.
17. When all VDs are configured, reboot both systems as a cluster.
2.1.2 Selecting Additional Virtual Drive Properties
This section describes the following additional virtual drive properties that you can select while you create virtual
drives. Change these parameters only if you have a specific reason for doing so. It is usually best to keep them at their
default settings.
- Strip Size – The strip size is the portion of a stripe that resides on a single physical drive in the drive group. Strip
sizes of 64 KB, 128 KB, 256 KB, 512 KB, or 1 MB are supported.
- Read Policy – Specify one of the following options to set the read policy for this virtual drive:
  - Normal – Disables the read ahead capability.
  - Ahead – Enables the read ahead capability, which lets the controller read sequentially ahead of requested data
and store the additional data in cache memory, anticipating that the data will be needed soon. This process
speeds up reads for sequential data, but there is little improvement when the computer accesses random data.
- Write Policy – Select one of the following options to specify the write policy for this virtual drive:
  - Write Thru – In this mode, the controller sends a data transfer completion signal to the host when the drive
subsystem has received all the data in a transaction. This option eliminates the risk of losing cached data in
case of a power failure.
  - Write Back – In this mode, the controller sends a data transfer completion signal to the host when the
controller cache has received all the data in a transaction.
  - Write Back with BBU – Select this mode if you want the controller to remain in Write Back mode even when
the controller has no BBU or the BBU is bad. If you do not choose this option, the controller firmware
automatically switches to Write Thru mode if it detects a bad or missing BBU.
CAUTION The write policy depends on the status of the BBU. If the BBU is not
present, is low, is failed, or is being charged, the virtual drive remains in
Write Back mode, and there is a chance of data loss.
- I/O Policy – The I/O policy applies to reads on a specific virtual drive. It does not affect the read ahead cache.
  - Cached – In this mode, all reads are buffered in cache memory. Cached I/O provides faster processing.
  - Direct – In this mode, reads are not buffered in cache memory. Data is transferred to the cache and the host
concurrently. If the same data block is read again, it comes from cache memory. Direct I/O makes sure that
the cache and the host contain the same data.
- Disk Cache Policy – Select a cache setting for this virtual drive:
  - Enable – Enable the drive cache.
  - Disable – Disable the drive cache.
  - Unchanged – Leave the drive cache policy unchanged; the drive cache is enabled or disabled based on the
WCE (Write Cache Enable) bit of the saved mode page of the drive.
- Initialize – Select to initialize the virtual drive. Initialization prepares the storage medium for use; fast
initialization is performed on the virtual drive.
- Configure Hot Spare – Select to configure physical drives as hot spares for the newly created virtual drive.
This option is enabled only if additional drives exist and are eligible to be configured as hot spares. The option is
not applicable for RAID 0. If you select this option, a dialog appears after the virtual drive is created, asking you
to choose the physical drives that you want to configure as hot spares.
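For reference, the properties described above can also be set on the StorCLI command line when a VD is created (see Section 2.1.3, which follows). The command below is only an illustrative sketch: the option keywords (wb, ra, cached, strip=, pdcache=) are standard StorCLI parameters, but the enclosure device ID 252, the slot numbers, and the VD name are placeholder values that will differ on your system.

# Create a RAID 5 VD with Write Back, Read Ahead, Cached I/O,
# a 256-KB strip size, and the physical drive cache disabled.
storcli /c0 add vd r5 drives=252:0-2 name=DataVD1 wb ra cached strip=256 pdcache=off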
2.1.3 Creating Shared or Exclusive VDs with StorCLI
StorCLI is a command-line-driven utility used to create and manage VDs. StorCLI can run in any directory on the
server. The following procedure assumes that a current copy of the 64-bit StorCLI executable is located in a common
directory on the server and that the commands are run with administrator privileges.
1. At the command prompt, run the following command:
storcli /c0/vall show
The c0 parameter presumes that there is only one Syncro CS 9361-8i and Syncro CS 9380-8e controller in the
system or that these steps reference the first Syncro CS 9361-8i and Syncro CS 9380-8e controller in a system with
multiple controllers.
The following figure shows some sample configuration information that appears in response to the command.
Figure 6 Sample Configuration Information
The command generates many lines of information that scroll down in the window. You need to use some of this
information to create the shared VD.
2. Find the Device ID for the JBOD enclosure for the system and the Device IDs of the available physical drives for the
VD you will create.
In the second table in the preceding figure, the enclosure device ID of 252 appears under the heading EID, and
the device ID of 0 appears under the heading DID. Use the scroll bar to find the device IDs for the other physical
drives for the VD.
Detailed drive information, such as the drive group, capacity, and sector size, follows the device ID in the table
and is explained in the text below the table.
3. Create the shared VD using the enclosure and drive device IDs with the following command line syntax:
storcli /c0 add vd rX drives=e:s
The HA-DAS version of StorCLI creates, by default, a shared VD that is visible to all cluster nodes.
The following notes explain the command line parameters.
- The /c0 parameter selects the first Syncro CS 9361-8i or Syncro CS 9380-8e controller in the system.
- The add vd parameter configures and adds a VD (logical disk).
- The rX parameter selects the RAID level, where X is the level.
- The drives=e:s parameter defines the list of drives for the VD. Each drive is listed in the form enclosure device
ID:slot number; multiple drives are separated by commas or given as a slot range (for example, e:s-x).
NOTE To create a VD that is visible only to the node that created it (such as
creating a boot volume for this cluster node), add the
[ExclusiveAccess] parameter to the command line.
NOTE For the Access Policy, RW (Read/Write) is the default setting. You
cannot select B (blocked, which does not allow access) as the Access
Policy. If you try to select B, the operation is rejected with the message
that this operation is not supported.
For more information about StorCLI command line parameters, refer to the MegaRAID SAS Software User Guide.
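As a worked example, the following command sequence pulls these steps together. This is a sketch only; the enclosure device ID 252 and the slot numbers are placeholders based on the sample output above, so substitute the IDs that StorCLI reports on your own system.

# List all drives with their enclosure:slot (EID:Slt) addresses.
storcli /c0/eall/sall show

# Create a shared RAID 5 VD from the drives in slots 0 through 2.
# (The HA-DAS version of StorCLI creates a shared VD by default.)
storcli /c0 add vd r5 drives=252:0-2

# Create a node-exclusive VD instead, for example as a boot volume.
storcli /c0 add vd r1 drives=252:3-4 exclusiveaccess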
2.1.4 Creating Shared or Exclusive VDs with MSM
Follow these steps to create VDs for data storage with MSM. When you create the VDs, you assign the Share Virtual
Drive property to them to make them visible from both controller nodes. This example assumes you are creating a
RAID 5 redundant VD. Modify the instructions as needed for other RAID levels.
NOTE Not all versions of MSM support HA-DAS. Check the release notes to
determine if your version of MSM supports HA-DAS. Also, see
Section 4.1, Verifying HA-DAS Support in Tools and the OS Driver.
1. In the left panel of the MSM Logical pane, right-click the Syncro CS 9361-8i and Syncro CS 9380-8e controller and
select Create Virtual Drive from the pop-up menu.
The Create Virtual Drive wizard appears.
2. Select the Advanced option and click Next.
3. In the next wizard screen, select RAID 5 as the RAID level, and select unconfigured drives for the VD, as shown in
the following figure.
Figure 7 Drive Group Settings
4. Click Add to add the VD to the drive group.
The selected drives appear in the Drive groups window on the right.
5. Click Create Drive Group. Then click Next to continue to the next window.
The Virtual Drive Settings window appears.
6. Enter a name for the VD.
7. Select Always Write Back as the Write policy option, and select other VD settings as required.
NOTE For the Access Policy, Read Write is the default setting. You cannot
select Blocked (does not allow access) as the Access Policy. If you try
to select Blocked, the operation is rejected with the message that this
operation is not supported.
8. Select the Provide Shared Access option, as shown in the following figure.
NOTE If you do not select Provide Shared Access, the VD is visible only from
the server node on which it is created. Leave this option unselected if
you are creating a boot volume for this cluster node.
Figure 8 Provide Shared Access Option
9. Click Create Virtual Drive to create the virtual drive with the settings you specified.
The new VD appears in the Drive groups window on the right of the window.
10. Click Next to continue.
The Create Virtual Drive Summary window appears, as shown in the following figure.
Figure 9 Create Virtual Drive Summary
11. Click Finish to complete the VD creation process.
12. Click OK when the Create Virtual Drive - complete message appears.
2.1.4.1 Unsupported Drives
Drives that are used in the Syncro CS 9361-8i and Syncro CS 9380-8e solution must be selected from the list of
approved drives on the LSI website (see the URL in Section 1.4, Hardware Compatibility). If the MegaRAID Storage
Manager (MSM) utility finds a drive that does not meet this requirement, it marks the drive as Unsupported, as shown
in the following figure.
Figure 10 Unsupported Drive in MSM
2.2 Creating the Cluster in Windows
The following subsections describe how to enable cluster support, and how to enable and validate the failover
configuration while running a Windows operating system.
2.2.1 Prerequisites for Cluster Setup
2.2.1.1 Clustered RAID Controller Support
Support for clustered RAID controllers is not enabled by default in Microsoft Windows Server 2012 or Microsoft
Windows Server 2008 R2.
To enable support for this feature, consult your server vendor. For additional information, visit the Cluster in a Box
Validation Kit for Windows Server site on the Microsoft Windows Server TechCenter website, and see Knowledge
Base (KB) article 2839292 on enabling this support.
2.2.1.2 Enable Failover Clustering
The Microsoft Windows Server 2012 operating system installation does not enable the clustering feature by default.
Follow these steps to view the system settings and, if necessary, to enable clustering.
1. From the desktop, launch Server Manager.
2. Click Manage and select Add Roles and Features.
3. If the Introduction box is enabled (and appears), click Next.
4. In the Select Installation Type box, select Role-based or feature-based installation.
5. In the Select Destination Server box, select the system and click Next.
6. In the Select Server Roles list, click Next to present the Features list.
7. Make sure that failover clustering is installed, including the tools. If necessary, run the Add Roles and Features
wizard to install the features dynamically from this user interface.
8. If the cluster nodes need to support I/O as iSCSI targets, expand File and Storage Services > File Services, and
check for iSCSI Target Server and Server for NFS.
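The same feature check and installation can also be performed from the command line. The following PowerShell sketch assumes Windows Server 2012 and an elevated (Run as Administrator) session; the feature names shown are the standard Windows Server 2012 names. On Windows Server 2008 R2, import the ServerManager module and use Add-WindowsFeature instead.

# Check whether the Failover Clustering feature is already installed.
Get-WindowsFeature Failover-Clustering

# Install the feature together with its management tools.
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Optionally add the iSCSI target and NFS services mentioned in step 8.
Install-WindowsFeature FS-iSCSITarget-Server, FS-NFS-Service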
During creation of the cluster, Windows automatically defines and creates the quorum, a configuration database that
contains metadata required for the operation of the cluster. To create a shared VD for the quorum, see the instructions
in Section 2.1, Creating Virtual Drives on the Controller Nodes.
NOTE The best practice is to create a small redundant VD for the quorum. A
size of 500 MB is adequate for this purpose.
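For example, such a quorum VD can be created ahead of time with StorCLI. This is a minimal sketch, assuming two shared drives in enclosure 252, slots 6 and 7; the IDs and the VD name are placeholders to replace with values from your own system.

# Create a small mirrored (RAID 1) VD to serve as the quorum disk.
storcli /c0 add vd r1 size=500MB drives=252:6-7 name=Quorum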
To determine if the cluster is active, run MSM and look at the Dashboard tab for the controller. The first of two nodes
that boots shows the cluster status as Inactive until the second node is running and the MSM dashboard on the first
node has been refreshed.
NOTE To refresh the MSM dashboard, press F5 or select Manage > Refresh
on the menu.
The following figure shows the controller dashboard with Active peer controller status.
