DB15-001053-01
Syncro® CS 9271-8i Solution
User Guide
Version 2.0
October 2013
DB15-001053-01
LSI, the LSI & Design logo, Storage.Networking.Accelerated., Syncro, CacheCade, CacheVault, and MegaRAID are trademarks or registered trademarks of LSI Corporation in the United States and/or other
countries. All other brand and product names may be trademarks of their respective companies.
PCI Express and PCIe are registered trademarks of PCI-SIG.
LSI Corporation reserves the right to make changes to the product(s) or information disclosed herein at any time without notice. LSI Corporation does not assume any responsibility or liability arising out of
the application or use of any product or service described herein, except as expressly agreed to in writing by LSI Corporation; nor does the purchase, lease, or use of a product or service from LSI Corporation
convey a license under any patent rights, copyrights, trademark rights, or any other of the intellectual property rights of LSI Corporation or of third parties. LSI products are not intended for use in life-support
appliances, devices, or systems. Use of any LSI product in such applications without written consent of the appropriate LSI officer is prohibited. This document contains proprietary information of LSI
Corporation. The information contained herein is not to be used by or disclosed to third parties without the express written permission of LSI Corporation.
Corporate Headquarters: San Jose, CA, 800-372-2447
Website: www.lsi.com
Document Number: DB15-001053-01
Copyright © 2013 LSI Corporation
All Rights Reserved
Revision History
Version 2.0, October 2013: Added sections for Linux® operating system support.
Version 1.0, March 2013: Initial release of this document.
Table of Contents
Chapter 1: Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1 Concepts of High-Availability DAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 HA-DAS Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Syncro CS 9271-8i Solution Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Hardware Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Overview of the Hardware Installation, Cluster Setup, and Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6 Performance Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Chapter 2: Hardware and Software Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Syncro CS Cluster-in-a-Box Hardware Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Syncro CS Cluster-in-a-Box Software Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Chapter 3: Creating the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1 Creating Virtual Drives on the Controller Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.1 Creating Shared or Exclusive VDs with the WebBIOS Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.2 Creating Shared or Exclusive VDs with StorCLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.3 Creating Shared or Exclusive VDs with MSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.1.3.1 Unsupported Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2 HA-DAS CacheCade Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3 Creating the Cluster in Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.1 Prerequisites for Cluster Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.1.1 Clustered RAID Controller Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.1.2 Enable Failover Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.1.3 Configure Network Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.2 Creating the Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3.3 Validating the Failover Cluster Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.4 Creating the Cluster in Red Hat Enterprise Linux (RHEL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.4.1 Prerequisites for Cluster Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.4.1.1 Configure Network Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.4.1.2 Install and Configure the High Availability Add-On Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4.1.3 Configure SELinux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4.2 Creating the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4.3 Configure the Logical Volumes and Apply GFS2 File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4.4 Add a Fence Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.4.5 Create a Failover Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.4.6 Add Resources to the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.7 Create Service Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.4.8 Create a Disk Quorum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4.9 Edit the Cluster Configuration File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.4.10 Mount the NFS Resource from the Remote Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5 Creating the Cluster in SuSE Linux Enterprise Server (SLES) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5.1 Prerequisites for Cluster Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5.1.1 Prepare the Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5.1.2 Configure Network Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.5.1.3 Connect to the NTP Server for Time Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.5.2 Creating the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.5.2.1 Cluster Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.5.3 Bringing the Cluster Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.5.4 Configuring the NFS Resource with STONITH SBD Fencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5.4.1 Install NFSSERVER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5.4.2 Configure the Partition and the File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5.4.3 Configure stonith_sbd Fencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5.5 Adding NFS Cluster Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.5.6 Mounting NFS in the Remote Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Chapter 4: System Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.1 High Availability Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.2 Understanding Failover Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2.1 Understanding and Using Planned Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.2.1.1 Planned Failover in Windows Server 2012 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.2.1.2 Planned Failover in Windows Server 2008 R2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.2.1.3 Planned Failover in Red Hat Enterprise Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2.1.4 Planned Failover in SuSE Linux Enterprise Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.2.2 Understanding Unplanned Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.3 Updating the Syncro CS 9271-8i Controller Firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4 Updating the MegaRAID Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.4.1 Updating the MegaRAID Driver in Windows Server 2008 R2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.4.2 Updating the MegaRAID Driver in Windows Server 2012 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4.3 Updating the Red Hat Linux System Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.4 Updating the SuSE Linux Enterprise Server 11 Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.5 Performing Preventive Measures on Disk Drives and VDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Chapter 5: Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.1 Verifying HA-DAS Support in Tools and the OS Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.2 Confirming SAS Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.2.1 Using WebBIOS to View Connections for Controllers, Expanders, and Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.2.2 Using WebBIOS to Verify Dual-Ported SAS Addresses to Disk Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.2.3 Using StorCLI to Verify Dual-Ported SAS Addresses to Disk Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.2.4 Using MSM to Verify Dual-Ported SAS Addresses to Disk Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.3 Understanding CacheCade Behavior During a Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.4 Error Situations and Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.5 Event Messages and Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Chapter 1: Introduction
This document explains how to set up and configure the hardware and software for the Syncro® CS 9271-8i high-
availability direct-attached storage (HA-DAS) solution.
The Syncro CS 9271-8i solution provides fault tolerance capabilities as a key part of a high-availability data storage
system. The Syncro CS 9271-8i solution combines redundant servers, LSI® HA-DAS RAID controllers, computer nodes,
cable connections, common SAS JBOD enclosures, and dual-ported SAS storage devices.
The redundant components and software technologies provide a high-availability system with ongoing service that is
not interrupted by the following events:
- The failure of a single internal node does not interrupt service, because the solution has multiple nodes with cluster failover.
- An expander failure does not interrupt service, because the dual expanders in every enclosure provide redundant data paths.
- A drive failure does not interrupt service, because RAID fault tolerance is part of the configuration.
- A system storage expansion or maintenance activity can be completed without an interruption of service, because of the redundant components, management software, and maintenance procedures.
1.1 Concepts of High-Availability DAS
In terms of data storage and processing, High Availability (HA) means a computer system design that ensures a high
level of operational continuity and data access reliability over a long period of time. High-availability systems are
critical to the success and business needs of small and medium-sized business (SMB) customers, such as retail outlets
and health care offices, who cannot afford to have their computer systems go down. An HA-DAS solution enables
customers to maintain continuous access to and use of their computer system. Shared direct-attached drives are
accessible to multiple servers, thereby maintaining ease of use and reducing storage costs.
A cluster is a group of computers working together to run a common set of applications and to present a single logical
system to the client and application. Failover clustering provides redundancy to the cluster group to maximize up-time
by utilizing fault-tolerant components. In the example of two servers with shared storage that comprise a failover
cluster, when a server fails, the failover cluster automatically moves control of the shared resources to the surviving
server with no interruption of processing. This configuration allows seamless failover capabilities in the event of
planned failover (maintenance mode) for maintenance or upgrade, or in the event of a failure of the CPU, memory, or
other server failures.
The Syncro CS 9271-8i solution is specifically designed to provide HA-DAS capabilities for a class of server chassis that
include two server motherboards in one chassis. This chassis architecture is often called a cluster in a box (CiB).
Because multiple initiators exist in a clustered pair of servers (nodes) with a common shared storage domain, there is a
concept of device reservations in which physical drives, drive groups, and virtual drives (VDs) are managed by a
selected single initiator. For HA-DAS, I/O transactions and RAID management operations for a given VD are normally processed by a single Syncro CS 9271-8i controller, and the associated physical drives, drive groups, and VDs are normally controlled by that controller. To assure continued operation, those physical drives, drive groups, and VDs are also visible to, though not normally controlled by, the other Syncro CS controller. This key functionality allows the Syncro CS 9271-8i solution to share VDs among multiple initiators, as well as exclusively constrain VD access to a particular initiator, without the need for SAS zoning.
Node downtime in an HA system can be either planned or unplanned. Planned node downtime is the result of
management-initiated events, such as upgrades and maintenance. Unplanned node downtime results from events
that are not within the direct control of IT administrators, such as failed software, drivers, or hardware. The Syncro CS
9271-8i solution protects your data and maintains system up-time from both planned and unplanned node
downtime. It also enables you to schedule node downtime to update hardware or firmware, and so on. When you
bring one controller node down for scheduled maintenance, the other node takes over with no interruption of
service.
1.2 HA-DAS Terminology
This section defines some additional important HA-DAS terms.
- Cache Mirror: A cache coherency term describing the duplication of write-back cached data across two controllers.
- Exclusive Access: A host access policy in which a VD is exposed to, and accessed by, only a single specified server.
- Failover: The process in which the management of drive groups and VDs transitions from one controller to the peer controller to maintain data access and availability.
- HA Domain: A type of storage domain that consists of a set of HA controllers, cables, shared disk resources, and storage media.
- Peer Controller: A relative term for the HA controller in the HA domain that acts as the failover controller.
- Server/Controller Node: A processing entity composed of one or more host processor units and characterized by having a single instance of a host operating system.
- Server Storage Cluster: An HA storage topology in which a common pool of storage devices is shared by two computer nodes through dedicated Syncro CS 9271-8i controllers.
- Shared Access: A host access policy in which a VD is exposed to, and can be accessed by, all servers in the HA domain.
- Virtual Drive (VD): A storage unit created by a RAID controller from one or more physical drives. Although a virtual drive can consist of multiple drives, the operating system sees it as a single drive. Depending on the RAID level used, the virtual drive might retain redundant data in case of a drive failure.
1.3 Syncro CS 9271-8i Solution Features
The Syncro CS 9271-8i solution supports the following HA features.
- Server storage cluster topology, enabled by the following supported operating systems:
  - Microsoft® Windows Server® 2008 R2
  - Microsoft Windows Server 2012
  - Red Hat® Enterprise Linux 6.3
  - Red Hat Enterprise Linux 6.4
  - SuSE® Linux Enterprise Server 11 SP3
  - SuSE Linux Enterprise Server 11 SP2
- Clustering/HA services support:
  - Microsoft failover clustering
  - Red Hat High Availability Add-on
  - SuSE High Availability Extensions
- Dual-active HA with shared storage
- Controller-to-controller intercommunication over SAS
- Write-back cache coherency
- CacheCade 1.0 (Read)
- Shared and exclusive VD I/O access policies
- Operating system boot from the controller (exclusive access)
- Controller hardware and property mismatch detection, handling, and reporting
- Global hot spare support for all volumes in the HA domain
- Planned and unplanned failover modes
- CacheVault® cached data protection in case of host power loss or server failure
- Full MegaRAID® features, with the following exceptions:
  - T10 Data Integrity Field (DIF) is not supported.
  - Self-encrypting drives (SED) and full disk encryption (FDE) are not supported.
  - CacheCade 2.0 (write back) is not supported.
  - Dimmer switch is not supported.
  - SGPIO sideband signaling for enclosure management is not supported.
1.4 Hardware Compatibility
The servers, disk drives, and optional JBOD enclosures you use in the Syncro CS 9271-8i solution must be selected
from the list of approved components that LSI has tested for compatibility. Refer to the web page for the compatibility
lists at http://www.lsi.com/channel/support/pages/interoperability.aspx.
1.5 Overview of the Hardware Installation, Cluster Setup, and Configuration
Chapters 2 and 3 describe how to install the hardware and software so that you can use the fault-tolerance capabilities of HA-DAS to provide continuous service in the event of drive failure or server failure and to expand the system storage.
Chapter 2 describes how to install the Syncro CS 9271-8i controllers and connect them by cable to the CiB enclosure.
In addition, it lists the steps required after controller installation and cable connection, which include the following:
- Configure the drive groups and the virtual drives on the two controllers
- Install the operating system driver on both server nodes
- Install the operating system on both server nodes, following the instructions from the manufacturer
- Install the StorCLI and MegaRAID Storage Manager™ utilities
Chapter 3 describes how to perform the following actions while using a supported OS:
- Install and enable the cluster feature on both servers
- Set up a cluster under the supported operating systems
- Configure drive groups and virtual drives
- Create a CacheCade® 1.0 virtual drive as part of a Syncro CS 9271-8i configuration
1.6 Performance Considerations
SAS technology offers high-throughput data transfers and low latency. Throughput is crucial during failover periods, when the system needs to process reconfiguration activity in a fast, efficient manner. SAS offers a throughput rate of 12 Gb/s over a single lane. SAS controllers and enclosures typically aggregate 4 lanes into an x4 wide link, giving an available bandwidth of 48 Gb/s across a single connector, which makes SAS ideal for HA environments.
Syncro CS controllers work together across a shared SAS fabric, using a set of protocols to achieve sharing, cache coherency, heartbeat monitoring, and redundancy. At any point in time, a particular VD is accessed or owned by a single controller. This owned VD is termed a local VD. The second controller is aware of the VD on the first controller, but it has only indirect access to it; the VD is a remote VD for the second controller.
In a configuration with multiple VDs, the workload is typically balanced across controllers to provide a higher degree
of efficiency.
When a controller requires access to a remote VD, the I/Os are shipped to the remote controller, which processes the
I/O locally. I/O requests that are handled by local VDs are much faster than those handled by remote VDs.
The preferred configuration is for the controller to own the VD that hosts the clustered resource (the MegaRAID
Storage Manager utility shows which controller owns this VD). If the controller does not own this VD, it must issue a
request to the peer controller to ship the data to it, which affects performance. This situation can occur if the
configuration has been configured incorrectly or if the system is in a failover situation.
NOTE Performance tip: You can reduce the impact of I/O shipping by
locating the VD or drive groups with the server node that is primarily
driving the I/O load. Avoid drive group configurations with multiple
VDs whose I/O load is split between the server nodes.
MSM has no visibility to remote VDs, so all VD management operations must be performed locally. A controller that
has no direct access to a VD must use I/O shipping to access the data if it receives a client data request. Accessing the
remote VD affects performance because of the I/O shipping overhead.
Performance tip: Use the MSM utility to verify correct resource ownership and load balancing. Load balancing is a
method of spreading work between two or more computers, network links, CPUs, drives, or other resources. Load
balancing is used to maximize resource use, throughput, or response time. Load balancing is the key to ensuring that
client requests are handled in a timely, efficient manner.
Chapter 2: Hardware and Software Setup
This chapter explains how to set up the hardware and software for a Syncro CS 9271-8i solution with two controller
nodes and shared storage. For this implementation you use a Cluster-in-a-Box (CiB) configuration in which the two
server nodes with Syncro CS controllers and the shared disk drives are installed and connected inside a single CiB
enclosure.
The Syncro CS cluster-in-a-box (CiB) system design combines two servers and a common pool of direct attached
drives within one custom-designed enclosure. After you initially set it up, the CiB system design simplifies the
deployment of two-node clusters because all components and connections are contained in one closed unit.
The Syncro CS cluster-in-a-box configuration requires a specially designed server and storage chassis that includes
two Syncro CS 9271-8i controller boards and multiple SAS disks selected from the LSI disk compatibility list (see the
URL in Section 1.4, Hardware Compatibility).
The LSI Syncro CS 9271-8i controller kit includes the following items:
Two Syncro CS 9271-8i controllers
Two CacheVault Flash Modules 02 (CVFM02) (pre-installed on the controllers)
Two CacheVault Power Modules 02 (CVPM02)
Two CacheVault Power Module mounting clips and hardware
Two CacheVault Power Module extension cables
Two low-profile brackets
Syncro CS 9271-8i Controller Quick Installation Guide
Syncro CS Resource CD
The following figure shows a high-level diagram of a Syncro CS 9271-8i CiB configuration connected to a network. The diagram includes the details of the SAS interconnections.
Figure 1 CiB Syncro CS 9271-8i Configuration
The Syncro CS 9271-8i controllers in the kit include the CacheVault Module Kit LSICVM01, which consists of a
CacheVault Flash Module 02 (CVFM02), a CacheVault Power Module 02 (CVPM02), a clip, and a cable.
The CVFM02 module is an onboard 1-GB DDR3 1333 MT/s USB flash module that is pre-installed on the Syncro CS
controller. The CVPM02 module is a super-capacitor pack that offers an intelligent backup power supply solution. The
CVPM02 module provides both capacitor charge maintenance and capacitor health monitoring functions similar to
those of an intelligent battery backup unit.
If a power failure occurs, the CVPM02 module provides power to transfer the cache memory contents from DRAM to a nonvolatile flash memory array on the CVFM02 module. Cached data can then be written to the storage devices the next time the controller is powered on.
NOTE The CVFM02 modules come pre-installed on the Syncro CS 9271-8i
controllers.
&OXVWHULQD%R[&KDVVLV
?
I3#3)
#LIENTWITH
!PPLICATIONS
3ULYDWH/$1
.ETWORK
$.33ERVER
6HUYHU1RGH
0RGXOH
3YNCRO#3I
4OP
#ONNECTOR
"OTTOM
#ONNECTOR
6HUYHU1RGH
0RGXOH
3YNCRO#3I
3XEOLF/RFDO$UHD1HWZRUN/$1
I3#3)
#LIENTWITH
!PPLICATIONS
I3#3)
#LIENTWITH
!PPLICATIONS
3!33INGLE,ANES
3!3&OUR,ANES
#I"%XPANDER
"OARD
#I"$RIVE"ACKPLANE
#I"%XPANDER
"OARD
%XPANDER%XPANDER
3!3$UAL0ORTED$RIVES
4OP
#ONNECTOR
"OTTOM
#ONNECTOR
2.1 Syncro CS Cluster-in-a-Box Hardware Setup
Follow these steps to set up the hardware for a Syncro CS CiB configuration. The setup procedure might be somewhat
different for CiB models from different vendors, so be sure to also read the CiB manufacturer's documentation.
NOTE The Syncro CS solution includes two Syncro CS controllers, so you
must perform the installation procedure for both controllers, one in
each server module inside the CiB enclosure.
1. Unpack the controller kit in a static-free environment and check the contents. Remove the components from the
antistatic bags and inspect them for damage. If any components are missing or damaged, contact LSI or your
Syncro CS OEM support representative.
2. Turn off power to the CiB enclosure, if necessary, and unplug the power cords from the power supply. Remove the
cover from the CiB.
CAUTION Before you install the Syncro CS controllers, make sure that the CiB
enclosure is disconnected from the power and from any networks.
3. Review the Syncro CS connectors and the jumper settings and change them if necessary.
You usually do not need to change the default factory settings of the jumpers. The following figure shows the
location of the jumpers and connectors on the controller board.
The CVFM02 module comes preinstalled on your Syncro CS controller; however, the module is not attached in the
following figure so that you can see all of the connectors and headers on the controller. Figure 3 and Figure 4
show the controller with the CVFM02 module installed.
Figure 2 Layout of the Syncro CS 9271-8i Controller
In the figure, Pin 1 is highlighted in red for each jumper.
?
*!
*"
*"
*"
*"
*!
*"
*!
*!
*"
*,
*!
*!
*"
0ORTS
0ORTS
*"
*"
*"
The following table describes the jumpers and connectors on the Syncro CS 9271-8i controller.
Table 1 Syncro CS 9271-8i Controller Jumpers and Connectors

Jumper/Connector | Type | Description

J1A2 | SCS Backplane Management connector | 3-pin shielded header. Implements an enclosure management module that is responsible for providing enclosure management functions for storage enclosures and server backplanes.
J1A3 | Local Battery Backup Unit connector | 20-pin connector. Connects the LSIiBBU09 unit directly to the RAID controller.
J1A4 | SCS Backplane Management connector | 3-pin shielded header. Implements an enclosure management module that is responsible for providing enclosure management functions for storage enclosures and server backplanes.
J1B1 | Individual PHY and Drive Fault Indication header (Ports 3 to 0 and Ports 7 to 4) | 2x8-pin header. Connects to an LED that indicates whether a drive is in a fault condition; there is one LED per port. When lit, each LED indicates that the corresponding drive has failed or is in the Unconfigured-Bad state. The LEDs function in a direct-attach configuration (no SAS expanders); direct attach is defined as a maximum of one drive connected directly to each port.
J2B4 | Standard Edge Card connector | The interface between the RAID controller and the host system. This interface provides power to the board and an I2C interface connected to the I2C bus for the Intelligent Platform Management Interface (IPMI).
J3A1 | Drive Activity LED header | 2-pin connector. Connects to an LED that indicates activity on the drives connected to the controller.
J3L1 | Remote Battery Backup connector | 20-pin connector. Not used.
J4B1 | CacheVault Flash Module DDR3 interface | 70-pin connector. Connects the Syncro CS controller to the CVFM02 module.
J5A1 | Internal x4 SFF-8087 mini SAS connector | Internal x4 SAS connector.
J5A2 | Write-Pending LED header | 2-pin connector. Connects to an LED that indicates when data in the cache has yet to be written to the storage devices. Used when the write-back feature is enabled.
J5B1 | Internal x4 SFF-8087 mini SAS connector | Internal x4 SAS connector.
J6B1 | Advanced Software Options Hardware Key header | 3-pin header. Enables support for the Advanced Software Options features.
J6B2 | SBR Bypass header | 2-pin connector. Lets you bypass the SBR EEPROM during boot in case the EEPROM contents are corrupt or missing.
J6B3 | Global Activity LED header | 2-pin header. Connects to a single LED that indicates drive activity on either port.
J6B4 | Onboard Serial UART connector | Reserved for LSI use.
J6B5 | Global Drive Fault LED indicator | 2-pin header. Connects to a single LED that indicates a drive fault on either port.
J6B6 | Onboard Serial UART connector | Reserved for LSI use.
4. Place the Syncro CS controller on a flat, clean, static-free surface after grounding yourself.
NOTE If you want to replace a bracket, refer to the Replacing Brackets on
MegaRAID SAS+SATA RAID Controllers Quick Installation Guide for
instructions.
5. Take the cable included with the kit and insert the smaller of the two 6-pin cable connectors on the cable into the
6-pin connector on the remote CVPM02 module, as shown in the following figure.
Figure 3 Connecting the Cable to the Remote CVPM02 Module
6. Mount the CVPM02 module inside the CiB enclosure, based on the location and the type of mounting option.
[Figure 3 callouts: J2B1 connector, CVPM02 module, and CVFM02 module.]
NOTE Because CiB enclosures vary from one vendor to another, no standard
mounting option exists for the CVPM02 module that is compatible
with all CiB configurations. Authorized resellers and chassis
manufacturers can customize the location of the power module to
provide the most flexibility within various environments. Refer to the
instructions from the CiB vendor to determine how to mount the
CVPM02 module.
7. Make sure that the power to the CiB is still turned off and that the power cords are unplugged.
8. Insert the controller into a PCIe® slot on the motherboard of the server module, as shown in the following figure.
Press down gently, but firmly, to seat the controller correctly in the slot.
NOTE This controller is a PCIe x8 card that can operate in x8 or x16 slots.
Some x16 PCIe slots, however, support only PCIe graphics cards; if a
controller is installed in one of these slots, the controller will not
function. Refer to the guide for your motherboard for information
about the PCIe slot.
Figure 4 Installing the Syncro CS 9271-8i Controller and Connecting the Cable
9. Secure the controller to the computer chassis with the bracket screw.
10. Insert the other cable connector into the 6-pin connector on the onboard CVFM02 module, as shown in Figure 4.
11. Repeat step 4 to step 10 to install the second Syncro CS controller and the CVPM02 module in the second server
module of the CiB enclosure.
[Figure 4 callouts: edge of motherboard, PCIe slot, bracket screw, and press-here points on the controller.]
12. If necessary, install SAS disk drives in the CiB enclosure.
NOTE Drives that are used in the LSI Syncro CS solution must be selected
from the compatibility list that LSI maintains on its web site. See the
URL for this compatibility list in Section 1.4, Hardware Compatibility.
13. Use SAS cables to connect the internal connectors of the Syncro CS controllers to SAS devices in the CiB
enclosure. Refer to the CiB manufacturer’s documentation for information on connecting SAS cables.
NOTE CiB chassis manufacturers determine the availability and
configuration of external JBOD storage expansion connections. Refer
to the instructions from the CiB vendor on how to properly configure
SAS cable connections for external JBOD enclosures, if your CiB chassis
offers this capability. Also, see the LSI hardware compatibility list at the
URL listed in Section 1.4, Hardware Compatibility.
14. Reinstall the cover of the CiB enclosure and reconnect the power cords. Turn on the power to the CiB enclosure.
The firmware takes several seconds to initialize. During this time, the controllers scan the ports.
2.2 Syncro CS Cluster-in-a-Box Software Setup
Perform the following steps to set up the software for a Syncro CS CiB configuration.
1. Configure the drive groups and the virtual drives on the two controllers.
For specific instructions, see Section 3.1, Creating Virtual Drives on the Controller Nodes. You can use the
WebBIOS, StorCLI, or MegaRAID Storage Manager configuration utilities to create the groups and virtual drives.
2. Install the operating system driver on both server nodes.
You must install the software driver first, before you install the operating system.
You can view the supported operating systems and download the latest drivers for the Syncro CS controllers from
the LSI website at http://www.lsi.com/support/Pages/download-search.aspx. Access the download center, and
follow the steps to download the appropriate driver.
Refer to the MegaRAID SAS Device Driver Installation User Guide on the Syncro CS Resource CD for more information
about installing the driver. Be sure to review the readme file that accompanies the driver.
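On a Linux node, one quick way to confirm which driver version is currently loaded is to query the module information (a sketch, assuming the standard megaraid_sas in-box driver name):

modinfo megaraid_sas | grep -i version

Compare the reported version against the driver package you downloaded before continuing.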
3. Install the operating system on both server nodes, following the instructions from the operating system vendor.
Make sure you apply all of the latest operating system updates and service packs to ensure proper functionality.
You have two options for installing the operating system for each controller node:
Install it on a private volume connected to the system-native storage controller. The recommended best
practice is to install the operating system on this private volume because the disks in the clustering
configuration cannot see this volume. Therefore, no danger exists of accidentally overwriting the operating
system disk when you set up clustering.
Install it on an exclusive virtual drive connected to the Syncro CS 9271-8i controller. Exclusive host access is
required for a boot volume so the volume is not overwritten accidentally when you create virtual drives for
data storage. For instructions on creating exclusive virtual drives using the WebBIOS utility, see Section 3.1.1,
Creating Shared or Exclusive VDs with the WebBIOS Utility.
NOTE The Syncro CS 9271-8i solution does not support booting from a
shared operating system volume.
4. Install StorCLI and MegaRAID Storage Manager for the Windows® and Linux® operating systems following the
installation steps outlined in the StorCLI Reference Manual and MegaRAID SAS Software User Guide on the Syncro CS
Resource CD.
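After the utilities are installed, a simple sanity check (a suggested step, not required by the setup procedure) is to run StorCLI on each server node and confirm that it detects the local Syncro CS 9271-8i controller:

storcli show

Each node should list its own controller in the summary output before you proceed to cluster creation.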
Chapter 3: Creating the Cluster
This chapter explains how to set up HA-DAS clustering on a Syncro CS 9271-8i configuration after you configure the
hardware and install the operating system.
3.1 Creating Virtual Drives on the Controller Nodes
The next step is creating VDs on the disk drives.
The HA-DAS cluster configuration requires a minimum of one shared VD to be utilized as a quorum disk to enable
operating system support for clusters. Refer to the MegaRAID SAS Software User Guide for information about the
available RAID levels and the advantages of each one.
As explained in the instructions in the following sections, VDs created for storage in an HA-DAS configuration must be
shared. If you do not designate them as shared, the VDs are visible only from the controller node from which they
were created.
You can use the WebBIOS pre-boot utility to create the VDs. You can also use the LSI MegaRAID Storage Manager
(MSM) utility or the StorCLI utility to create VDs after the OS has booted. Refer to the MegaRAID SAS Software User
Guide for complete instructions on using these utilities.
3.1.1 Creating Shared or Exclusive VDs with the WebBIOS Utility
To coordinate the configuration of the two controller nodes, both nodes must be booted into the WebBIOS pre-boot
utility. The two nodes in the cluster system boot simultaneously after power on, so you must rapidly access both
consoles. One of the systems is used to create the VDs; the other system simply remains in the pre-boot utility. This
approach keeps the second system in a state that does not fail over while the VDs are being created on the first
system.
NOTE The WebBIOS utility cannot see boot sectors on the disks. Therefore,
be careful not to select the boot disk for a VD. Preferably, unshare the
boot disk before doing any configuration with the pre-boot utility. To
do this, select Logical Drive Properties and deselect the Shared
Virtual Disk property.
Follow these steps to create VDs with the WebBIOS utility.
1. When prompted during the POST on the two systems, use the keyboard to access the WebBIOS pre-boot BIOS
utility (on both systems) by pressing Ctrl-H.
Respond quickly, because the system boot times are very similar and the time-out period is short. When both
controller nodes are running the WebBIOS utility, follow these steps to create RAID 5 arrays.
NOTE To create a RAID 0, RAID 1, or RAID 6 array, modify the instructions to
select the appropriate number of disks.
2. Click Start.
3. On the WebBIOS main page, click Configuration Wizard, as shown in the following figure.
Figure 5 WebBIOS Main Page
The first Configuration Wizard window appears.
4. Select Add Configuration and click Next.
5. On the next wizard screen, select Manual Configuration and click Next.
The Drive Group Definition window appears.
6. In the Drives panel on the left, select the first drive, then hold down the Ctrl key and select more drives for the
array, as shown in the following figure.
Figure 6 Selecting Drives
7. Click Add To Array, click ACCEPT, and click Next.
8. On the next screen, click Add to SPAN, then click Next.
9. On the next screen, click Update Size.
10. Select Provide Shared Access on the bottom left of the window, as shown in the following figure.
Alternatively, deselect this option to create an exclusive VD as a boot volume for this cluster node.
Figure 7 Virtual Drive Definition
The Provide Shared Access option enables a shared VD that both controller nodes can access. If you uncheck this
box, the VD has a status of Exclusive, and only the controller node that created this VD can access it.
11. On this same page, click Accept, then click Next.
12. On the next page, click Next.
13. Click Yes to accept the configuration.
14. Repeat the previous steps to create the other VDs.
As the VDs are configured on the first controller node, the other controller node’s drive listing is updated to reflect
the use of the drives.
15. When prompted, click Yes to save the configuration, and click Yes to confirm that you want to initialize it.
16. Define hot spare disks for the VDs to maximize the level of data protection.
NOTE The Syncro CS 9271-8i solution supports global hot spares and
dedicated hot spares. Global hot spares are global for the cluster, not
for a controller.
17. When all VDs are configured, reboot both systems as a cluster.
3.1.2 Creating Shared or Exclusive VDs with StorCLI
StorCLI is a command-line-driven utility used to create and manage VDs. StorCLI can run from any directory on the server. The following procedure assumes that a current copy of the 64-bit StorCLI executable is located in a common directory on the server and that the commands are run with administrator privileges.
1. At the command prompt, run the following command:
storcli /c0/vall show
The c0 parameter presumes that there is only one Syncro CS 9271-8i controller in the system or that these steps
reference the first Syncro CS 9271-8i controller in a system with multiple controllers.
The following figure shows some sample configuration information that appears in response to the command.
Figure 8 Sample Configuration Information
The command generates many lines of information that scroll down in the window. You need to use some of this
information to create the shared VD.
2. Find the Device ID for the JBOD enclosure for the system and the Device IDs of the available physical drives for the
VD you will create.
In the second table in the preceding figure, the enclosure device ID of 252 appears under the heading EID, and
the device ID of 0 appears under the heading DID. Use the scroll bar to find the device IDs for the other physical
drives for the VD.
Detailed drive information, such as the drive group, capacity, and sector size, follows the device ID in the table
and is explained in the text below the table.
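If scrolling through the full output is cumbersome, StorCLI can also list just the enclosures and slots (a standard StorCLI query, offered here as an alternative way to collect the same IDs):

storcli /c0/eall/sall show

The EID:Slt column of this output provides the enclosure and drive device ID pairs used in the next step.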
3. Create the shared VD using the enclosure and drive device IDs with the following command line syntax:
storcli /c0 add vd rX drives=e:s
The HA-DAS version of StorCLI creates, by default, a shared VD that is visible to all cluster nodes.
The following notes explain the command line parameters.
- The /c0 parameter selects the first Syncro CS 9271-8i controller in the system.
- The add vd parameter configures and adds a VD (logical disk).
- The rX parameter selects the RAID level, where X is the level.
- The drives= parameter defines the list of drives for the VD. Each drive is listed in the form enclosure device ID:drive device ID (the EID and DID values from the show output).
NOTE To create a VD that is visible only to the node that created it (such as
creating a boot volume for this cluster node), add the
[ExclusiveAccess] parameter to the command line.
For more information about StorCLI command line parameters, refer to the MegaRAID SAS Software User Guide.
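As a worked example, using the enclosure device ID of 252 from the sample output and three hypothetical drives in slots 0 through 2 (substitute the IDs from your own system), a shared RAID 5 VD could be created as follows:

storcli /c0 add vd r5 drives=252:0,252:1,252:2

Per the preceding note, appending the ExclusiveAccess parameter to the same command line would instead create a VD that is visible only to the node that created it.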
3.1.3 Creating Shared or Exclusive VDs with MSM
Follow these steps to create VDs for data storage with MSM. When you create the VDs, you assign the Share Virtual
Drive property to them to make them visible from both controller nodes. This example assumes you are creating a
RAID 5 redundant VD. Modify the instructions as needed for other RAID levels.
NOTE Not all versions of MSM support HA-DAS. Check the release notes to
determine if your version of MSM supports HA-DAS. Also, see Section
5.1, Verifying HA-DAS Support in Tools and the OS Driver.
1. In the left panel of the MSM Logical pane, right-click the Syncro CS 9271-8i controller and select Create Virtual
Drive from the pop-up menu.
The Create Virtual Drive wizard appears.
2. Select the Advanced option and click Next.
3. In the next wizard screen, select RAID 5 as the RAID level, and select unconfigured drives for the VD, as shown in
the following figure.
Figure 9 Drive Group Settings
