IBM System Storage™ DS Storage Manager version 10.70.x5.25 for Sun Solaris 10 x86 operating systems.

Note: Sun Solaris host attachment to the IBM 1722-all models (DS4300), 1724-all models (DS4100), 1742-models 90U/90X (DS4500), 1814-all models (DS4200, DS4700, DS3950 and DS5020), 1815-all models (DS4800), and 1818-all models (DS5100 and DS5300) requires the additional purchase of an IBM Sun Solaris Host Kit option. The IBM DS4000/DS5000 Sun Solaris Host Kit options contain the IBM licensing required to attach a Sun Solaris host system to the DS3950, DS4100, DS4200, DS4300, DS4500, DS4700, DS4800, DS5020, DS5100, or DS5300 storage subsystems. Please contact your IBM service representative or IBM reseller for purchasing information.

Important: A problem causing recursive reboots exists when using 7.36.08 and 7.36.12 firmware on IBM System Storage DS4000 or DS5000 systems. This problem is fixed in 7.36.14.xx and later firmware. All subsystems currently using 7.36.08 or 7.36.12 firmware MUST run a file system check tool (DbFix) before and after the firmware upgrade to 7.36.14.xx or higher. Instructions for obtaining and using DbFix are contained in the 7.36.14.xx or higher firmware package. Carefully read the firmware readme and the DbFix instructions before upgrading to firmware 7.36.14.xx or higher. For subsystems with firmware level 7.36.08 or 7.36.12, avoid configuration changes until a firmware upgrade to 7.36.14.xx or higher has completed successfully. Subsystems not currently using 7.36.08 or 7.36.12 do not need to run DbFix before upgrading to 7.36.14.xx or higher; DbFix may be run after the upgrade, but it is not required. DbFix is only applicable to subsystems using 7.36.xx.xx or later firmware. If problems are experienced using DbFix, or the resulting message is "Check Failed...", DO NOT upgrade your firmware; contact IBM support before taking any further action.

IMPORTANT:
1. This storage manager software package contains non-IBM code (Open Source code). Please review and agree to the Non-IBM Licenses and Notices terms stated in the DS_Storage_Manager_Non_IBM_Licenses_and_Notices_v2.pdf file before use.
2. This is NOT the correct DS Storage Manager version 10.70 host software package for the Solaris 9 and 10 Sparc operating systems. Please refer to the DS Storage Manager version 10.70 host software package for the Solaris Sparc operating systems.

(C) Copyright International Business Machines Corporation 1999, 2009. All rights reserved. US Government Users Restricted Rights - Use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Note: Before using this information and the product it supports, read the general information in section 6.0 "Trademarks and Notices" in this document.

Refer to the IBM System Storage™ Support web site or CD for the IBM System Storage DS Storage Manager Version 10 Installation and Host Support Guide. This guide, along with the DS Storage Manager program online help, provides the installation and support information.

Last Update: 12/14/2010

Please refer to the corresponding Change History document for more information on new features and modifications.
CONTENTS
--------
1.0 Overview
2.0 Installation and Setup Instructions
3.0 Configuration Information
4.0 Unattended Mode
5.0 Web Sites and Support Phone Number
6.0 Trademarks and Notices
7.0 Disclaimer

=======================================================================
1.0 Overview
--------------

1.1 Overview
--------------
The 10.70 version of the DS Storage Manager host software is required for managing all DS3000, DS4000, and DS5000 storage models with controller firmware version 07.70.xx.xx. It is also recommended for managing models with controller firmware version 05.40.xx.xx or higher installed.

Notes:
1. There are separate IBM DS Storage Manager host software packages for the Solaris x86 and Solaris Sparc operating systems. The Solaris and Solaris_x86 directories contain the files necessary for installing the IBM DS Storage Manager version 10.70 host software. Please select the appropriate directory for your operating system environment. The host software and DS3000/DS4000/DS5000 storage subsystem firmware files are packaged separately and are available for download from the IBM System Storage™ Disk Storage Systems Technical Support web site:
   http://www.ibm.com/systems/support/storage/disk
2. The new features and changes in the IBM Storage Manager host software version 10.70 are described in the corresponding Change History document. Please refer to that document for more information on new features and modifications.
3. The latest version of the IBM System Storage DS Storage Manager Version 10 Installation and Host Support Guide is also available on IBM's Support web site as a downloadable Portable Document Format (PDF) file.

Products Supported
-----------------------------------------------------------------
| New Model | Old Model | Machine Type | Model                   |
|-----------|-----------|--------------|-------------------------|
| DS5300    | N/A       | 1818         | 53A                     |
|-----------|-----------|--------------|-------------------------|
| DS5100    | N/A       | 1818         | 51A                     |
|-----------|-----------|--------------|-------------------------|
| DS5020    | N/A       | 1814         | 20A                     |
|-----------|-----------|--------------|-------------------------|
| DS4800    | N/A       | 1815         | 82A, 82H, 84A, 84H,     |
|           |           |              | 88A, 88H, 80A, 80H      |
|-----------|-----------|--------------|-------------------------|
| DS4700    | N/A       | 1814         | 70A, 70H, 72A, 72H,     |
|           |           |              | 70T, 70S, 72T, 72S      |
|-----------|-----------|--------------|-------------------------|
| DS4500    | FAStT 900 | 1742         | 90X, 90U                |
|-----------|-----------|--------------|-------------------------|
| DS4400    | FAStT 700 | 1742         | 1RX, 1RU                |
|-----------|-----------|--------------|-------------------------|
| DS4300    | FAStT 600 | 1722         | 60X, 60U, 60J, 60K,     |
|           |           |              | 60L                     |
|-----------|-----------|--------------|-------------------------|
| DS4200    | N/A       | 1814         | 7VA, 7VH                |
|-----------|-----------|--------------|-------------------------|
| DS4100    | FAStT 100 | 1724         | 100, 1SC                |
|-----------|-----------|--------------|-------------------------|
| DS3950    | N/A       | 1814         | 94H, 98H                |
|-----------|-----------|--------------|-------------------------|
| DS3200    | N/A       | 1726         | 21X, 22X, 22T, HC2, HC6 |
|-----------|-----------|--------------|-------------------------|
| DS3300    | N/A       | 1726         | 31X, 32X, 32T, HC3, HC7 |
|-----------|-----------|--------------|-------------------------|
| DS3400    | N/A       | 1726         | 41X, 42X, 42T, HC4, HC8 |
|-----------|-----------|--------------|-------------------------|
| DS3500    | N/A       | 1746         | C2A, A2S, A2D, C4A,     |
|           |           |              | A4S, A4D                |
-----------------------------------------------------------------

ATTENTION:
1. For the DS4400 and DS4100 storage subsystems - all models (standard/dual controller and single controller) - controller firmware version 06.12.xx.xx or later must be used.
2. The DS4300 with Single Controller option (M/T 1722-6LU, 6LX, and 6LJ), FAStT200 (M/T 3542-all models), and FAStT500 (M/T 3552-all models) storage subsystems can no longer be managed by DS Storage Manager version 10.50.xx.23 and higher.
3. For the DS3x00 storage subsystems, please refer to the readme files posted on the IBM DS3000 System Storage support web site for the latest information about their usage, limitations, or configurations.
   http://www.ibm.com/systems/support/storage/disk

=======================================================================
1.2 Limitations
----------------

IMPORTANT: The listed limitations are cumulative. However, they are grouped by the DS4000/DS5000 storage subsystem controller firmware and Storage Manager host software release in which they were first seen and documented.

New limitations with Storage Manager version 10.70.xx.25 release (controller firmware 07.70.xx.xx):

1. Kernel panic reported on Linux SLES10 SP3 during controller failover. The probability of this issue happening in the field is low.
   Workaround: Avoid placing controllers online/offline frequently. Novell was informed about this issue (Novell Bugzilla NVBZ589196).
2. CHECK CONDITION 0B/4E/00 returned during ESM download on DS5000. The interposer running LP1160 firmware reports an Overlapped Command (0B/4E/00) for a read command containing an OXID that was very recently used for a previous read command on the same loop to the same AL_PA. In almost all cases, this command will be re-driven successfully by the controller, so the impact should be negligible.
   Workaround: Can be avoided by halting volume I/O prior to downloading ESM firmware.
3. If a host sees RAID volumes from the same RAID module that are discovered through different interface protocols (Fibre Channel/SAS/iSCSI), failovers will not occur properly and host I/Os will error out. If the user does not map volumes to a host through different host interfaces, this problem will not occur.
   Workaround: Place the controller online and reboot the server to see all volumes again.
4. Linux guest OS reported I/O errors during controller firmware upgrade on ESX 4.1 with QLogic HBA. The user will see I/O errors during controller activation from the guest OS because mapped devices become inaccessible and offline.
   Workaround: This issue occurs on various Linux guest OSes; to avoid it, perform an offline (no I/O to controllers) controller firmware upgrade.
5. I/O error on Linux RH 4.8 after rebooting a controller on DS3500 iSCSI. The devices will be disconnected from the host until the iSCSI sessions are re-established.
   Workaround: Restart the iSCSI service. Configure the iSCSI service to start automatically on boot.
6. Controller firmware upgrade on ESX 4.1 with SLES 11 fails with I/O errors on the SLES 11 guest partition. This issue occurs if there is a filesystem volume in the SLES11 VM. The user will see I/O errors, and filesystem volumes in SLES11 VMs will be changed to read-only mode.
   Workaround: Perform the controller firmware upgrade with either no I/O running on the SLES11 VM or no filesystem created in the SLES11 VMs.
7. "gnome-main-menu" crashes unexpectedly while installing the host software on SUSE 10.
   The crashes appear to happen randomly with other applications as well. After the crash, the menu reloads automatically. Dismiss the prompt and the host will reload the application. This problem appears to be a vendor issue.
8. I/O errors on RHEL 4.8 guests on VMware 4.1 during controller reset. VMware has suggested VMware 4.1 P01 may resolve this issue. No support was issued for RHEL 4.8 guests under VMware 4.1 over SAS.
   Workaround: Use RHEL 5.5.
9. VMware guest OS not accessible on iSCSI DS3524 (VMware SR 1544798051, VMware PR 582256). VMware has suggested VMware 4.1 P01 may resolve this issue.
   Workaround: Use Fibre Channel or SAS connectivity.
10. When DMMP is running in a BladeCenter SAS environment, I/O errors occur during failover and controller firmware operations. Support for Device Mapper with SLES 11.1 SAS has been restricted and will not be published.
   Workaround: Install RDAC.
11. SLES11.1ppc SAN boot fails on PS700 with 10 Gb QLogic Ethernet to DS3512. After configuration and installation of Linux on a LUN using software iSCSI, the JS blade will not boot into the OS.
   Workaround: Use local boot, SAS, or Fibre Channel.
12. "No response" messages from Device Mapper devices with volumes on the non-preferred path. Any I/O operations against volumes mapped to failed devices will time out or hang.
   Workaround: This problem requires the host to be rebooted in order to restart I/O successfully. Update SLES11.1 to maintenance update 20101008 containing kernel version 2.6.32.23-0.3.1. Note that this kernel version has not been fully certified by LSI and should only be used if this issue is encountered. Bugzilla #650593 contains the issue details and the fix provided by Novell.
13. LifeKeeper 7.2.0 recovery kits require multiple host ports to use SCSI reservations.
   Workaround: Use two single-port SAS HBAs. In this case each port will be represented as a host, and LifeKeeper will identify both separately. Another way to avoid the issue is to use MPP as the failover driver.
14. Unexpected "jexec" messages during MPP installation/uninstallation on SLES 11.1.
   Workaround: None - https://bugzilla.novell.com/show_bug.cgi?id=651156
15. I/O write errors on mounted LifeKeeper volumes before node resources transfer to another node.
   Recovery: I/O can be restarted as soon as node resources are transferred to another node in the cluster.
   Workaround: None.
16. Solaris 10u8 guest OS disks become inaccessible on ESX35u5 during sysReboot on the array controller. This can be avoided by not using raw device mappings and instead using virtual disks. If raw device mappings are required, they must be used with virtual compatibility mode selected when adding the disks to the guest OS.
   Workaround: None.
17. Solaris 10u8 guest OSes reported I/O errors during controller firmware upgrade on ESX35U5 with QLogic HBA. Permanent restriction.
   Workaround: Perform the upgrade with no I/O.
18. Solaris guest OS reported I/O errors during controller reset on ESX35U5. The only recovery method is to reboot the failed VM host. Permanent restriction.

New limitations with Storage Manager version 10.70.xx.10 release (controller firmware 07.70.xx.xx):

1. Storage Manager "Error 1000 - Could not communicate with the storage...." can occur when performing actions such as clearing the configuration on a large system (e.g., a DS5300 with 448 drives). The Storage Manager has a 120-second timeout and retries two more times when retrieving status; some actions may take 8 minutes or longer.
2. The Array Management Window shows 2048 volume copies allowed when only a maximum of 2047 volume copies can be created. For a given array, at least one source volume will be present for the volume copy feature; hence, the actual number of "copy relationships" is one less than the maximum number of volumes allowed.
3. An FDE drive in the "Security Locked" state is reported as "incompatible" in the drive profile. The drive can be recovered by importing the correct lock key.
4. Deselecting product components using the keyboard when installing Storage Manager still installs them. You will have to use the mouse for individual component selection.

Limitations with Storage Manager version 10.60.xx.17 release (controller firmware 07.60.xx.xx):

1. This version of the Storage Manager installation package includes the IBM Storage Manager Profiler, which is also referred to as Support Monitor in some of the IBM publications. The following system restrictions apply.
   Note: Do not use laptops or I/O-attached hosts for installing the IBM Storage Manager Profiler unless they fulfill all of the following requirements.
   1. Minimum 1 GB memory, 1.5 GB preferred.
   2. Minimum 1 GB hard drive space, 1.5 GB preferred.
   3. On average, 15 to 20 minutes installation duration.
   4. A static IP address is required.
   5. The Storage Manager Profiler is installed by default with the IBM DS Storage Manager Install Anywhere (SMIA) package when the Typical (Full Installation) selection is used. To not install the Storage Manager Profiler, select Custom and then deselect Support Monitor.
   6. A pre-existing MySQL database on the host must be manually uninstalled before you can install the Storage Manager Profiler.
   7. The Storage Manager Profiler installation includes the Apache Tomcat web server application. Any other pre-existing applications that use Apache Tomcat must be uninstalled before you install the Storage Manager Profiler.
   8. Make sure the Storage Manager Profiler directory structure is excluded from antivirus and backup applications.
   9. MS Internet Explorer and Mozilla Firefox are the only two web browsers supported with the Storage Manager Profiler client application.
   10. By default, the Storage Manager Profiler polls all storage subsystems for support data every day at 2:00 AM. Ensure that the storage subsystem is not under heavy usage during this time period. The collection schedule can be modified to start during a window of light storage subsystem usage by pointing your web browser at http://localhost:9000/ and clicking the Calendar icon to schedule the data collection frequency and time.
   11. The Storage Manager Profiler typically takes five to seven minutes to collect data, but the time can increase for larger configurations. There have been exceptional instances where storage arrays took up to 20 minutes to capture the drive information.
   12. If you have a large system, gathering support data takes longer, and the compressed files are larger. The Storage Manager Profiler compresses the collection data file to between 2 MB and 5 MB.
   13. Monitor each storage subsystem from one Storage Manager Profiler instance only. Gathering data from a storage subsystem with multiple Storage Manager Profiler instances can cause problems. No mechanism exists to prevent multiple Storage Manager Profiler instances from trying to collect data from the same storage subsystem.
   14. When multiple Storage Manager instances are installed in the environment, do not define the same storage subsystem twice in the Storage Manager Enterprise Management window.
   15. The data collection process of the Storage Manager Profiler is multi-threaded, with a polling mechanism in place to poll the maximum number of storage subsystems at pre-defined timing intervals. The polling mechanism is not sequential. For example, if the upper limit on the number of storage subsystems from which the Storage Manager Profiler can collect data is 20, and 60 storage subsystems are defined, data from 20 storage subsystems is gathered immediately, while the data from the remaining 40 storage subsystems is gathered as resources become available. The Storage Manager Profiler supports at least 10 storage subsystems.
   16. The Storage Manager Profiler does not support in-band management of storage subsystems.
   17. The Storage Manager Profiler is supported with Solaris version 10 only.
   18. The only function that the Storage Manager Profiler supports is the Support Monitor.
   19. A minimum of 06.19.xx.xx controller firmware is required to use the Storage Manager Profiler.
2. The online help for the Command Line Interface commands (SMcli commands) incorrectly shows the parameters of certain SMcli commands. The parameters shown incorrectly belong to the "drive" type of parameter, such as Drive, DriveChannels, DiskDrives, repositoryDrives, and so on. These parameters were incorrectly shown with the additional word "disk", such as diskDrive, DiskDriveChannels, repositoryDiskDrive, and so on. The SMcli command syntax checking engine will display the correct syntax for these parameters; make the modifications it suggests. You can also refer to the PDF copy of the Command Line Interface and Script Commands Programming Guide - IBM System Storage DS4200, DS4700, and DS5000 on the IBM DS support web site for help with SMcli command syntax.
3. The suggested recovery path in the mis-wired Recovery Guru event might send the user down the wrong path, or a very long path, when trying to fix the first mis-wired port. The workaround is to use the cabling diagram in the EXP5060 Installation, User's and Maintenance Guide or the DS5100 and DS5300 Installation, User's and Maintenance Guide to fix the cabling mis-wire error. The PDF versions of these publications are available on the IBM DS support web site.

New limitations with Storage Manager version 10.60.xx.11 release (controller firmware 07.60.xx.xx):

1. If port 1 of the iSCSI host interface is disconnected when the iSCSI configuration window is first opened, the status of port 1 shows as connected. If you switch to port 2 and back to port 1, the status will refresh and port 1 will show as disconnected.
2. When modifying the iSCSI host port attributes, there can be a delay of up to 3 minutes if the port is inactive. If possible, connect a cable between the iSCSI host port and a switch prior to configuring any parameters.
3. When applying the premium feature keys to enable the 64- or 112-drive attachment to the DS5020 subsystem, the "drive slot limit" premium feature entry in the Premium Features and Feature Pack Information window will not be updated to reflect the maximum number of drives that can be attached to the subsystem (either 64 or 112).
   In addition, when trying to attach more drives than the DS5020 currently allows, the "drive slot limit" premium feature entry in the Premium Features and Feature Pack Information window will be displayed with the "out of compliance" status. To view the maximum number of drives that can be attached to a DS5020, obtain the storage subsystem profile and search for the following information:
      Drive Limit Management:
        Number of drive slots discovered: 32
        Number of drive slots allowed: 112
   Note: The controllers use the number of discovered drive slots in the subsystem configuration, rather than the actual number of drives installed, to determine whether the maximum drive limit is exceeded. For example, if there are two drive expansion enclosures attached to the DS5020, the number of discovered drive slots will be 42, which exceeds the limit of 32 that the DS5020 supports as a standard feature, even if the total number of drives actually installed in the subsystem is 32 or less. In this case, the drive slot limit will be shown with the Out-of-Compliance status until the DS5020 33-64 drive attach premium feature key is applied. (XB014946)
4. The Configure iSCSI Host Port window always displays the iSCSI host port numbers with respect to the installed iSCSI ports in the subsystem only. It does not take into account the other types of host ports installed in the subsystem: its port counting starts with the first discovered iSCSI port in the subsystem instead of the first host port installed in the subsystem per the controller host port scanning order. For example, a DS5020 controller with the iSCSI port option will have the ports labeled as ports 1 and 2 per controller in the Configure iSCSI Host Port window instead of as ports 3 and 4, even though there are two FC ports installed on the controller boards. However, the MEL entries will refer to the iSCSI ports as ports 3 and 4 instead of as ports 1 and 2 as referred to in the Configure iSCSI Host Port window. Similarly, in the DS5100 and DS5300 subsystems, the iSCSI host ports in HIC slot 2 will be referred to as ports 1 and 2 in the Configure iSCSI Host Port window and as ports 5 and 6 in the MEL if the FC host card is installed in HIC slot 1. (XB014900)
5. The Premium Feature activation confirmation window for the Drive Security (FDE) feature activation key will show the feature as "Unknown capacity". Click OK to confirm that you want to go ahead with the enablement of the Drive Security (FDE) feature.
6. When performing a Refresh DHCP operation in the Configure iSCSI Port window, if the iSCSI port is unable to contact the DHCP server, the following inconsistent informational MEL events can be reported:
   - MEL 1810 DHCP failure
   - MEL 1807 IP address failure
   - MEL 1811 DHCP success
   For this error to occur, the following specific conditions and sequence must have been met:
   - Before enabling DHCP, static addressing was used on the port, and that static address is still valid on the network.
   - The port was able to contact the DHCP server and acquire an address.
   - Contact with the DHCP server is lost.
   - The user performs the Refresh DHCP operation from the DS Storage Manager client.
   - Contact with the DHCP server does not come back during the DHCP refresh operation, which times out.
   Check the network connection to your iSCSI port and the status of the DHCP server before attempting the Refresh DHCP operation again.

New limitations with Storage Manager Upgrade Utility package version 10.60.xx.05 release (controller firmware 07.60.xx.xx):
1. Sun Solaris Sparc 9 support is for DS4000 storage subsystems only.
2. With Veritas DMP, mirroring a volume by using option 6 of the vxdiskadm command fails if the device discovery layer chooses a secondary path to a device in an A/PF array. Mirror it from the primary controller. This can prevent installing a SAN boot disk with the MPxIO failover driver enabled to allow booting from either controller A or controller B.
3. Unable to SAN boot from an LP11002 PCI-X card on x86 servers with BIOS version 2.01a2.

New limitations with Storage Manager Upgrade Utility package version 10.50.xx.23 release (controller firmware 07.50.xx.xx):

1. The Storage Manager client will not display the encryption-capable drives correctly whenever the user tries to download NVSRAM. This issue can be resolved by manually resizing the window.
2. The Storage Manager online help contains inaccurate information about locking encryption-capable drives. The current online help states "Use this command to set the new security key to lock all of the Encryption capable drives." The online help text should state "Use this command to set the security key that is used throughout the storage subsystem to implement the Disk Encryption premium feature. When any security-capable drive in the storage subsystem is assigned to a secured array, that drive will be security-enabled using the security key."
3. Storage Manager re-key operations can fail when there are encryption-capable drives present in the system that are spun down. When drives are marked for export in a storage subsystem, the drives are spun down. Attempting to generate a lock key with spun-down drives will fail. Remove the exported drives from the subsystem and retry the operation.
4. When using the SMcli to create an encryption drive security key, the SMcli command will not save the security key in a default location and a separate location at the same time. The new security key is saved in the location the SMcli was executed from if the command does not include a file location with a fully qualified path.
5. The Storage Manager controller firmware upgrade status message can be misleading. The Storage Manager controller firmware upgrade screen will report that a controller is running an unsupported firmware version if the subsystem is unresponsive at the time of the upgrade.

New limitations with Storage Manager Upgrade Utility package version 10.50.xx.19 release (controller firmware 07.50.xx.xx):

1. DS Storage Manager version 10.50.xx.19 will not manage DS4000 storage subsystems with controller firmware levels prior to v5.40.xx.xx.
2. Start of day (a controller reboot) can take a very long time, up to 20 minutes, after a subsystem clear configuration on a DS5000 subsystem. This occurs on very large configurations, now that the DS5000 can support up to 448 drives, and can be exacerbated if there are SATA drives.
3. Veritas Cluster Server node failure when fast fail is enabled. It is recommended to set the dmp_fast_fail HBA flag to OFF. The frequency of this causing a node failure is low unless the storage subsystem is experiencing repeated controller failover conditions.
4. At times the link is not restored when inserting a drive-side cable into a DS5000 controller, and a data rate mismatch occurs. Try reseating the cable to clear the condition.
5. Due to a timing issue with controller firmware, using SMcli or the script engine to create LUNs and set LUN attributes will sometimes end with a script error.
6. Mapping host port identifiers to a host via the Script Editor hangs, and fails via the CLI with "...
error code 1". This occurs when using the CLI/script engine to create an initial host port mapping.
7. Under certain high-stress conditions, a controller firmware upgrade will fail when volumes do not get transferred to the alternate controller quickly enough. Upgrade controller firmware during maintenance windows or under low-stress I/O conditions.
8. After a controller reboot, unmapped volumes are not on the preferred path. The Solaris hosts have a 20-second timeout for port retries. A controller will not always initialize the host ports within that time frame, causing the Solaris host to initiate a mode select to force a transfer of LUNs. When the controller comes back up, the mapped LUNs will be transferred back, but not the unmapped LUNs.
9. Sun bug CR6751110 - On Solaris 10 U5 or U6, devices larger than or equal to 2 TB require descriptor sense data in order to store the LBA in the "information" field. As a result, an I/O write error can occur on LUNs of this size during controller failover.
10. Sun bug CR6710177 - On Solaris 10 U6, a host can lose its connection with a controller when the controller reboots. The host never logs back into the controller when it comes back online.
11. Sun bug CR6727518 - A SCSI command timeout leads to a write I/O error on Solaris 10 with MPxIO and 8G Emulex adapters. The mitigation is to set adisc-support=0 in /kernel/drv/emlxs.conf when using Emulex HBAs.

New limitations with Storage Manager Upgrade Utility package version 10.36.xx.13 release (controller firmware 07.36.xx.xx):

1. None.

New limitations with Storage Manager Upgrade Utility package version 10.36.xx.07b release (controller firmware 07.36.xx.xx):

1. Upgrade of a DS4000 to 7.xx.xx.xx using the upgrade tool with non-English OSes will end with an error. The current firmware column still shows the previous version; the pending version will show 7.xx.xx.xx instead of 'none'. Checking the upgrade tool log, you will find:
      [12.11.2008 08:51:16] [rey-ds4700-1] [SUPPORT_SERVICES] [DownloadAndActivate] activation failed
   There are two ways to check the content of this log:
   1. As long as the upgrade tool is still open: use the 'View log' button.
   2. If the upgrade tool was closed already: for Windows, all logs can be found in the folder
      C:\Program Files\IBM_DS4000FirmwareUpgrade\client
      For other operating systems, check the appropriate folder. The syntax of the file name is similar to 20081114_1501.log.
   Then check the profile. Here the new firmware can be found as pending:
      Current configuration:
        Firmware version: 06.60.17.00
        NVSRAM version: N1814D470R916V17.dlp
      Pending configuration:
        Staged firmware download supported: Yes
        Firmware version: 07.36.12.00
        NVSRAM version: N1814D47R1036V12.dlp
   This shows that the firmware was loaded to the controller but not yet activated.
   Workaround: After checking the items listed in the Details section, activate the loaded firmware using this script command:
      activate storageSubsystem firmware;
   Mark the affected system in the Enterprise Management window of the Storage Manager. In the menu, go to Tools -> Execute Script. A new window will appear. Paste the command in the upper part of this window and choose Tools -> Execute Only. The activation will take a while, and the controller will reboot.

New limitations with Storage Manager Installer (SMIA) package version 10.36.xx.07 release:

1. When you use the "show-children" command, the Solaris host with QLogic host bus adapters (HBAs) does not see any logical unit numbers (LUNs) listed at the OK prompt.
   This problem occurs when there are Solaris hosts running QLogic HBAs and Emulex HBAs on the same fabric (or zone). Rezone the switch so that the QLogic HBAs and the Emulex HBAs do not share the same zone and are not on the same fabric.
2. Unable to add the Solaris operating system to the SMI-S client because the Solaris Management Console (SMC) application and the SMI-S Provider are trying to use the same system ports.
   - If you do not use the SMC application, type the /etc/init.d/init.wbem stop command to stop the SMC service. Restart the SMI-S Provider, and the IBM TotalStorage Productivity Center (TPC) application will discover the CIMOM agent.
   - If you need to use the SMC application, modify the ports in the /opt/engenio/SMI_SProvider/bin/portInfo.properties file. Restart the SMI-S Provider. In the IBM TotalStorage Productivity Center (TPC) application, use the new port number to discover the CIMOM agent.

New limitations with Storage Manager Installer (SMIA) package version 10.30.xx.09 release:

1. Failover does not occur when the Fibre Channel cable is disconnected. After the Fibre Channel cable is disconnected and then connected again, the host tries to send a Mode Sense command to the controller of the volume. This command starts automatic failover, but the automatic failover fails. Apply the MPxIO binary fix, Sun bug ID 6656523, for the Solaris operating system.

New limitations with Storage Manager Installer (SMIA) package version 10.15.xx.09 release:

1. Reconfiguration operations (DRM) are delayed under some circumstances. When a drive fails before the DRM completes, it can take up to eight times as long for the DRM to complete. DRM reconfigurations to RAID 6 have the longest impact, since there are four times as many calculations and writes that have to occur compared to other RAID levels.
2. The host software display of the controller current rate is wrong below 4 Gbps. In the host software client (AMW - Logical/Physical View - Controller Properties - Host Interfaces), the current rate displays "Not Available" when the controller negotiated speed is 2 Gbps. When the controller negotiated speed is reduced to 1 Gbps, the current rate displays "2Gbps".

New limitations with Storage Manager Installer (SMIA) package version 10.10.xx.06 release:

1. The controller alarm bell icon does not appear as a flashing icon indicator on the screen to get the user's attention, but the icon does change appearance.
2. Miswiring of the drive tray cabling with DS4700 and DS4200 can cause continuous reboots of a controller. To correct this situation, power down the subsystem, cable the drive trays correctly, and power the subsystem back up.
3. The controller button may appear enabled, misleading the user into thinking a controller is selected when, in fact, no controller was highlighted for the button to appear ready.
4. The search key is not marked correctly on the page due to a JavaHelp bug in JavaHelp 2.0_01. A search for the keyword "profile" ended with the phrase "prese 'nting p' rofile" being marked.
5. storageArrayProfile.txt should be renamed storageSubsystemProfile.txt in Support Data.
6. Bullets are incorrectly placed in the Volume Modification help page.
7. Unable to Escape out of the help display. The user is required to close the window by using the window close procedure (exit, etc.).
8. Bullets and descriptions are not aligned on the same line in "Viewing mirror properties".
9. The Help window is not refreshed properly when using the AMW Help window. The workaround is to close and reopen the Storage Manager client.
10. CLI command failure during creation of volume(s) if the capacity parameter syntax is not specified correctly. A space must be used between the integer value and the units in the capacity option of this command, for example "create volume volumeGroup[4] capacity=15 GB ...".
11. The customer will see high ITW counts displayed in the GUI (RLS feature) and in logs (files) for diagnostics (support bundle, DDC, etc.) and may be concerned that there is a problem. This will not cause a critical MEL event. This is a known problem, previously restricted: when a DS4700 or DS4200 controller reboots, these counters increment.
12. A single tray power cycle during I/O activity causes drives in the tray to become failed. The customer may see the loss of a drive (failed) due to a timing issue between drive detection and spin-up of the drive. One of two conditions results on power-up:
   - (Most likely) The drive will be marked as optimal/missing with the piece failed, or
   - (Rarely) The drive will be marked as failed with the piece failed.
   The workaround is to unfail (revive) the drive, which restarts reconstruction of all pieces.
13. Event log critical event 6402 was reported after creating 64 mirror relations. Eventually, the mirror state transitions to synchronizing and proceeds to completion on mirror creation. The workaround is to ignore the MEL logging, since this occurs on creation of mirror volumes.
14. Reconfiguration operations during host I/O may result in I/O errors when arrays contain more than 32 LUNs. These operations include Dynamic Capacity Expansion, Defragmentation, Dynamic Volume Expansion, and Dynamic RAID Migration. The workaround is to quiesce host I/O activity during reconfiguration.
15. Heavy I/O to a narrow volume group of SATA drives can result in host I/O timeouts. A narrow volume group refers to an array built of very few drives, namely 1-drive RAID 0, 1x1 RAID 1, and 2+1 RAID 5. The workaround is to build arrays of SATA drives out of 4+1 drives or greater.
16. When managing the storage subsystem in-band, the upgrade utility will show the upgrade as failed. This is because of the update and reboot of the controllers when activating the new firmware. The SMagent is not dynamic and will need to be restarted to reconnect to the storage subsystem.
17. Selecting and dragging text within the storage profile window causes the window to be continuously refreshed. The workaround is to select and copy; do not drag the text.
18. When configuring alerts through the Task Assistant, the window stays open after selecting OK. The window only closes when the Cancel button is selected.
19. The Performance Monitor displays error messages when the storage subsystem is experiencing exception conditions. The Performance Monitor has a lower execution priority within the controller firmware than responding to system I/O and can experience internal timeouts under these conditions.
20. A critical MEL event (6402 - Data on mirrored pair unsynchronized) can occur under certain circumstances with synchronous RVM. The most likely scenario is when both primary and secondary are in a remote mirror relationship and an error occurs with access to that host. Resynchronization should occur automatically when automatic resynchronization is selected for a mirror relationship. However, if any of the host sites goes down during this interval, recovery by the user is required.
21. A persistent miswire condition is erroneously reported through the Recovery Guru even though the subsystem is properly wired. The frequency of occurrence is low and is associated with an ESM firmware download or other reboot of the ESM.
   The ESM that is reporting the problem must be reseated to eliminate the false reporting. Not all miswire conditions are erroneous; they must be evaluated to determine the nature of the error.
22. Drive path loss of redundancy has been reported during ESM download. This occurs when a drive port is bypassed. In some instances this is persistent until the drive is reconstructed. In other cases it can be recovered through an ESM reboot (second ESM download, ESM pull and replace).
23. Unexpected drive states have been observed during power cycle testing due to internal controller firmware contention when flushing MEL events to disk. The drives have been observed as reconstructing or replaced when they should have been reported as failed. Also, volume groups have been reported degraded when all drives were assigned and optimal. An indication of this situation is when drive reconstruction has not completed in the expected amount of time and does not appear to be making any progress. The workaround is to reboot the controller owning the volume where the reconstruction has stalled.
24. Sometimes when an ESM is inserted, a drive's fault line is asserted briefly. The fault line almost immediately returns to inactive, but the ESMs may bypass the drive. In these circumstances, the administrator will have to reconstruct the failed drive.
25. After a drive fails, a manually initiated copyback to a global hot spare may also fail. The workaround is to remove the failed drive and reinsert it; the copyback should then resume and complete successfully.
26. When an erroneous miswire condition occurs (as mentioned in item 21 above), the Recovery Guru reports the miswire on one controller but not on the other. In this situation, ignore the other controller and use the information supplied by the controller reporting the problem.
27. Occasionally a controller firmware upgrade to 07.10 will unexpectedly reboot a controller an extra time. This could generate a Diagnostic Data Capture; however, the firmware upgrade is always successful.
28. When managing previous releases of firmware (06.19 and prior), "working" gets displayed as "worki" during volume creation.
29. The Performance Monitor error window does not come to the front. You must minimize all other foreground windows to get to the error popup window.
30. When a disk array is in a degraded state, the array will report "needs attention" to both the EMW and the AMW. After taking the appropriate corrective action, the AMW view of the array will report "fixing" but the EMW state remains at "needs attention". Both statuses are valid; when the fault state is resolved, both views will change to "optimal".
31. Solaris 8, 9, and 10 running MPxIO may report an I/O error with "device busy too long" during controller failovers. The workaround is to increase the I/O timeout values in the sd and ssd drivers to 2 minutes and increase the Not Ready timer to 5 minutes, as sketched below.
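   As an illustrative sketch of the sd/ssd timeout portion of that workaround (the tunable names sd_io_time and ssd_io_time assume the standard Solaris sd/ssd drivers; the Not Ready timer adjustment is HBA/driver-specific and is not shown), the following lines can be added to /etc/system, followed by a reboot:

      * Raise the sd/ssd I/O timeout from the default to 120 seconds (2 minutes)
      set sd:sd_io_time=120
      set ssd:ssd_io_time=120

   Verify the tunable names against your Solaris release documentation before applying them.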
32. Host I/O errors are reported by Solaris 8, 9, and 10 (reported to Sun as SB 6426973). The fix for this is targeted for Solaris 10 update 5.
33. Dynamic Multipathing (DMP) Fast Recovery I/O failure analysis may cause an unresponsive system or possible data loss on Solaris systems due to an interoperability issue between QLogic 2Gb and 4Gb HBAs and Veritas Volume Manager versions 4.1 MP1 with 122059-02, 4.1 MP2, 5.0, or 5.0 MP1. The detailed problem description and patch can be found at http://support.veritas.com/docs/292445.
34. Data corruption on reads with Solaris 8, 9, and 10 with the latest Emulex HBA drivers released with the Solaris OSes (Sun bug 6611815). Observed with the Solaris 10 driver 2.20k and the Solaris 8/9 driver SAN 4.4.13. Please use the following drivers: Solaris 10 patch 120222-16 (2.12d), Solaris 9 SAN 4.4.12 (patch 119914-11, 1.12b), and Solaris 8 SAN 4.4.12 (patch 119913-11, 1.12b).
35. Configuring separate email alerts when two Enterprise Management windows are open on the same host will cause the alerts to disappear if one of the Enterprise Management windows is shut down and then restarted. If multiple Enterprise Management windows need to be open, open them on separate hosts; this will allow the alert configuration to be saved if one of the Enterprise Management windows is shut down and restarted.

Legacy restrictions that are still applicable:

1. Reflected Fibre Channel (FC) OPN frames occur when intermixing EXP810, EXP710, and EXP100 enclosures behind DS4700 or DS4800 storage subsystems. This behavior causes excessive drive-side timeout, drive-side link down, and drive-side link up events to be posted in the DS4000 storage subsystem event log (MEL). It might also cause drives to be bypassed or failed by the controller. A NEW DRIVE-SIDE FC CABLING REQUIREMENT MUST BE ADHERED TO WHEN EXP100s ARE CONNECTED TO THE DS4700 OR DS4800 STORAGE SUBSYSTEMS. Please refer to the latest version of the Installation, User's and Maintenance Guide for these storage subsystems, posted on the IBM DS4000 Support web site, for more information.
   http://www.ibm.com/systems/support/storage/disk
2. Cannot increase the capacity of RAID arrays. RAID arrays with certain combinations of selected segment size and number of drives will exceed the available working space in the controller dacstore, causing a reconfiguration request (such as expanding the capacity of the array) to be denied. These combinations generally involve the largest segment size (512 KB) with 15 or more drives in the array. There is no workaround. C324144 105008
3. An interoperability problem between the Tachyon DX2 chip in the DS4500 and DS4300 storage subsystem controllers and the Emulex SOC 422 chip in the EXP810 expansion enclosure ESMs causes up to 5 Fibre Channel loop-type errors to be posted in the DS4000 storage subsystem Major Event Log during a 24-hour period. There is a small window in the SOC 422 chip in which multiple devices can be opened at one time. This ultimately leads to Fibre Channel loop errors of Fibre Channel link up/down, Drive returned CHECK CONDITION, and Timeout on drive side of controller. IBM recommends the use of the Read Link Status function to monitor the drive loop for any problems. There is no workaround.
4. The single digit of the enclosure IDs for all enclosures (including the DS4000 storage subsystem with internal drive slots) in a given redundant drive loop/channel pair must be unique. For example, with four enclosures attached to the DS4300, the correct enclosure ID settings should be x1, x2, x3, and x4 (where x can be any digit that can be set). Examples of incorrect settings would be 11, 21, 31, and 41 or 12, 22, 32, and 62. These examples are incorrect because the x1 digits are the same in all enclosure IDs (either 1 or 2).
   If you do not set the single digit of the enclosure IDs to be unique among enclosures in a redundant drive loop/channel pair, drive loop/channel errors might be randomly posted in the DS4000 subsystem Major Event Log (MEL), especially in cases where DS4300 storage subsystems are connected to EXP810s and EXP100s. In addition, enclosure IDs with the same single digit in a redundant drive loop/channel pair will cause the DS4000 subsystem controller to assign soft AL_PA addresses to devices in the redundant drive loop/channel pair. The problem with soft AL_PA addressing is that AL_PA address assignments can change between LIPs. This possibility increases the difficulty of troubleshooting drive loop problems, because it is difficult to ascertain whether the same device with a different address or a different device might be causing a problem.
5. In DS4000 storage subsystem configurations with controller firmware 6.15.2x.xx and higher installed, the performance of write-intensive workloads, such as sequential tape restores to DS4000 logical drives with large I/O request sizes (e.g., 256 KB), is degraded if the DS4000 logical drives are created with small segment sizes such as 8 KB or 16 KB. The workaround is to create the DS4000 logical drives with a segment size of 64 KB or larger.
6. Do not pull or insert drives during a drive firmware download. In addition, ALL I/O must be stopped during the drive firmware download. Otherwise, drives may be shown as missing, unavailable, or failed.
7. Do not perform other storage management tasks, such as creating or deleting logical drives, reconstructing arrays, and so on, while downloading the DS4000 storage subsystem controller firmware or DS4000 EXP ESM firmware. It is recommended that you close all storage management sessions (other than the session that you use to upgrade the firmware) to the DS4000 storage subsystem that you plan to update.
8. IBM DS4000 Storage Manager RDAC is for Solaris 8 and 9 only.
9. Uninstalling the storage manager host software package might leave the Java JRE directory behind. This problem might occur if you uninstall a copy of the Storage Manager client software that has been upgraded from a previous release. The newer installation package cannot delete the JRE directory created by an older installation package. The workaround is to manually delete any directories left behind after uninstalling the software.

=======================================================================
1.3 Enhancements
-----------------

Host type VMWARE has been added to NVSRAM as an additional host type:
- DS4200 and DS4700 will use index 21.
- All other supported systems will use index 16.
Although not required, if you are using a Linux host type for a VMware host, it is recommended to move to the VMWARE host type, since upgrading controller firmware and NVSRAM would otherwise continue to require running scripts, whereas the VMWARE host type does not require running scripts.

The DS Storage Manager version 10.70.xx.10 (and 10.70.xx.25) host software, in conjunction with controller firmware version 7.70.23.xx and higher, provides:
- Support for SSD drives in the DS5020 storage subsystem.
- A storage subsystem password requirement for all subsystems running 7.70.xx.xx controller firmware.

Please refer to the New Features section of the IBM System Storage DS Storage Manager Version 10 Installation and Host Support Guide for additional information about the IBM DS Storage Manager version 10 enhancements.
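For reference, an existing host definition can be moved to the VMWARE host type described above with the SMcli "set host" script command. A minimal sketch follows; the host name "vmhost1" is hypothetical, and the index value must match the host type list shown in your subsystem profile:

   set host ["vmhost1"] hostType=21;

The command can be pasted into the Script Editor (Enterprise Management window, Tools -> Execute Script) or passed to SMcli with the -c option.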
=======================================================================
1.4 Level Recommendation and Prerequisites for the update
-----------------------------------------------------------

Note: The new features and changes in the IBM Storage Manager host software version 10.70 for the Solaris OS are described in the corresponding Change History document. Please refer to that document for more information on new features and modifications.

This release of the IBM DS Storage Manager host software for Sun Solaris was tested on the following Sun servers and operating system configurations.

The minimum requirement is a Sun server with PCI and/or S-Bus architecture with:
1. 1 GB of system memory.
2. A 480 MHz processor.
3. An Ethernet network interface card.
4. A CD-ROM drive.
5. A mouse or similar pointing device.

For the SMclient, at least 900 MB of available space on /opt and root-level (superuser) permission are required for installation.

Ensure the storage management station, or the host acting as a storage management station, is running one of the following operating systems:
1. Solaris 10.0 x86
2. The DS4000 Storage Manager version 9.14 and higher host software installer wizard requires a graphics adapter installed in the Sun Solaris server in order to run. For servers without a graphics adapter, you can execute the installer via sh -i console, or use the individual host software installation packages provided in the SM10.70_Solaris_x86_Single-10.70.x5.25.tgz file.
3. Required installation order for Storage Manager 10.70.xx.xx and controller firmware 07.70.xx.xx:
   1. SMruntime - always first
   2. SMesm - required by the client
   3. SMclient
   4. SMagent
   5. SMutil
      Note: Steps 1-5 are done by the SMIA installer if the installation is performed using the SMIA installer.
   6. Controller firmware and NVSRAM
   7. ESM firmware
   8. Drive firmware

The Sun Solaris multipath driver support is as follows:
1. Sun Solaris 10 with MPxIO or Veritas DMP.
In addition, for DS4000/DS5000 configurations with Sun Solaris servers using Veritas Volume Manager (DMP) as the multipath driver, you have to obtain the correct ASL file for the DS4000/DS5000 storage subsystems using the following Veritas support URL:
   http://seer.support.veritas.com/docs/275752.htm

Code levels at time of release are as follows
-----------------------------------------------
The version of the host software installer wizard for this release is SMIA-SOLX86-10.70.05.25.bin.

Note: The web-download Storage Manager version 10.70 host software package must first be unpacked (tar -xvz) into a user-defined directory. Then go to this directory and locate the Solaris directory to access the Storage Manager version 10.70 host software installation file(s).

Starting with DS4000 Storage Manager version 9.12, all of the host software packages are included in a single Storage Manager host software installer wizard. During the execution of the wizard, the user can choose to install all or only certain software packages, depending on the needs of a given server. There must be at least 990 MB of free space in the /opt directory for the installer wizard to install the host software packages.

* The DS3000/DS4000/DS5000 Storage Manager host software installer wizard requires a graphics adapter installed in the Sun Solaris x86 server in order to run. If you wish to use the wizard but do not have a graphics adapter, you can execute the installer via sh -i console to run in console mode.
For Solaris x86 servers without a graphics adapter, individual host software installation packages are also provided in the SM10.70_Solaris_x86_Single-10.70.x5.25.tgz file.

This installer wizard will install the following versions of the host software packages:
   SMruntime:       10.70.05.00
   SMclient:        10.70.G5.25
   SMagent:         10.01.05.03
   SMutil:          10.00.05.13
   SMesm:           10.70.G5.07
   Support Monitor: 04.94.G5.01

Refer to the IBM System Storage™ Interoperation Center (SSIC) web site - http://www.ibm.com/systems/support/storage/ssic/ - for information on the latest supported switches and HBA released code levels.

=======================================================================
1.5 Dependencies
-----------------

ATTENTION:
1. The IBM System Storage DS4000 Controller Firmware Upgrade Tool is required to upgrade any system from 6.xx controller firmware to 7.xx.xx.xx controller firmware. This tool has been integrated into the Enterprise Management window of the DS Storage Manager v10.70 client.
2. Always check the README files (especially the Dependencies section) that are packaged together with the firmware files for any required minimum firmware levels and the firmware download sequence for the DS4000/DS5000 drive expansion enclosure ESM, the DS4000/DS5000 storage subsystem controller, and the hard drive firmware.

IBM DS Storage Manager version 10.70 host software requires the DS4000/DS5000 storage subsystem controller firmware to be at version 05.40.xx.xx or higher. The IBM DS4000 Storage Manager v9.60 supports storage subsystems with controller firmware versions 04.xx.xx.xx up to 05.2x.xx.xx. The IBM DS Storage Manager v10.36 supports storage subsystems with controller firmware versions 05.3x.xx.xx to 07.36.xx.xx.

=======================================================================
2.0 Installation and Setup Instructions
-----------------------------------------

Note: The web-download Storage Manager version 10.70 host software package must first be unpacked (tar -xvz) into a user-defined directory. Then go to this directory and locate the Solaris directory to access the Storage Manager version 10.70 host software installation file(s).

2.1 Step-by-step instructions for this code update are as follows
-------------------------------------------------------------------

1. Install and update the driver for the IBM DS4000/DS5000 FC host bus adapter.
   a. Install the hardware by using the instructions that come with the adapter.
   b. Install the Fibre Channel host bus adapter driver by using the instructions provided with the adapter.
2. Install the new Storage Manager host software. If a previous version (7.x, 8.x, or 9.1x) of the IBM DS4000 Storage Manager host software (i.e., the SMruntime, SMclient, RDAC, SMutil, and SMagent packages) is installed in the system, you have to uninstall it first before installing the new version of the Storage Manager host software. Refer to the IBM System Storage DS Storage Manager Version 10 Installation and Host Support Guide for detailed installation instructions.

Refer to the IBM System Storage™ Disk Storage Systems Technical Support web site - http://www.ibm.com/systems/support/storage/disk - for the latest DS Storage Manager host software, DS4000/DS5000 controller firmware, drive expansion enclosure ESM firmware, and hard disk drive code.
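As a sketch of the unpack-and-install flow in step 2 (the working directory and the downloaded package file name below are illustrative placeholders - substitute the actual name of the package you downloaded; the installer file name is from section 1.4):

   # unpack the web-download package into a user-defined directory
   mkdir /var/tmp/sm1070
   cd /var/tmp/sm1070
   gzip -dc SM10.70_Solaris_x86.tgz | tar -xvf -
   # locate the Solaris directory and run the SMIA installer wizard;
   # use -i console on servers without a graphics adapter
   cd Solaris
   sh SMIA-SOLX86-10.70.05.25.bin -i console

The exact directory layout inside the package may differ; locate the Solaris directory as described in the note above.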
=======================================================================
2.2 Helpful Hints
------------------

1. The DS4500 and DS4300 storage subsystems have updated recommended drive-side cabling instructions with the release of the 06.23 controller firmware. The DS4500 instructions are documented in the IBM System Storage DS4500 Installation, User's, and Maintenance Guide (GC27-2051-00 or IBM P/N 42D3302). The DS4300 instructions are documented in the IBM System Storage DS4300 Installation, User's, and Maintenance Guide (GC26-7722-02 or IBM P/N 42D3300). Please follow the cabling instructions in these publications to cable new DS4500 and DS4300 setups. If you have an existing DS4500 setup with four drive-side minihubs installed that was cabled according to the previously recommended cabling instructions, please schedule downtime as soon as possible to make changes to the drive-side FC cabling. Refer to the IBM System Storage DS4500 and DS4300 Installation, User's, and Maintenance Guides for more information.
2. For partitioning in clusters, the partition must be done at the host group level, not at the host level. This allows all cluster hosts to see the same storage.
3. Do not delete the Access LUN or Access Volume. The Access LUN is required by the SMclient to communicate with the storage controllers when using the in-band management method.
4. Depending on the storage subsystem that you have purchased, you may have to purchase the storage partitioning premium feature option or an option to upgrade the number of supported partitions in a storage subsystem. Please see your IBM marketing representative or IBM reseller for more information.
   - IBM DS4100 storage subsystem (machine type 1724): The standard configuration does not have the storage partitioning premium feature enabled. Four-partition, eight-partition, and sixteen-partition options, as well as an upgrade from four to eight partitions, can be purchased.
   - IBM DS4200 storage subsystem (machine type 1814): Depending on the ordering channel, the DS4200 storage subsystem is shipped either with 2 storage partitions or with a user-selected 2, 4, 8, 16, or 64 storage partitions. If the DS4200 storage subsystem is ordered with fewer than 64 partitions, various storage partition feature upgrade options can be purchased to increase the maximum number of supported storage partitions.
   - IBM DS4300 storage subsystem (machine type 1722): The standard configuration does not have the storage partitioning premium feature enabled. Four-partition, eight-partition, and sixteen-partition options, as well as an upgrade from four to eight partitions, can be purchased. The IBM DS4300 Turbo option comes with the 8-partition storage partitioning premium option installed. An upgrade to 64 partitions can be purchased for the IBM DS4300 Turbo option.
   - IBM FAStT200, FAStT500, and DS4400 storage subsystems (machine types 3542, 3552, and 1742 - models 1RU and 1RX, respectively): The standard configuration has the 64-partition storage partitioning premium option installed. No additional storage partitioning premium feature options can be purchased for these storage subsystems.
   - IBM DS4500 storage subsystem (machine type 1742 - models 90X and 90U): The standard configuration has the 16-partition storage partitioning premium option installed. An upgrade from 16 to 64 partitions can be purchased.
- IBM DS4700 storage subsystem (machine type 1814): Depending on the ordering channel, the DS4700-model 70 storage subsystem is shipped either with 2 storage partitions or with a user-selected 2, 4, 8, 16, 32, 64, or 128 storage partitions. Similarly, the DS4700-model 72 storage subsystem is shipped either with 8 storage partitions or with a user-selected 8, 16, 32, 64, or 128 storage partitions. If the DS4700 storage subsystem is ordered with fewer than 128 partitions, various storage partition feature upgrade options can be purchased to increase the maximum number of supported storage partitions.
- IBM DS4800 storage subsystem (machine type 1815): Depending on the ordering channel, the DS4800 storage subsystem is shipped either with 8 storage partitions or with a user-selected 8, 16, 32, 64, 128, 256, or 512 storage partitions. If the subsystem is ordered with fewer than 512 partitions, an upgrade from 8 to 16, 8 to 32, 8 to 64, 16 to 32, 16 to 64, 32 to 64, 32 to 128, 64 to 128, 64 to 256, 128 to 256, 128 to 512, or 256 to 512 partitions can be purchased as a storage partitioning premium feature option.
- IBM DS5300 and DS5100 storage subsystems (machine type 1818): The DS5300 and DS5100 storage subsystems are shipped with a user-selected 8, 16, 32, 64, 128, 256, or 512 storage partitions. If the subsystem is ordered with fewer than 512 partitions, various storage partition feature upgrade options can be purchased to increase the maximum number of supported storage partitions.
5. Fabric topology zoning requirement with AIX fcp_array (RDAC) and Solaris RDAC only. To avoid possible problems at the host level, it is best practice to zone all Fibre Channel (FC) switches so that a single FC host bus adapter can access only one controller per storage array. This zoning requirement also ensures that the maximum number of host connections can be seen by, and log into, the controller FC host ports. This is because if an FC HBA port is seen by both controller A and controller B host ports, it is counted as two host connections to the storage subsystem - one for the controller A port and one for the controller B port.
Note: The DS4000 storage subsystems DS4500, DS4400, and FAStT500 (IBM machine types 1742 and 3552) have two ports per controller - one per minihub slot. The DS4000 storage subsystems DS4300 (IBM machine type 1722) and DS4100 (IBM machine type 1724) have two ports per controller. The DS4000 storage server FAStT200 (IBM machine type 3542) has only one port per controller. The DS4700 storage subsystem (IBM machine type 1814) has up to four ports per controller. The DS4800 storage subsystem (IBM machine type 1815) has four ports per controller.
6. Subsystems using dynamic multipathing (DMP) for the failover driver need the target and LUN definitions to be created manually in /kernel/drv/sd.conf. (Example entries are shown after hint 14 below.)
7. After installing the SMibmasl package, the Solaris host needs a reconfiguration reboot (reboot -- -r).
8. To improve the reboot time of a Sun Fire 4800 server or Sun servers with Solaris 9 installed, remove all ghost targets and LUNs from the /kernel/drv/sd.conf file. Failure to remove ghost targets and LUNs may increase the boot time of the Sun servers.
9. Target volumes for the VolumeCopy premium feature need their volume permissions changed. By default, a VolumeCopy target is set to read-only. The target volume permission for the VolumeCopy must be changed to read-write, as sketched below.
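The following is a hedged sketch of the permission change described in hint 9, entered in the Storage Manager client script window. The logical drive names are hypothetical, and the exact syntax should be verified against the script command reference for your controller firmware level:

   // Allow host writes to the VolumeCopy target logical drive
   // ("TargetVol1" and "SourceVol1" are example names)
   set volumeCopy target ["TargetVol1"] source ["SourceVol1"]
       targetReadOnlyEnabled=FALSE;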
10. The DS4000 controller host ports or the Fibre Channel HBA ports cannot be connected to Cisco FC switch ports that have "trunking" enabled. You might encounter failover and failback problems if you do not change the Cisco FC switch ports to "non-trunking" using the following procedure:
a. Launch the Cisco FC switch Device Manager GUI.
b. Select one or more ports with a single click.
c. Right-click the port(s) and select Configure; a new window opens.
d. Select the "Trunk Config" tab in this window; a new window opens.
e. In this window, under Admin, select the "non-trunk" radio button (it is set to auto by default).
f. Refresh the entire fabric.
11. When making serial connections to the DS4000 storage controller, the recommended baud rate is either 38400 or 57600.
Note: Do not make any connections to the DS4000 storage subsystem serial ports unless instructed to by IBM Support. Incorrect use of the serial port might result in loss of the configuration and, possibly, of data.
12. All enclosures (including DS4000 storage subsystems with internal drive slots) on any given drive loop/channel should have completely unique IDs, especially in the single-digit (x1) portion of the ID. For example, in a maximally configured DS4500 storage subsystem, enclosures on one redundant drive loop should be assigned IDs 10-17, and enclosures on the second drive loop should be assigned IDs 20-27. Enclosure IDs with the same single digit, such as 11, 21, and 31, should not be used on the same drive loop/channel. In addition, for enclosures with a mechanical enclosure ID switch, such as DS4300 storage subsystems and EXP100 or EXP710 storage expansion enclosures, do not use an enclosure ID value of 0. The reason is that, given the physical design and movement of the mechanical enclosure ID switch, it is possible to leave the switch in a "dead zone" between ID numbers, which returns an incorrect enclosure ID to the storage management software. The most commonly returned enclosure ID is 0 (zero). Besides causing the subsystem management software to report an incorrect enclosure ID, this behavior also results in an enclosure ID conflict with a storage expansion enclosure or DS4000 storage subsystem whose ID is intentionally set to 0. The DS4200 and DS4700 storage subsystems and the EXP420 and EXP810 storage expansion enclosures do not have mechanical ID switches and are therefore not susceptible to this problem. In addition, these storage subsystems and storage expansion enclosures set their enclosure IDs automatically. IBM recommends not changing these settings unless the automatic enclosure ID settings result in non-unique single-digit settings for enclosures (including storage subsystems with internal drive slots) in a given drive loop/channel.
13. The ideal configuration for SATA drives is one drive in each EXP per array, one logical drive per array, and one OS disk partition per logical drive. This configuration minimizes the random head movements that increase stress on the SATA drives. As the number of drive locations to which the heads have to move increases, application performance and drive reliability may be impacted. If more logical drives are configured, but not all of them are used simultaneously, some of the randomness can be avoided. SATA drives are best used for long sequential reads and writes.
14. IBM recommends at least one hot spare per EXP100 drive expansion enclosure. A total of 15 hot spares can be defined per DS4000 storage subsystem configuration.
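As mentioned in hints 6 and 8, DMP configurations require manual target and LUN definitions in /kernel/drv/sd.conf, and stale ("ghost") entries should be removed. A minimal sketch of such entries follows; the target and lun values are examples only and must match the actual mappings on your subsystem. A reconfiguration reboot (reboot -- -r) is required for changes to take effect.

   # /kernel/drv/sd.conf - example target/LUN definitions for DMP
   # (the target and lun values below are illustrative)
   name="sd" class="scsi" target=0 lun=0;
   name="sd" class="scsi" target=0 lun=1;
   name="sd" class="scsi" target=1 lun=0;
   name="sd" class="scsi" target=1 lun=1;
   # Remove entries for targets/LUNs that no longer exist ("ghosts");
   # probing them at boot lengthens the reboot time.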
15. Starting with DS4000 Storage Manager (SM) host software version 9.12, the Storage Manager client script window looks for files with the file type ".script" as the possible script command files. Previous versions of the DS4000 Storage Manager host software looked for the file type ".scr" instead (i.e., enableAVT.script for SM 9.12 or later vs. enableAVT.scr for pre-SM 9.12 versions).
=======================================================================
3.0 Configuration Information
-----------------------------
3.1 Configuration Settings
--------------------------
1. By default, the IBM DS Storage Manager SMclient program does not automatically map logical drives. This means that the logical drives are not automatically presented to host servers.
a. For a new installation, after creating new arrays and logical drives, create a storage partition with the host type Sun-Solaris and map the logical drives to this partition.
b. If you are upgrading the NVSRAM with storage partitions, you may have to change the default host type to match the host system OS. After upgrading the NVSRAM, the default host type is reset to Windows 2000/Server 2003 non-clustered for DS4000 storage servers with controller firmware version 06.14.xx.xx or later. For DS4000 storage servers with controller firmware version 06.12.xx.xx or earlier, it is reset to Windows non-clustered (SP5 or higher) instead.
Refer to the IBM DS Storage Manager online help to learn more about creating storage partitions and changing host types.
2. When you are configuring IBM machine type 1722, 1724, 1742, 1814, 1815, 1818, 3542, or 3552 storage controllers as boot devices, contact IBM support for supported configurations and instructions for configuring IBM storage controllers as boot devices.
3. Running script files for specific configurations. Apply the appropriate scripts to your subsystem based on the instructions you have read in the publications or any instructions in the operating system readme file. A description of each script is shown below, followed by a sketch of how a script file can be applied.
- SameWWN.script: Sets up the RAID controllers to have the same World Wide Names. The World Wide Names (node) will be the same for each controller pair. The NVSRAM default sets the RAID controllers to have the same World Wide Names.
- DifferentWWN.script: Sets up the RAID controllers to have different World Wide Names. The World Wide Names (node) will be different for each controller pair. The NVSRAM default sets the RAID controllers to have the same World Wide Names.
- EnableAVT_W2K_S2003_noncluster.script: This script enables automatic logical drive transfer (AVT/ADT) for the Windows 2000/Server 2003 non-cluster heterogeneous host region. The default setting is to disable AVT for this heterogeneous host region. This setting is one of the requirements for setting up remote boot or SAN boot. Do not use this script unless it is specifically mentioned in the applicable instructions. (This script can be used for other host types if modifications are made in the script, replacing the Windows 2000/Server 2003 non-cluster host type with the appropriate host type that needs AVT/ADT enabled.)
- DisableAVT_W2K_S2003_noncluster.script: This script disables automatic logical drive transfer (AVT) for the Windows 2000/Server 2003 non-cluster heterogeneous host region, resetting the Windows 2000/Server 2003 non-cluster AVT setting to the default. (This script can be used for other host types if modifications are made in the script, replacing the Windows 2000/Server 2003 non-cluster host type with the appropriate host type that needs AVT/ADT disabled.)
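A script file such as those above can be loaded and executed from the Storage Manager client script window, or applied with SMcli. The following sketch assumes hypothetical controller IP addresses:

   # Apply a script file to both controllers of a storage subsystem
   # (9.11.22.33 and 9.11.22.44 are example controller addresses)
   SMcli 9.11.22.33 9.11.22.44 -f DisableAVT_W2K_S2003_noncluster.script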
4. When you are configuring IBM machine type 1722, 1724, 1742, 1814, 1815, 1818, or 3552 storage servers in host cluster Remote Mirror configurations, contact IBM support for supported host cluster configurations and instructions for configuring IBM storage controllers in clustered Remote Mirror configurations.
=======================================================================
3.2 Unsupported configurations
------------------------------
The following configurations are currently not supported with IBM DS Storage Manager Version 10.70.
1. The IBM EXP395 Expansion Enclosure is not supported attached to any IBM DS storage subsystem other than the DS3950. EXP810 drive enclosures are also supported on the DS3950 with the purchase of a premium feature key.
2. The IBM EXP520 Expansion Enclosure is not supported attached to any IBM DS storage subsystem other than the DS5020. EXP810 drive enclosures are also supported on the DS5020 with the purchase of a premium feature key.
3. The IBM EXP5000 Expansion Enclosure is not supported attached to any IBM DS storage subsystem other than the DS5100 and DS5300.
4. The DS4100 (machine type 1724, all models) storage subsystem does not support the attachment of the DS4000 EXP710, EXP700, and EXP500 (FC) drive expansion enclosures.
5. The DS4800 storage subsystem (machine type 1815, all models) does not support the attachment of the FAStT EXP500 and DS4000 EXP700 drive expansion enclosures.
6. The DS4200 (machine type 1814, models 7VA/H) does not support the attachment of the DS4000 EXP100 (SATA), EXP710 (FC), and EXP810 (SATA and FC) drive expansion enclosures. In addition, it does not support Fibre Channel disk drive options.
7. The IBM DS4000 EXP420 Expansion Enclosure is not supported attached to any IBM DS4000 storage subsystem other than the DS4200.
8. The DS4100 with the Single Controller option does not support the attachment of DS4000 storage expansion enclosures.
9. The DS5100 and DS5300 storage subsystems do not support the attachment of the DS4000 EXP100, EXP700, and EXP710 drive expansion enclosures. The EXP810 is supported only through an RPQ process.
10. The DS5000 EXP5000 drive expansion enclosure is supported attached to the DS5100 and DS5300 only.
11. The DS4700 and DS4800 storage subsystems do not support the attachment of DS4000 EXP700 drive expansion enclosures. An EXP700 enclosure must be upgraded into a DS4000 EXP710 enclosure using the DS4000 EXP700 Models 1RU/1RX Switched-ESM Option Upgrade Kit before it can be attached to the DS4700 and DS4800 storage subsystems.
12. The DS4300 storage subsystem with the Single Controller option does not support controller firmware version 06.xx.xx.xx. The correct firmware version for these DS4300 storage subsystem models is 05.34.xx.xx.
13. Fibre Channel loop environments with the IBM Fibre Channel Hub, machine types 3523 and 3534, in conjunction with the IBM Fibre Channel Switch, machine types 2109-S16, 2109-F16, or 2109-S8. In this configuration, the hub is connected between the switch and the IBM Fibre Channel RAID controllers.
14. The IBM Fibre Channel Hub, machine type 3523, connected to IBM machine types 1722, 1724, 1742, 1814, 1815, 3542, and 3552.
15. A configuration in which a server with only one FC host bus adapter connects directly to any DS4000 storage subsystem with dual controllers is not supported. The supported configuration is one in which the server with only one FC host bus adapter connects to both controller ports of any DS4000 storage subsystem with dual controllers via a Fibre Channel (FC) switch (a SAN-attached configuration).
Note: The HBA(s) in the Sun server must have FC connections to both the controller A and controller B ports of any DS4000/DS5000 storage subsystem with dual controllers, whether attached via an FC switch or directly attached.
=======================================================================
4.0 Unattended Mode
---------------------
N/A
=======================================================================
5.0 Web Sites and Support Phone Number
----------------------------------------
5.1 IBM System Storage™ Disk Storage Systems Technical Support web site:
http://www.ibm.com/systems/support/storage/disk
5.2 IBM System Storage™ Marketing web site:
http://www.ibm.com/systems/storage/disk
5.3 IBM System Storage™ Interoperation Center (SSIC) web site:
http://www.ibm.com/systems/support/storage/ssic/
5.4 You can receive hardware service through IBM Services or through your IBM reseller, if your reseller is authorized by IBM to provide warranty service. See http://www.ibm.com/planetwide/ for support telephone numbers, or in the U.S. and Canada, call 1-800-IBM-SERV (1-800-426-7378).
IMPORTANT: You should download the latest version of the DS Storage Manager host software, the DS4000/DS5000 storage subsystem controller firmware, the DS4000/DS5000 drive expansion enclosure ESM firmware, and the drive firmware at the time of the initial installation and when product updates become available.
For more information about how to register for support notifications, see the following IBM Support web page:
www.ibm.com/systems/support/storage/subscribe/moreinfo.html
You can also check the Stay Informed section of the IBM Disk Support web site, at the following address:
www.ibm.com/systems/support/storage/disk
=======================================================================
6.0 Trademarks and Notices
--------------------------
The following terms are trademarks of the IBM Corporation in the United States or other countries or both:
IBM
System Storage
the e-business logo
xSeries
pSeries
HelpCenter
UNIX is a registered trademark of The Open Group in the United States and other countries.
Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States, other countries, or both.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Sun, Solaris, Java, and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
QLogic and SANsurfer are trademarks of QLogic Corporation.
Other company, product, or service names may be trademarks or service marks of others.
=======================================================================
7.0 Disclaimer
--------------
7.1 THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. IBM DISCLAIMS ALL WARRANTIES, WHETHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE AND MERCHANTABILITY WITH RESPECT TO THE INFORMATION IN THIS DOCUMENT. BY FURNISHING THIS DOCUMENT, IBM GRANTS NO LICENSES TO ANY PATENTS OR COPYRIGHTS.
7.2 Note to U.S.
Government Users -- Documentation related to restricted rights -- Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corporation.