SC860_056_056 / FW860.10
11/18/16
Impact: New    Severity: New
New features and functions
- Support enabled for Live Partition Mobility (LPM)
operations.
- Support enabled for partition Suspend and Resume from the
HMC.
- Support enabled for partition Remote Restart.
- Support enabled for PowerVM vNIC. PowerVM vNIC combines
many of the best features of SR-IOV and PowerVM SEA to provide a
network solution with options for advanced functions such as Live
Partition Mobility, along with better performance and I/O efficiency
when compared to PowerVM SEA. In addition, PowerVM vNIC provides
users with bandwidth control (QoS) capability by leveraging SR-IOV
logical ports as the physical interface to the network.
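The bandwidth-control idea above can be sketched as a capacity check. This is an illustrative sketch only: the function name, the 2% granularity, and the 100% cap are assumptions used to show how per-logical-port QoS shares on one physical port might be validated, not a PowerVM API.

```python
# Hypothetical sketch: validating SR-IOV logical-port capacity (QoS)
# percentages for vNIC backing devices sharing one physical port.
# Granularity and cap values are illustrative assumptions.

def validate_capacities(capacities, granularity=2):
    """Return True if the requested capacity percentages can coexist
    on a single physical port."""
    if any(c % granularity != 0 or c <= 0 for c in capacities):
        return False  # each port gets a positive multiple of the granularity
    return sum(capacities) <= 100  # total share cannot exceed the port

print(validate_capacities([20, 20, 50]))  # True: 90% of the port committed
print(validate_capacities([60, 60]))      # False: oversubscribed
```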
- Support for dynamic setting of the Simplified Remote
Restart VM property, which enables this property to be turned on or off
dynamically with the partition running.
- Support for PowerVM and HMC to get and set the boot
list of a partition.
- Support for PowerVM partition restart in a Disaster
Recovery (DR) environment.
- Support on PowerVM for a partition with 32 TB
memory. AIX, IBM i and Linux are supported, but IBM i must be at IBM
i 7.3 TR1; IBM i 7.2 has a limit of 16 TB per partition and IBM i
7.1 has a limit of 8 TB per partition. AIX level must be 7.1S or
later. Linux distributions supported are RHEL 7.2 P8, SLES
12 SP1, Ubuntu 16.04 LTS, RHEL 7.3 P8, SLES 12 SP2, Ubuntu
16.04.1, and SLES 11 SP4 for SAP HANA.
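The IBM i limits above can be captured in a small lookup table. This encodes only the numbers stated in this release note as an illustration; the table and function names are not an official interface.

```python
# Illustrative encoding of the per-level IBM i partition memory limits
# stated above (values in TB). The data comes from the release note;
# the names are hypothetical.

MAX_PARTITION_MEMORY_TB = {
    "7.3 TR1": 32,
    "7.2": 16,
    "7.1": 8,
}

def max_ibmi_memory_tb(version):
    """Largest supported partition memory for a given IBM i level."""
    return MAX_PARTITION_MEMORY_TB.get(version, 0)

print(max_ibmi_memory_tb("7.2"))  # 16
```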
- Support for PowerVM and PowerNV (non-virtualized or OPAL
bare-metal) booting from a PCIe Non-Volatile Memory express (NVMe)
flash adapter. The adapters include feature codes #EC54 and #EC55
- 1.6 TB, and #EC56 and #EC57 - 3.2 TB NVMe flash adapters
with CCIN 58CB and 58CC respectively.
- Support for PowerVM NovaLink V1.0.0.4, which includes the
following features:
- IBM i network boot
- Live Partition Mobility (LPM) support for inactive source VIOS
- Support for SR-IOV configurations, vNIC, and vNIC failover
- Partition support for Red Hat Enterprise Linux
- Support for a decrease in the amount of PowerVM memory
needed to support Huge Dynamic DMA Window (HDDW) for a PCI slot by
using 64K pages instead of 4K pages. The hypervisor now allocates
only enough storage for the Enlarged I/O Capacity (Huge Dynamic DMA
Window) capable slots to map every page in main storage with 64K pages
rather than the 4K pages used previously. This affects only
the Linux OS, as AIX and IBM i do not use HDDW.
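The memory saving above follows from simple arithmetic: an I/O translation table needs one entry per page of mapped main storage, so 64K pages need 1/16th the entries of 4K pages. The 8-byte entry size below is an assumption for illustration only.

```python
# Back-of-the-envelope arithmetic for the HDDW change described above.
# One translation entry per page of main storage; entry size assumed.

def translation_table_bytes(main_storage_bytes, page_size, entry_bytes=8):
    pages = main_storage_bytes // page_size
    return pages * entry_bytes

mem = 1 << 40  # 1 TB of main storage
with_4k = translation_table_bytes(mem, 4 * 1024)
with_64k = translation_table_bytes(mem, 64 * 1024)
print(with_4k // with_64k)  # 16x reduction in table space
```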
- Support added to reduce the number of error logs and
call homes for the non-critical FRUs for the power and thermal faults
of the system.
- Support for redundancy in the transfer of partition
state for Live Partition Mobility (LPM) migration operations.
Redundant VIOS Mover Service Partitions (MSPs) can be defined along with
redundant network paths at the VIOS/MSP level. When redundant MSP
pairs are used, the migrating memory pages of the logical partition are
transferred from the source system to the target system by using two
MSP pairs simultaneously. If one of the MSP pair fails, the migration
operation continues by using the other MSP pair. In some scenarios,
where a common shared Ethernet adapter is not used, use redundant MSP
pairs to improve performance and reliability.
Note: For an LPM migration of a partition using Active Memory
Sharing (AMS) in a dual (redundant) MSP configuration, the LPM operation
may hang if the MSP connection fails during the migration. To avoid
this issue, which applies only to AMS partitions, AMS
migrations should only be done from the HMC command line using the
migrlpar command and specifying --redundantmsp 0 to disable the
redundant MSPs.
Note: To use redundant MSP pairs, all VIOS MSPs must be at version
2.2.5.00 or later, the HMC at version 8.6.0 or later, and the firmware
level FW860 or later.
For more information on LPM and VIOS supported levels and restrictions,
refer to the following links on the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/PurePower/p8hc3/p8hc3_firmwaresupportmatrix.htm
https://www.ibm.com/support/knowledgecenter/HW4L4/p8eeo/p8eeo_ipeeo_main.htm
- Support for failover capability for vNIC client adapters in
the PowerVM hypervisor, rather than requiring the failover
configuration to be done in the client OS. To create a redundant
connection, the HMC adds another vNIC server with the same remote LPAR
ID and remote DRC as the first, giving each server its own priority.
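The failover behavior above can be sketched as a priority-based selection. This is a hedged illustration: the tuple layout and the "lower number wins" convention are assumptions, not the hypervisor's actual data structures.

```python
# Hedged sketch of vNIC failover: among the backing servers, carry
# traffic on an operational backing device with the most favorable
# priority. Field layout and priority ordering are illustrative.

def select_active_backing(backings):
    """backings: list of (priority, operational) tuples.
    Return the priority of the chosen backing device, or None."""
    candidates = [priority for priority, ok in backings if ok]
    return min(candidates) if candidates else None

# Primary (priority 1) fails; the hypervisor fails over to priority 2.
print(select_active_backing([(1, False), (2, True)]))  # 2
```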
- Support for SAP HANA with Solution edition with feature
code #EPVR on 3.65 GHz processors and 12-core activations and 512 GB
memory activations on SUSE Linux. SAP HANA is an in-memory
platform for processing high volumes of data in real-time. HANA allows
data analysts to query large volumes of data in real-time. HANA's
in-memory database infrastructure frees analysts from having to load or
write-back data.
- Support for the Hardware Management Console (HMC) to
access the service processor IPMI credentials and to retrieve
Performance and Capacity Monitor (PCM) data for viewing in a tabular
format or for exporting as CSV values. The enhanced HMC interface can
now start and stop VIOS Shared Storage Pool (SSP) monitoring from the
HMC and start and stop SSP historical data aggregation.
- Support changed for the Advanced System Management Interface (ASMI)
so that it does not create VPD deconfiguration records and call home
alerts for hardware FRUs that have one VPD chip of a redundant pair
broken or inaccessible. The backup VPD chip for the FRU allows
continued use of the hardware resource. Notification of the
need for service for the FRU VPD is not provided until both of the
redundant VPD chips have failed for a FRU.
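The notification policy above reduces to a simple rule: alert only when both redundant chips are gone. A minimal sketch, with hypothetical names:

```python
# Minimal sketch of the VPD notification policy described above:
# with redundant VPD chips per FRU, a service callout is raised only
# when both chips have failed; one working chip keeps the resource
# usable with no alert. Names are illustrative.

def needs_service_callout(chip_a_ok, chip_b_ok):
    return not chip_a_ok and not chip_b_ok

print(needs_service_callout(chip_a_ok=False, chip_b_ok=True))   # False
print(needs_service_callout(chip_a_ok=False, chip_b_ok=False))  # True
```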
System firmware changes that affect all systems
- A problem was fixed
for a failed IPL with SRC UE BC8A090F that does not have a hardware
callout or a guard of the failing hardware. The system may be
recovered by guarding out the processor associated with the error and
re-IPLing the system. With the fix, the bad processor core is
guarded and the system is able to IPL.
- A problem was fixed for an infrequent service processor
failover hang that results in a reset of the backup service processor
that is trying to become the new primary. This error occurs more
often on a failover to a backup service processor that has been in that
role for a long period of time (many months). This error can
cause a concurrent firmware update to fail. To reduce the chance
of a firmware update failure because of a bad failover, an
Administrative Failover (AFO) can be requested from the HMC prior to
the start of the firmware update. When the AFO has completed, the
firmware update can be started as normally done.
- A problem was fixed for an Operations Panel Function 04
(Lamp test) during an IPL causing the IPL to fail. With the fix,
the lamp test request is rejected during the IPL until the hypervisor
is available. The lamp test can be requested without problems
anytime after the system is powered on to hypervisor ready or an OS is
running in a partition.
- A problem was fixed for On-Chip Controller (OCC) errors
that had excessive callouts for processor FRUs. Many of the OCC
errors are recoverable and do not require that the processor be called
out and guarded. With the fix, the processors will only be called
out for OCC errors if there are three or more OCC failures during a
time period of a week.
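The thresholding above can be sketched as a sliding-window count. This is an illustration of the stated "three or more failures in a week" rule, not the service processor's actual implementation; names and the seconds-based timestamps are assumptions.

```python
# Hedged sketch of the OCC callout threshold described above: a
# processor is called out only if three or more OCC failures land
# inside a one-week window.

WEEK = 7 * 24 * 3600  # seconds

def should_call_out(failure_times, now, window=WEEK, threshold=3):
    recent = [t for t in failure_times if now - t <= window]
    return len(recent) >= threshold

day = 24 * 3600
print(should_call_out([0, 1 * day, 2 * day], now=3 * day))     # True
print(should_call_out([0, 10 * day, 20 * day], now=20 * day))  # False
```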
- A problem was fixed for the loss of the setting for the
disable of a periodic notification for a call home error log after a
failover to the backup service processor on a redundant service
processor system. The call home for the presence of a failed
resource can get re-enabled (if manually disabled in ASMI on the
primary service processor) after a concurrent firmware update or any
scenario that causes the service processor to fail over and change
roles. With the fix, the periodic notification flag is
synchronized between the service processors when the flag value is
changed.
- A problem was fixed for the On-Chip Controller (OCC)
incorrectly calling out processors with SRC B1112A16 for L4 Cache DIMM
failures with SRC B124E504. This false error logging can occur if
the DIMM slot that is failing is adjacent to two unoccupied DIMM slots.
- A problem was fixed for CEC drawer deconfiguration during an
IPL due to SRCs BC8A0307 and BC8A1701 that did not have the correct
hardware callout for the failing SCM. With the fix, the failing
SCM is called out and guarded so the CEC drawer will IPL even though
there is a failed processor.
- A problem was fixed for device timeouts during an IPL
logged with SRC B18138B4. This error is intermittent and no
action is needed for the error log. The service processor
hardware server now allots more time for device transactions to
allow them to complete without a timeout error.
System firmware changes that affect certain systems
- DISRUPTIVE: On systems using the PowerVM
firmware, a problem was fixed for an "Incomplete" state caused by
initiating a resource dump with selector macros from NovaLink (vio
-dump -lp 1 -fr). The failure causes the stack frame of a
communication process task, HVHMCCMDRTRTASK, to be exceeded with a
hypervisor page fault that disrupts the NovaLink and/or HMC
communications. The recovery action is to re-IPL the CEC, but that
will need to be done without the assistance of the management
console. For each partition that has an OS running on the system,
shut down each partition from the OS. Then from the Advanced System
Management Interface (ASMI), power off the managed system.
Alternatively, the system power button may also be used to do the
power off. If the management console Incomplete state persists after
the power off, the managed system should be rebuilt from the
management console. For more information on management console
recovery steps, refer to this IBM Knowledge Center link: https://www.ibm.com/support/knowledgecenter/en/POWER7/p7eav/aremanagedsystemstate_incomplete.htm.
The fix is disruptive because the size of the PowerVM hypervisor must
be increased to accommodate the over-sized stack frame of the failing
task.
- DEFERRED: On systems using the PowerVM
firmware, a problem was fixed for a CAPI function unavailable
condition on a system with the maximum number of CAPI adapters and
partitions. Not enough bytes were allocated for CAPI for the maximum
configuration case. The problem may be circumvented by reducing the
number of active partitions or CAPI adapters. The fix is deferred
because the size of the hypervisor must be increased to provide the
additional CAPI space.
- DEFERRED: On systems using PowerVM
firmware, a problem was fixed for cable card capable PCI slots that
fail during the IPL. Hypervisor I/O Bus Interface UE B7006A84 is
reported for each cable card capable PCI slot that does not contain a
PCIe3 Optical Cable Adapter for the PCIe Expansion Drawer (feature
code #EJ05). PCI slots containing a cable card will not report an
error but will not be functional. The problem can be resolved by
performing an AC cycle of the system. The trigger for the failure is
that the I2C devices used to detect the cable cards do not come out
of the power-on reset process in the correct state, due to a race
condition.
- On systems using PowerVM firmware, a problem was fixed for
network issues, causing critical situations for customers, when an
SR-IOV logical port or vNIC is configured with a non-zero Port VLAN ID
(PVID). This fix updates adapter firmware to 10.2.252.1922, for
the following Feature Codes: EN15, EN16, EN17, EN18, EN0H, EN0J, EL38,
EN0M, EN0N, EN0K, EN0L, and EL3C.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/HW4M4/p8efd/p8efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- On systems using the PowerVM firmware, a problem was fixed
for a Live Partition Mobility migration that resulted in the source
managed system going to the management console Incomplete state after
the migration to the target system was completed. This problem is
very rare and has only been detected once. The problem trigger is that
the source partition does not halt execution after the migration to the
target system. The management console went to the
Incomplete state for the source managed system when it failed to delete
the source partition because the partition would not stop
running. When this problem occurred, the customer network was
running very slowly and this may have contributed to the failure.
The recovery action is to re-IPL the source system but that will need
to be done without the assistance of the management console. For
each partition that has an OS running on the source system, shut down
each partition from the OS. Then from the Advanced System
Management Interface (ASMI), power off the managed system.
Alternatively, the system power button may also be used to do the power
off. If the management console Incomplete state persists after
the power off, the managed system should be rebuilt from the management
console. For more information on management console recovery
steps, refer to this IBM Knowledge Center link: https://www.ibm.com/support/knowledgecenter/en/POWER7/p7eav/aremanagedsystemstate_incomplete.htm
- On systems using PowerVM firmware, a problem was
fixed for a shared processor pool partition showing an incorrect zero
"Available Pool Processor" (APP) value after a concurrent firmware
update. The zero APP value means that no idle cycles are present
in the shared processor pool but in this case it stays zero even when
idle cycles are available. This value can be displayed using the
AIX "lparstat" command. If this problem is encountered, the
partitions in the affected shared processor pool can be dynamically
moved to a different shared processor pool. Before the dynamic
move, the "uncapped" partitions should be changed to "capped" to
avoid a system hang. The old affected pool would continue to have the
APP error until the system is re-IPLed.
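The APP value is reported in the "app" column of AIX `lparstat` output, as noted above. A hedged parsing sketch follows; the sample text is fabricated for illustration, and real column layout can differ by AIX level.

```python
# Illustrative parsing of AIX `lparstat` output to watch the "app"
# (Available Pool Processors) column mentioned above. The sample is
# fabricated; real output layout may vary.

SAMPLE = """%user %sys %wait %idle physc %entc lbusy app
  2.0   1.0   0.0  97.0  0.50  25.0   3.0 3.50"""

def app_values(text):
    lines = text.strip().splitlines()
    idx = lines[0].split().index("app")
    return [float(row.split()[idx]) for row in lines[1:]]

print(app_values(SAMPLE))  # [3.5]
```

A persistently zero "app" value while the pool has idle capacity would indicate the symptom this fix addresses.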
- On systems using PowerVM firmware, a problem was fixed for
a latency time of about 2 seconds being added to a target Live
Partition Mobility (LPM) migration system when there is a latency time
check failure. With the fix, in the case of a latency time check
failure, a much smaller default latency is used instead of two
seconds. This error would not be noticed if the customer system
is using an NTP time server to maintain the time.
- On multi-node systems with an incorrect memory configuration
of DDR3 and DDR4 DIMMs, a problem was fixed for the IPL hanging for
four hours instead of terminating immediately.
- On systems using PowerVM firmware, a rare problem was
fixed for a system hang that can occur when dynamically moving
"uncapped" partitions to a different shared processor pool. To
prevent a system hang, the "uncapped" partitions should be changed to
"capped" before doing the move.
- On systems using the PowerVM firmware, support was added
for a new utility option for the System Management Services (SMS)
menus. This is the SMS SAS I/O Information Utility. It has
been introduced to allow a user to get additional information about
the attached SAS devices. The utility is accessed by selecting
option 3 (I/O Device Information) from the main SMS menu, and then
selecting the option for "SAS Device Information".
- On systems using the PowerVM hypervisor firmware and
NovaLink, a problem was fixed for a NovaLink installation error where
the hypervisor was unable to get the maximum logical memory buffer
(LMB) size from the service processor. The maximum supported LMB
size should be 0xFFFFFFFF but in some cases it was initialized to a
value that was less than the amount of configured memory, causing the
service processor read failure with error code 0X00000134.
- On systems using the PowerVM hypervisor firmware and CAPI
adapters, a problem was fixed for CAPI adapter error recovery.
When the CAPI adapter goes into the error recovery state, the Memory
Mapped I/O (MMIO) traffic to the adapter from the OS continues,
disrupting the recovery. With the fix, the MMIO and DMA traffic
to the adapter are now frozen until the CAPI adapter is fully
recovered. If the adapter becomes unusable because of this
error, it can be recovered using concurrent maintenance steps from the
HMC, keeping the adapter in place during the repair. The error
has a low frequency since it only occurs when the adapter has failed
for another reason and needs recovery.
- On systems using the PowerVM hypervisor firmware, when
using affinity groups, if the group includes a VIOS, ensure the group
is placed in the same drawer where the VIOS physical I/O is
located. Prior to this change, if the VIOS was in an
affinity group with other partitions, the partitions' placement could
override the VIOS adapter placement rules and the VIOS could end up in
a different drawer from the I/O adapters.
- On systems using PowerVM firmware, a problem was
fixed to improve error recovery when attempting to boot an iSCSI target
backed by a drive formatted with a block size other than 512
bytes. Instead of stopping on this error, the boot attempt fails
and then continues with the next potential boot device.
Information regarding the reason for the boot failure is available in
an error log entry. The 512 byte block size for backing devices
for iSCSI targets is a partition firmware requirement.
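The improved recovery described above amounts to skip-and-continue device selection. A hedged sketch follows; the device records and helper names are illustrative, not partition firmware structures.

```python
# Hedged sketch of the boot-device handling described above: instead
# of stopping on an iSCSI target with an unsupported block size, the
# boot loop records the reason and moves to the next candidate.

SUPPORTED_BLOCK_SIZE = 512  # partition firmware requirement stated above

def pick_boot_device(candidates):
    """candidates: list of (name, block_size). Return the first
    bootable name and a log of skipped devices."""
    log = []
    for name, block_size in candidates:
        if block_size != SUPPORTED_BLOCK_SIZE:
            log.append(f"skipped {name}: block size {block_size}")
            continue
        return name, log
    return None, log

dev, log = pick_boot_device([("iscsi0", 4096), ("disk1", 512)])
print(dev)  # disk1
print(log)  # ["skipped iscsi0: block size 4096"]
```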
- On systems using PowerVM firmware, a problem was fixed for
extra resources being assigned in a Power Enterprise Pool
(PEP). This only occurs if all of these things happen:
o Power server is in a PEP pool
o Power server has PEP resources assigned to it
o Power server powered down
o User uses HMC to 'remove' resources from the powered-down
server
o Power server is then restarted. It should come up with no
PEP resources, but it starts up still showing PEP
resources that it should not have.
To recover from this problem, the HMC 'remove' of the PEP resources
from the server can be performed again.
- On systems using PowerVM firmware, a problem was fixed for
a false thermal alarm in the active optical cables (AOC) for the PCIe3
expansion drawer with SRCs B7006AA6 and B7006AA7 being logged every 24
hours. The AOC cables have feature codes of #ECC6 through #ECC9,
depending on the length of the cable. The SRCs should be ignored
as they call for the replacement of the cable, cable card, or the
expansion drawer module. With the fix, the false AOC thermal
alarms are no longer reported.
- On systems using PowerVM firmware that have an attached
HMC, a problem was fixed for a Live Partition Mobility migration
that resulted in a system hang when an EEH error occurred
simultaneously with a request for a page migration operation. On
the HMC, it shows an incomplete state for the managed system with
reference code A181D000. The recovery action is to re-IPL the
source system but that will need to be done without the assistance of
the HMC. From the Advanced System Management Interface
(ASMI), power off the managed system. Alternatively, the
system power button may also be used to do the power off. If the
HMC Incomplete state persists after the power off, the managed system
should be rebuilt from the HMC. For more information on HMC
recovery steps, refer to this IBM Knowledge Center link: https://www.ibm.com/support/knowledgecenter/en/POWER7/p7eav/aremanagedsystemstate_incomplete.htm