VH920_089_075 / FW920.24
02/12/19
Impact: Performance  Severity: SPE
New features and functions
- Support for up to 8 production SAP HANA LPARs and 64 TB of
memory.
System firmware changes that affect all systems
- A problem was fixed for a concurrent firmware update that
could hang during the firmware activation, resulting in the system
entering Power safe mode. The system can be recovered by
re-IPLing the system with a power down and power up. A
concurrent removal of this fix back to firmware level FW920.22 will
fail with the same hang, so moving back to that level should only be
done with a disruptive firmware update.
- A problem was fixed where installing a partition from a NIM
server may fail when using an SR-IOV adapter with a Port VLAN ID (PVID)
configured. This error is a regression problem introduced in the
11.2.211.32 adapter firmware. This fix reverts the adapter
firmware back to 11.2.211.29 for the following Feature
Codes: EN15, EN16, EN17, EN18, EN0H, EN0J, EN0K, and
EN0L. Because the adapter firmware is reverted to the prior
version, all changes included in 11.2.211.32 are reverted as
well. Circumvention options for this problem can be found at the
following link: http://www.ibm.com/support/docview.wss?uid=ibm10794153.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
System firmware changes that affect certain systems
- On IBM Power System E980 (9080-M9S) systems with
three or four nodes, a problem with slower than expected L2 cache
memory update response was fixed to improve system performance for some
workloads. The slowdown was triggered by many concurrent
processor threads trying to update the L2 cache memory atomically with
a Power LARX/STCX instruction sequence. Without the fix,
the rate at which the system could do these atomic updates was slower
than the normal L2 cache response, which could decrease overall
system performance. This problem could be noticed for
workloads that are cache bound (where speed of cache access is an
important factor in determining the speed at which the program gets
executed). For example, if the most visited part of a program is a
small section of code inside a loop small enough to be contained within
the cache, then the program may be cache bound.
VH920_075_075 / FW920.20
11/16/18
Impact: Data  Severity: HIPER
New features and functions
- Support was added for three- and four-node configurations of the
IBM Power System E980 (9080-M9S).
- Support was added for concurrent maintenance of SMP cables.
- Support was enabled for eRepair spare lane deployment for
fabric and memory buses.
- Support was added for run-time recovery of clock cards that
have failed because of a loss-of-lock condition. The Phase-Locked
Loop (PLL) 842S02 loses lock occasionally, but this should be a fully
recoverable condition. Without this support, the disabled clock
card would have to be replaced with the system powered down.
- Support was added for processor clock failover.
- Support was added for Multi-Function clock card failover.
System firmware changes that affect all systems
- HIPER/Non-Pervasive: DISRUPTIVE: Fixes are included to address
potential scenarios that could result in undetected data corruption,
system hangs, or system terminations.
- DISRUPTIVE: A problem was fixed for PCIe and SAS adapters in
slots attached to a PLX (PCIe switch) failing to initialize and not
being found by the Operating System. The problem should not occur
on the first IPL after an AC power cycle, but subsequent IPLs may
experience the problem.
- DEFERRED: A problem was fixed for a possible system slow-down
with many BC10E504 SRCs logged for Power Bus hang recovery. This
could occur during periods of very high activity for memory write
operations which inundate a specific memory controller. This
slow-down is localized to a specific region of real memory that is
mapped to memory DIMMs associated with the congested memory controller.
- DEFERRED: A problem was fixed for a logical partition (LPAR)
running slower than expected because of an overloaded ABUS socket for
its SMP connection. This fix requires a re-IPL of the system to
balance the distribution of the LPARs to the ABUS sockets.
- DEFERRED: A problem was fixed for a PCIe clock failure in the
PCIe3 I/O expansion drawer (feature #EMX0), causing loss of PCIe
slots. The system must be re-IPLed for the fix to activate.
- DEFERRED: A problem was fixed to increase the Vio voltage level
to the processors to protect against lower voltage level and noise
margins that could result in degraded performance or loss of processor
cores in rare instances.
- DEFERRED: A problem was fixed for a possible system hang in the
early boot stage. This could occur during periods of very high
activity for memory read operations which deplete all read buffers,
hanging an internal process that requires a read buffer. With the
fix, a congested memory controller can stall the read pipeline to make
a read buffer available for the internal processes.
- DEFERRED: A problem was fixed for concurrent maintenance
operations for PCIe expansion drawer cable cards and PCI adapters
that could cause loss of system hardware information in the hypervisor
with these side effects: 1) partition secure boots could fail with SRC
BA540100 logged; 2) Live Partition Mobility (LPM) migrations could be
blocked; 3) SR-IOV adapters could be blocked from going into shared
mode; 4) Power Management services could be lost; and 5) warm re-IPLs
of the system could fail. The system can be recovered by powering off
and then IPLing again.
- A problem was fixed for memory DIMM temperatures reported
with an incorrect FRU callout. The error happens for only certain
memory configurations.
- A problem was fixed in the Dynamic Platform Organizer (DPO)
for calculating the amount of memory required for partitions with
Virtual Page Tables that are greater than 1 LMB in size. This
caused incorrect affinity scores for the partition.
- A problem was fixed for an unhelpful error message of
"HSCL1473 Cannot execute atomic operation. Atomic operations are not
enabled." that is displayed on the HMC if there are no licensed
processors available for the boot of a partition.
- A problem was fixed for power supply non-power faults being
logged indicating a need for power supply replacement when the service
processor is at a high activity level, but the power supply is working
correctly. This service processor performance problem may
also prevent legitimate power supply faults from being logged, with
other communication and non-power faults being logged instead.
- A problem was fixed for FSP cable failures with B150492F
logged that have the wrong FSP cable called out.
- A problem was fixed for a false extra Predictive Error of
B1xxE550 that can occur for a node with a recoverable event if
Hostboot is terminating on a different node at the same time. The
Predictive Error log will not have a call-out and can be ignored.
- A problem was fixed for a memory channel failure due to a
RCD parity error calling out the affected DIMMs correctly, but also
falsely calling out either the memory controller or a processor, or
both.
- A problem was fixed for adapters in slots attached to a PLX
(PCIe switch) failing with SRCs B7006970 and BA188002 when a
second and subsequent errors on the PLX failed to initiate PLX
recovery. This infrequent problem requires a second error on the
PLX after recovery from the first error.
- A problem was fixed for a processor core checkstop that
would deconfigure two cores: the failed core and a working
core. The bad core is determined by matching to the error log and
the false bad core will have no error log. To recover the loss of
the good core, the guard can be cleared on the core that does not have an
associated error log.
- A problem was fixed for the system going into Safe Mode
after a run-time deconfiguration of a processor core, resulting
in slower performance. For this problem to occur, there must be a
second fault in the Power Management complex after the processor
core has been deconfigured.
- A problem was fixed for service processor resets
confusing the wakeup state of processor cores, resulting in
degraded cores that cannot be managed for power usage. This will
result in the system consuming more power, but also running slower due
to the inability to make use of WOF optimizations around the
cores. The degraded processor cores can be recovered by a
re-IPL of the system.
- A problem was fixed for incorrect Resource
Identification (RID) numbers for the Analog Power Subsystem Sweep
(APSS) chip, used by OCC to tune the processor frequency. Any
error call-out on the APSS may call out the wrong APSS.
- A problem was fixed for the On-Chip Controller (OCC) MAX
memory bandwidth sensor sometimes having values that are too high.
- A problem was fixed for DDR4 memory training in the IPL to
improve the DDR4 write margin. Smaller write margins can
potentially cause memory errors.
- A problem was fixed for a system failure with SRC B700F103
that can occur if a shared-mode SR-IOV adapter is moved from a
high-performance slot to a lower performance slot. This
problem can be avoided by disabling shared mode on the SR-IOV adapter;
moving the adapter; and then re-enabling shared mode.
- A problem was fixed for the system going to Safe Mode if
all the cores of a processor are lost at run-time.
- A problem was fixed for a Core Management Engine
(CME) fault causing a system failure with SRC B700F105 if
processor cores had been guarded during the IPL.
- A problem was fixed for a Core Management Engine
(CME) fault that could result in a system checkstop.
- A problem was fixed for a missing error log for the case of
the TPM card not being detected when it is required for a trusted boot.
- A problem was fixed for a flood of BC130311 SRCs that could
occur when changing Energy Scale Power settings, if the Power
Management is in a reset loop because of errors.
- A problem was fixed for coherent accelerator processor
proxy (CAPP) unit errors being called out as CEC hardware
Subsystem instead of PROCESSOR_UNIT.
- A problem was fixed for a repeating failover for the
service processors if a TPM card had failed and been replaced.
The TPM update was not synchronized to the backup service processor,
creating a situation where a service processor failover could
fail and retry because of mismatched TPM capabilities.
- A problem was fixed for an incorrect processor callout on a
memory channel error that causes a CHIFIR[61] checkstop on the
processor.
- A problem was fixed for a Logical LAN (l-lan) device failing
to boot when there is a UDP packet checksum error. With the fix,
there is a new option when configuring an l-lan port in SMS to enable
or disable the UDP checksum validation. If the adapter is already
providing the checksum validation, then the l-lan port needs to have
its validation disabled.
- A problem was fixed for a power fault causing a power guard
of a node, but leaving the node configured. This can cause
redundant logging of errors as operations continue to be attempted
on failed hardware.
- A problem was fixed for missing error logs for hardware
faults if the hypervisor terminates before the faults can be
processed. With the fix, the hardware attentions for the bad FRUs
will get handled, prior to processing the termination of the
hypervisor.
- A problem was fixed for the diagnostics for a system boot
checkstop failing to isolate to the bad FRU if it occurred on a
non-master processor or a memory chip connected to a non-master
processor. With the fix, the fault attentions from a
non-master processor are properly isolated to the failing chip so it
can be guarded or recovered as needed to allow the IPL to continue.
- A problem was fixed for Hostboot error log IDs (EID)
getting reused from one IPL to the next, resulting in error logs
getting suppressed (missing) for new problems on the subsequent
IPLs if they have a re-used EID that was already present in the service
processor error logs.
- A problem was fixed so the green USB active LED is lit for
the service processor that is in the primary role. Without the
fix, the green LED is always lit for the service processor in the C3
position which is FSP-A, regardless of the role of the service
processor.
- A problem was fixed for Live Partition Mobility (LPM)
partition migration to preserve the Secure Boot setting on the target
partition. Secure Boot is supported in FW920 and later
partitions. Without the fix, a non-zero Secure Boot setting for the
partition is reset to zero after the migration.
- A problem was fixed for an SR-IOV adapter using the wrong
Port VLAN ID (PVID) for a logical port (VF) when its non-zero
PVID could be changed following a network install using the logical
port.
This fix updates adapter firmware to 11.2.211.32 for the
following Feature Codes: EN15, EN17, EN0H, EN0J, EN0M, EN0N,
EN0K, and EN0L.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- A problem was fixed for a SMS ping failure for a SR-IOV
adapter VF with a non-zero Port VLAN ID (PVID). This failure may
occur after the partition with the adapter has been booted to AIX, and
then rebooted back to SMS. Without the fix, residue information
from the AIX boot is retained for the VF that should have been cleared.
- A problem was fixed for a SR-IOV adapter vNIC configuration
error that did not provide a proper SRC to help resolve the issue of
the boot device not pinging in SMS due to maximum transmission unit
(MTU) size mismatch in the configuration. The use of a vNIC
backing device does not allow configuring VFs for jumbo frames when the
Partition Firmware configuration for the adapter (as specified on the
HMC) does not support jumbo frames. When this happens, the vNIC
adapter will fail to ping in SMS and thus cannot be used as a boot
device. With the fix, the vNIC driver configuration code now
checks the vNIC login (open) return code so it can issue an SRC
when the open fails for an MTU issue (such as a jumbo frame
mismatch) or for some other reason. A jumbo frame is an Ethernet
frame with a payload greater than the standard MTU of 1,500 bytes and
can be as large as 9,000 bytes.
- A problem was fixed for three bad lanes causing a memory
channel failure on the DMI interface. With the fix, errors
on the third lane of the DMI interface will be recovered and the lane
will continue to be used as long as it functions.
- A problem was fixed for Power Management errors occurring
at higher ambient temperatures that should instead be handled by
dynamic adjustments to the processor voltages and frequencies.
The Power Management errors can cause the system to drop into Safe Mode
which will provide lower performance until the system is re-IPLed.
- A problem was fixed for preventing loss of function on an
SR-IOV adapter with an 8MB adapter firmware image if it is placed into
SR-IOV shared mode. The 8MB image is not supported at the
FW920.20 firmware level. With the fix, the adapter with the 8MB
image is rejected with an error without an attempt to load the older
4MB image on the adapter which could damage it. This problem
affects the following SR-IOV adapters: #EC2R/#EC2S with
CCIN 58FA; and #EC2T/#EC2U with CCIN 58FB.
- A problem was fixed for incorrect recovery from a service
processor mailbox error that was causing the system IPL to fail with
the loss of all the PCIe links. If this occurs, the system will
normally re-IPL successfully.
- A problem was fixed for SR-IOV adapter failures when
running in shared mode in a Huge Dynamic DMA Window (HDDW) slot.
I/O slots are enabled with HDDW by using the I/O Adapter Enlarged
Capacity setting in the Advanced System Management Interface
(ASMI). This problem can be circumvented by moving the
SR-IOV adapter to a non-HDDW slot, or alternatively, disabling HDDW on
the system.
- A problem was fixed for system termination for a re-IPL
with power on with SRC B181E540 logged. The system can be
recovered by powering off and then IPLing. This problem occurs
infrequently and can be avoided by powering off the system between IPLs.
- A problem was fixed for bad DDR4 DIMMs that fail
initialization but are not guarded or called out, causing a system node
to fail during an IPL.
System firmware changes that affect certain systems
- For a shared memory partition, a problem was fixed
for Live Partition Mobility (LPM) migration hang after a Mover Service
Partition (MSP) failover in the early part of the migration. To
recover from the hang, a migration stop command must be given on the
HMC. Then the migration can be retried.
- For a shared memory partition, a problem was fixed
for Live Partition Mobility (LPM) migration failure to an indeterminate
state. This can occur if the Mover Service Partition (MSP) has a
failover that occurs when the migrating partition is in the state
of "Suspended." To recover from this problem, the partition must
be shut down and restarted.
- A problem was fixed for an AIX partition not showing the
location codes for the USB controller and ports T1 and T2.
When displaying location codes with OS commands or at the SMS menus,
the location of the USB controller (C13) is missing and ports T1 and T2
are swapped.
- On a system with a Cloud Management Console and a HMC Cloud
Connector, a problem was fixed for memory leaks in the Redfish server
causing Out of Memory (OOM) resets of the service processor.
- On a system with a partition with dedicated processors that
are set to allow processor sharing with "Allow when partition is
active" or "Allow always", a problem was fixed for a potential system
hang if the partition is booting or shutting down while Dynamic
Platform Optimizer (DPO) is running. As a work-around to the
problem, the processor sharing can be turned off before running DPO, or
avoid starting or shutting down dedicated partitions with processor
sharing while DPO is active.
- On a system with an AMS partition, a problem was fixed for
a Live Partition Mobility (LPM) migration failure when migrating from
P9 to a pre-FW860 P8 or P7 system. This failure can occur if the
P9 partition is in dedicated memory mode, and the Physical Page Table
(PPT) ratio is explicitly set on the HMC (rather than keeping the
default value) and the partition is then transitioned to Active Memory
Sharing (AMS) mode prior to the migration to the older system.
This problem can be avoided by using dedicated memory in the partition
being migrated back to the older system.