01VM930_145_035.html Power9 System Firmware
Applies to: 9040-MR9
This document provides information about the installation of Licensed Machine
or Licensed Internal Code, which is sometimes referred to generically as
microcode or firmware.
End Of Service Pack Support
Release Level 930 is in 'End of Service Pack Support' mode (as of October
2021). While IBM will still support systems at this Release Level, fixes will
only be provided at a higher Release Level (950). Product Engineering
recommends that you plan to upgrade to the 950 Release Level during your next
planned firmware maintenance window.
----------------------------------------------------------------------------------
Contents
* 1.0 Systems Affected
* 1.1 Minimum HMC Code Level
* 2.0 Important Information
* 2.1 IPv6 Support and Limitations
* 2.2 Concurrent Firmware Updates
* 2.3 Memory Considerations for Firmware Upgrades
* 2.4 SBE Updates
* 3.0 Firmware Information
* 3.1 Firmware Information and Description Table
* 4.0 How to Determine Currently Installed Firmware Level
* 5.0 Downloading the Firmware Package
* 6.0 Installing the Firmware
* 7.0 Firmware History
----------------------------------------------------------------------------------
1.0 Systems Affected
This package provides firmware for Power Systems E950 (9040-MR9) servers only.
The firmware level in this package is:
* VM930_145 / FW930.50
----------------------------------------------------------------------------------
1.1 Minimum HMC Code Level
This section describes the "Minimum HMC Code Level" required by the System
Firmware to complete the firmware installation process. When installing the
System Firmware, the HMC level must be equal to or higher than the "Minimum
HMC Code Level" before starting the system firmware update. If the HMC
managing the server targeted for the System Firmware update is running a code
level lower than the "Minimum HMC Code Level", the firmware update will not
proceed.
The Minimum HMC Code levels for this firmware for HMC x86, ppc64 or ppc64le
are listed below.
x86 - This term is used to reference the legacy HMC that runs on
x86/Intel/AMD hardware for both the 7042 Machine Type appliances and the
Virtual HMC that can run on the Intel hypervisors (KVM, VMware, Xen).
* The Minimum HMC Code level for this firmware is: HMC V9R1M930 (PTF
MH01810).
* Although the Minimum HMC Code level for this firmware is listed above,
HMC V9R2M951.2 (PTF MH01892) or higher is recommended to avoid an issue
that can cause the HMC to briefly lose connections to all servers, with
service events E2FF1409 and E23D040A being reported. This will cause all
running server tasks, such as a server firmware upgrade, to fail.
ppc64 or ppc64le - describes the Linux code that is compiled to run on
Power-based servers or LPARs (Logical Partitions).
* The Minimum HMC Code level for this firmware is: HMC V9R1M930 (PTF
MH01811).
* Although the Minimum HMC Code level for this firmware is listed above,
HMC V9R2M951.2 (PTF MH01893) or higher is recommended to avoid an issue
that can cause the HMC to briefly lose connections to all servers, with
service events E2FF1409 and E23D040A being reported. This will cause all
running server tasks, such as a server firmware upgrade, to fail.
For information concerning HMC releases and the latest PTFs, go to the
following URL to access Fix Central:
http://www-933.ibm.com/support/fixcentral/
For specific fix level information on key components of IBM Power Systems
running the AIX, IBM i and Linux operating systems, we suggest using the Fix
Level Recommendation Tool (FLRT):
http://www14.software.ibm.com/webapp/set2/flrt/home
NOTES:
- You must be logged in as hscroot in order for the firmware installation
to complete correctly.
- The Systems Director Management Console (SDMC) does not support this
System Firmware level.
2.0 Important Information
NovaLink levels earlier than the "NovaLink 1.0.0.16 Feb 2020 release" with
partitions running certain SR-IOV capable adapters are NOT supported at this
firmware release.
NovaLink levels earlier than "NovaLink 1.0.0.16 Feb 2020 release" do not
support IO adapter FCs EC2R/EC2S, EC2T/EC2U, EC3L/EC3M, EC66/EC67 with FW930
and later. If the adapter was already in use with FW910/920 at an older
NovaLink level, upgrading to FW930/940 will result in errors in NovaLink and
PowerVC which causes the loss of any management operation via NovaLink /
PowerVC combination. Upgrading systems in this configuration is not supported
at the older NovaLink levels. If the system is required to be at FW930/940 or
was shipped with FW930/940, NovaLink must first be updated to "NovaLink
1.0.0.16 Feb 2020 release" or later.
Possible partition crash when using Live Partition Mobility (LPM) or partition
hang when doing a concurrent firmware update
A very intermittent issue has been found in the IBM lab when using partition
mobility or firmware update. For a mobility operation, the issue can result in
a partition crash if the mobility target system is FW930.00, FW930.01 or
FW930.02. For a code update operation, the partition may hang. The recovery
is to reboot the partition after the crash or hang. This problem is fixed in
service pack FW930.03.
Downgrading firmware from any given release level to an earlier release level
is not recommended
Firmware downgrade warnings:
1) Adapter feature codes #EC2S/#EC2U, #EC3M, and #EC66, when configured in
SR-IOV shared mode in FW930 or later, even if originally configured in shared
mode in a pre-FW930 release, may not function properly if the system is
downgraded to a pre-FW930 release. The adapter should first be configured in
dedicated mode (i.e., taken out of SR-IOV shared mode) before downgrading to
a pre-FW930 release.
2) If partitions have been run in POWER9 compatibility mode in FW940, a
downgrade to an earlier release (pre-FW940) may cause a problem with the
partitions starting. To prevent this problem, the "server firmware" settings
must be reset by rebooting the partitions in "Power9_base" mode before doing
the downgrade.
If you feel that it is necessary to downgrade the firmware on your system to
an earlier release level, please contact your next level of support.
2.1 IPv6 Support and Limitations
IPv6 (Internet Protocol version 6) is supported in the System Management
Services (SMS) in this level of system firmware. There are several limitations
that should be considered. When configuring a network interface card (NIC) for
remote IPL, only the most recently configured protocol (IPv4 or IPv6) is
retained. For example, if the network interface card was previously configured
with IPv4 information and is now being configured with IPv6 information, the
IPv4 configuration information is discarded.
A single network interface card may only be chosen once for the boot device
list. In other words, the interface cannot be configured for the IPv6 protocol
and for the IPv4 protocol at the same time.
2.2 Concurrent Firmware Updates
Concurrent system firmware update is supported on HMC Managed Systems only.
Ensure that there are no RMC connection issues for any system partitions
prior to applying the firmware update. If there is an RMC connection failure to
a partition during the firmware update, the RMC connection will need to be
restored, and additional recovery actions for that partition will be required
to complete the partition firmware updates.
2.3 Memory Considerations for Firmware Upgrades
Firmware Release Level upgrades and Service Pack updates may consume
additional system memory.
Server firmware requires memory to support the logical partitions on the
server. The amount of memory required by the server firmware varies according
to several factors.
Factors influencing server firmware memory requirements include the following:
* Number of logical partitions
* Partition environments of the logical partitions
* Number of physical and virtual I/O devices used by the logical
partitions
* Maximum memory values given to the logical partitions
Generally, you can estimate the amount of memory required by server firmware
to be approximately 8% of the system installed memory. The actual amount
required will generally be less than 8%. However, there are some server models
that require an absolute minimum amount of memory for server firmware,
regardless of the previously mentioned considerations.
Additional information can be found at:
https://www.ibm.com/support/knowledgecenter/9040-MR9/p9hat/p9hat_lparmemory.htm
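As a rough sizing aid, the 8% rule of thumb above can be expressed as a short
calculation. This is only a sketch; the actual reservation varies by model and
configuration and is generally lower, and the function name is illustrative,
not an IBM-provided tool.

```python
def estimate_firmware_memory_gb(installed_memory_gb: float) -> float:
    """Upper-bound estimate of memory consumed by server firmware,
    using the ~8% rule of thumb; actual usage is generally lower."""
    return installed_memory_gb * 0.08

# Example: a server with 512 GB of installed memory reserves
# roughly 41 GB at most for server firmware.
print(estimate_firmware_memory_gb(512))
```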
2.4 SBE Updates
POWER9 servers contain Self-Boot Engines (SBEs), which are used to boot the
system. An SBE is internal to each POWER9 chip and is used to "self boot"
the chip. The SBE image is persistent and is only reloaded if a system
firmware update contains an SBE change. If there is an SBE change and the
system firmware update is concurrent, then the SBE update is delayed to the
next IPL of the CEC, which will add an additional 3-5 minutes per processor
chip in the system to the IPL. If there is an SBE change and the system
firmware update is disruptive, then the SBE update will add an additional
3-5 minutes per processor chip in the system to the IPL. During the SBE
update process, the HMC or op-panel will display service processor code
C1C3C213 for each of the SBEs being updated. This is a normal progress code
and the system boot should not be terminated by the user. The additional
time can be estimated at 12-20 minutes in total.
The SBE image is only updated with this service pack if the starting firmware
level is less than FW930.40.
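The delayed-update timing above can be sketched as a small helper, assuming
the stated 3-5 minutes per processor chip (the function name is illustrative):

```python
def sbe_update_ipl_delay(processor_chips: int) -> tuple:
    """Return the (minimum, maximum) additional IPL minutes when a
    pending SBE update is applied, at 3-5 minutes per chip."""
    return (3 * processor_chips, 5 * processor_chips)

# A system with 4 processor chips: prints (12, 20), matching the
# 12-20 minute total estimate above.
print(sbe_update_ipl_delay(4))
```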
----------------------------------------------------------------------------------
3.0 Firmware Information
Use the following examples as a reference to determine whether your
installation will be concurrent or disruptive. For systems that are not managed
by an HMC, the installation of system firmware is always disruptive.
Note: The concurrent levels of system firmware may, on occasion, contain
fixes that are known as Deferred and/or Partition-Deferred. Deferred fixes can
be installed concurrently, but will not be activated until the next IPL.
Partition-Deferred fixes can be installed concurrently, but will not be
activated until a partition reactivate is performed. Deferred and/or
Partition-Deferred fixes, if any, will be identified in the "Firmware Update
Descriptions" table of this document. For these types of fixes (Deferred and/or
Partition-Deferred) within a service pack, only the fixes in the service pack
which cannot be concurrently activated are deferred.
Note: The file names and service pack levels used in the following examples
are for clarification only, and are not necessarily levels that have been, or
will be released.
System firmware file naming convention:
01VMxxx_yyy_zzz
* xxx is the release level
* yyy is the service pack level
* zzz is the last disruptive service pack level
NOTE: Values of the service pack and last disruptive service pack level (yyy
and zzz) are only unique within a release level (xxx). For example,
01VM900_040_040 and 01VM910_040_045 are different service packs.
An installation is disruptive if:
* The release levels (xxx) are different. Example:
Currently installed release is 01VM900_040_040, new release is 01VM910_050_050.
* The service pack level (yyy) and the last disruptive service pack level
(zzz) are the same. Example: VM910_040_040 is disruptive, no
matter what level of VM910 is currently installed on the system.
* The service pack level (yyy) currently installed on the system is lower
than the last disruptive service pack level (zzz) of the service pack to be
installed. Example: Currently installed service pack is
VM910_040_040 and new service pack is VM910_050_045.
An installation is concurrent if:
The release level (xxx) is the same, and
The service pack level (yyy) currently installed on the system is the same or
higher than the last disruptive service pack level (zzz) of the service pack to
be installed.
Example: Currently installed service pack is VM910_040_040, new service pack
is VM910_041_040.
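The rules above can be sketched as a small helper. This is illustrative only;
the function names are not an IBM-provided tool, and the level strings follow
the 01VMxxx_yyy_zzz naming convention described above.

```python
import re

def parse_level(name: str):
    """Parse '01VMxxx_yyy_zzz' into (release, service_pack, last_disruptive)."""
    m = re.fullmatch(r"01VM(\d+)_(\d+)_(\d+)", name)
    if not m:
        raise ValueError(f"unrecognized firmware level: {name}")
    release, service_pack, last_disruptive = (int(g) for g in m.groups())
    return release, service_pack, last_disruptive

def is_disruptive(installed: str, new: str) -> bool:
    """Apply the disruptive-vs-concurrent rules described above."""
    i_rel, i_sp, _ = parse_level(installed)
    n_rel, n_sp, n_ld = parse_level(new)
    if i_rel != n_rel:   # different release levels: always disruptive
        return True
    if n_sp == n_ld:     # new pack equals its own last disruptive level
        return True
    return i_sp < n_ld   # installed pack below the new last disruptive level

print(is_disruptive("01VM910_040_040", "01VM910_050_045"))  # True: disruptive
print(is_disruptive("01VM910_040_040", "01VM910_041_040"))  # False: concurrent
```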
3.1 Firmware Information and Description Table
Filename: 01VM930_145_035.rpm
Size: 129996415
Checksum: 51714
md5sum: 2cbb872d362e9ce23c101761b6c5df2f
Note: The Checksum can be found by running the AIX sum command against the
rpm file (only the first 5 digits are listed).
For example: sum 01VM930_145_035.rpm
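On systems without the AIX sum command, the md5sum above can be verified with
a short script such as the following (a minimal sketch; the function name is
illustrative):

```python
import hashlib

def md5_of_file(path: str) -> str:
    """Return the hex md5 digest of a file, read in 1 MB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage against the downloaded package:
# md5_of_file("01VM930_145_035.rpm") should return
# "2cbb872d362e9ce23c101761b6c5df2f"
```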
VM930
For Impact, Severity, and other firmware definitions, please refer to the
'Glossary of firmware terms' at the following URL:
http://www14.software.ibm.com/webapp/set2/sas/f/power5cm/home.html#termdefs
The complete Firmware Fix History for this Release Level can be reviewed at
the following URL:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VM-Firmware-Hist.html
VM930_145_035 / FW930.50
9/17/21 Impact: Data Severity: HIPER
New features and functions
* Support was changed to disable Service Location Protocol (SLP) by default
for newly shipped systems or systems that are reset to manufacturing defaults.
This change has been made to reduce memory usage on the service processor by
disabling a service that is not needed for normal system operations. This
change can be made manually for existing customers by changing it in ASMI with
the options "ASMI -> System Configuration -> Security -> External Services
Management" to disable the service.
System firmware changes that affect all systems
* HIPER: A problem was fixed which may occur on a target system following a
Live Partition Mobility (LPM) migration of an AIX partition utilizing Active
Memory Expansion (AME) with 64 KB page size enabled using the vmo tunable: "vmo
-ro ame_mpsize_support=1". The problem may result in AIX termination, file
system corruption, application segmentation faults, or undetected data
corruption.
Note: If you are doing an LPM migration of an AIX partition utilizing AME
and 64 KB page size enabled involving a POWER8 or POWER9 system, ensure you
have a Service Pack including this change for the appropriate firmware level on
both the source and target systems.
* A problem was fixed for a missing hardware callout and guard for a
processor chip failure with SRC BC8AE540 and signature "ex(n0p0c5) (L3FIR[28])
L3 LRU array parity error".
* A problem was fixed for a missing hardware callout and guard for a
processor chip failure with Predictive Error (PE) SRC BC70E540 and signature
"ex(n1p2c6) (L2FIR[19]) Rc or NCU Pb data CE error". The PE error occurs after
the number of CE errors reaches a threshold of 32 errors per day.
* A problem was fixed for a rare failure for an SPCN I2C command sent to a
PCIe I/O expansion drawer that can occur when service data is manually
collected with the hypervisor macros "xmsvc -dumpCCData" and "xmsvc
-logCCErrBuffer". If the hypervisor macro "xmsvc" is run to gather service
data and a CMC Alert occurs at the same time that requires an SPCN command to
clear the alert, then the I2C commands may be improperly serialized, resulting
in an SPCN I2C command failure. To prevent this problem, avoid using xmsvc
-dumpCCData and xmsvc -logCCErrBuffer to collect service data until this fix is
applied.
* A problem was fixed for a system hang or terminate with SRC B700F105 logged
during a Dynamic Platform Optimization (DPO) that is running with a partition
in a failed state but that is not shut down. If DPO attempts to relocate a
dedicated processor from the failed partition, the problem may occur. This
problem can be avoided by doing a shutdown of any failed partitions before
initiating DPO.
* A problem was fixed for a system crash with HMC message HSCL025D and SRC
B700F103 logged on a Live Partition Mobility (LPM) inactive migration attempt
that fails. The trigger for this problem is inactive migration that fails a
compatibility check between the source and target systems.
* A problem was fixed for time-out issues in Power Enterprise Pools 1.0 (PEP
1.0) that can affect performance by having non-optimal assignments of
processors and memory to the server logical partitions in the pool. For this
problem to happen, the server must be in a PEP 1.0 pool and the HMC must take
longer than 2 minutes to provide the PowerVM hypervisor with the information
about pool resources owned by this server. The problem can be avoided by
running the HMC optmem command before activating the partitions.
* A problem was fixed for a Live Partition Mobility (LPM) migration that
failed with the error "HSCL3659 The partition migration has been stopped
because orchestrator detected an error" on the HMC. This intermittent and
rare problem is triggered by the HMC being overrun with unneeded
LPM message requests from the hypervisor, which can cause a timeout in HMC
queries that results in the LPM operation being aborted. The workaround is to
retry the LPM migration, which will normally succeed.
* A problem was fixed for a system becoming unresponsive when a processor
goes into a tight loop condition with an SRC B17BE434, indicating that the
service processor has lost communication with the hypervisor. This problem is
triggered by an SR-IOV shared mode adapter going through a recovery VF reset
for an error condition, without releasing a critical lock. If a later reset is
then needed for the VF, the problem can occur. The problem is infrequent
because a combination of errors needs to occur in a specific sequence for the
adapter.
* A problem was fixed for a misleading SRC B7006A20 (Unsupported Hardware
Configuration) that can occur for some error cases for PCIe #EMX0 expansion
drawers that are connected with copper cables. For cable unplug errors, the
SRC B7006A88 (Drawer TrainError) should be shown instead of the B7006A20. If
a B7006A20 is logged against copper cables with the signature "Prc
UnsupportedCableswithFewerChannels" and the message "NOT A 12CHANNEL CABLE",
this error should instead follow the service actions for a B7006A88 SRC.
* A problem was fixed where the Floating Point Unit Computational Test, which
should be set to "staggered" by default, has been changed in some circumstances
to be disabled. If you wish to re-enable this option, this fix is required.
After applying this service pack, do the following steps:
1) Sign in to the Advanced System Management Interface (ASMI).
2) Select Floating Point Computational Unit under the System Configuration
heading and change it from disabled to what is needed: staggered (run once per
core each day) or periodic (a specified time).
3) Click "Save Settings".
* A problem was fixed for a system termination with SRC B700F107 following a
time facility processor failure with SRC B700F10B. With the fix, the
transparent replacement of the failed processor will occur for the B700F10B if
there is a free core, with no impact to the system.
* A problem was fixed for an incorrect "Power Good fault" SRC logged for an
#EMX0 PCIe3 expansion drawer on the lower CXP cable of B7006A85 (AOCABLE,
PCICARD). The correct SRC is B7006A86 (PCICARD, AOCABLE).
* A problem was fixed for certain SR-IOV adapters that can have B400FF02 SRCs
logged with LPA dumps during a vNIC remove operation. In most cases, the
operations should recover and complete. This fix updates the adapter firmware
to XX.29.2003 for the following Feature Codes and CCINs: #EC2R/EC2S with CCIN
58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CE; and #EC66/EC67 with
CCIN 2CF3.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
* A problem was fixed for certain SR-IOV adapters not being able to create
the maximum number of VLANs that are supported for a physical port. The SR-IOV
adapters affected have the following Feature Codes and CCINs: #EC66/#EC67 with
CCIN 2CF3.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
* A problem was fixed for an SR-IOV adapter in shared mode configured as
Virtual Ethernet Port Aggregator (VEPA) where unmatched unicast packets were
not forwarded to the promiscuous mode VF. This is an updated and corrected
version of a similar fix delivered in the FW940.40 service pack that had side
effects of network issues such as ping failure or inability to establish TCP
connections.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
* The following problems were fixed for certain SR-IOV adapters:
1) An error was fixed that occurs during a VNIC failover where the VNIC
backing device has a physical port down due to an adapter internal error with
an SRC B400FF02 logged. This is an improved version of the fix delivered in
the earlier service pack FW930.40 for adapter firmware 11.4.415.36, and it
significantly reduces the frequency of the error.
2) A problem was fixed for an adapter in SR-IOV shared mode which may cause a
network interruption and SRCs B400FF02 and B400FF04 logged. The problem occurs
infrequently during normal network traffic.
The fixes update the adapter firmware to 11.4.415.38 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with CCIN
2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and #EN0K/#EN0L
with CCIN 2CC1.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
* A problem was fixed for the Device Description in a System Plan related to
Crypto Coprocessors and NVMe cards that were only showing the PCI vendor and
device ID of the cards. This is not enough information to verify which card is
installed without looking up the PCI IDs first. With the fix, more
specific/useful information is displayed and this additional information does
not have any adverse impact on sysplan operations. The problem is seen every
time a System Plan is created for an installed Crypto Coprocessor or NVMe card.
* A problem was fixed for possible partition errors following a concurrent
firmware update from FW910 or later. A precondition for this problem is that
DLPAR operations of either physical or virtual I/O devices must have occurred
prior to the firmware update. The error can take the form of a partition crash
at some point following the update. The frequency of this problem is low. If
the problem occurs, the OS will likely report a DSI (Data Storage Interrupt)
error. For example, AIX produces a DSI_PROC log entry. If the partition does
not crash, it is also possible that some subsequent I/O DLPAR operations will
fail.
* A problem was fixed for some serviceable events specific to the reporting
of EEH errors not being displayed on the HMC. The sending of an associated
call home event, however, was not affected. This problem is intermittent and
infrequent.
System firmware changes that affect certain systems
* For a system with a partition running AIX 7.3, a problem was fixed for
running Live Update or Live Partition Mobility (LPM). AIX 7.3 supports Virtual
Persistent Memory (PMEM), which cannot be used with these operations, but the
problem made it appear that PMEM was configured when it was not. The
Live Update and LPM operations always fail when attempted on AIX 7.3. Here is
the failure output from a Live Update Preview:
"1430-296 FAILED: not all devices are virtual devices.
nvmem0
1430-129 FAILED: The following loaded kernel extensions are not known to be
safe for Live Update:
nvmemdd
...
1430-218 The live update preview failed.
0503-125 geninstall: The lvupdate call failed.
Please see /var/adm/ras/liveupdate/logs/lvupdlog for details."
* On systems with only Integrated Facility for Linux (IFL) processors and
AIX partitions, a problem was fixed for performance issues for IFL VMs (Linux
and VIOS). This problem occurs if AIX partitions are active on a system with
IFL-only cores. As a workaround, AIX partitions should not be activated on an
IFL-only system. With the fix, the activation of AIX partitions is blocked on
an IFL-only system. If this fix is installed concurrently with AIX partitions
running, these partitions will be allowed to continue to run until they are
powered off. Once powered off, the AIX partitions will not be allowed to be
activated again on the IFL-only system.
VM930_139_035 / FW930.41
5/25/21 Impact: Availability Severity: HIPER
System firmware changes that affect all systems
* HIPER/Pervasive: A problem was fixed to be able to detect a failed PFET
sensing circuit in a core at runtime, and prevent a system failure with an
incomplete state when a core fails to wake up. The failed core is detected on
the subsequent IPL. With the fix, a core is called out with the PFET failure
with SRC BC13090F and hardware description "CME detected malfunctioning of PFET
headers." to better isolate the error with a correct callout.
* HIPER/Pervasive: A problem was fixed for a checkstop due to an internal
Bus transport parity error or a data timeout on the Bus. This is a very rare
problem that requires a particular SMP transport link traffic pattern and
timing. Both the traffic pattern and timing are very difficult to achieve with
customer application workloads. The fix will have no measurable effect on most
customer workloads, although highly intensive OLAP-like workloads may see up to
a 2.5% impact.
VM930_134_035 / FW930.40
3/10/21 Impact: Availability Severity: SPE
New features and functions
* Added support in ASMI for a new panel to do Self-Boot Engine (SBE)
SEEPROM validation. This validation can only be run at the service processor
standby state.
If the validation detects a problem, IBM recommends that the system not be used
and that IBM service be called.
System firmware changes that affect all systems
* DEFERRED: A problem was fixed for a rare Voltage Regulator Module (VRM)
power fault with an SRC 11002700 logged for the VRM failure followed by an SRC
11002610 system crash. The trigger for this problem is intense workloads that
cause what appear to be input over-current conditions. A re-IPL of the system
is needed to activate this fix.
* A problem was fixed for the On-Chip Controller (OCC) going into safe mode
(causes loss of processor performance) with SRC BC702616 logged. This problem
can be triggered by the loss of a power supply (an oversubscription event).
The problem can be circumvented by fixing the issue with the power supply.
* A problem was fixed for certain SR-IOV adapters that have a rare,
intermittent error with B400FF02 and B400FF04 logged, causing a reboot of the
VF. The error is handled and recovered without any user intervention needed.
The SR-IOV adapters affected have the following Feature Codes and CCINs:
#EC2R/#EC2S with CCIN 58FA; #EC2T/#EC2U with CCIN 58FB; #EC3L/#EC3M with CCIN
2CE; and #EC66/#EC67 with CCIN 2CF3.
* A problem was fixed for not logging SRCs for certain cable pulls from the
#EMX0 PCIe expansion drawer. With the fix, the previously undetected cable
pulls are now detected and logged with SRC B7006A8B and B7006A88 errors.
* A problem was fixed for a rare system hang with SRC BC70E540 logged that
may occur when adding processors through licensing or the system throttle state
changing (becoming throttled or unthrottled) on an Enterprise Pool system. The
trigger for the problem is a very small timing window in the hardware as the
processor loads are changing.
* A problem was fixed for the System Management Services (SMS) menu "Device
IO Information" option being incorrect when displaying the capacity for an NVMe
or Fibre Channel (FC) NVMe disk. This problem occurs every time the data is
displayed.
* A problem was fixed for an unrecoverable UE SRC B181BE12 being logged if a
service processor message acknowledgment is sent to a Hostboot instance that
has already shut down. This is a harmless error log and it should have been
marked as an informational log.
* A problem was fixed for Time of Day (TOD) being lost for the real-time
clock (RTC) when the system initializes from AC power off to service processor
standby state with an SRC B15A3303 logged. This is a very rare problem that
involves a timing problem in the service processor kernel that can be recovered
by setting the system time with ASMI.
* A problem was fixed for intermittent failures for a reset of a Virtual
Function (VF) for SR-IOV adapters during Enhanced Error Handling (EEH) error
recovery. This is triggered by EEH events at a VF level only, not at the
adapter level. The error recovery fails if a data packet is received by the VF
while the EEH recovery is in progress. A VF that has failed can be recovered
by a partition reboot or a DLPAR remove and add of the VF.
* A problem was fixed for performance degradation of a partition due to task
dispatching delays. This may happen when a processor chip has all of its
shared processors removed and converted to dedicated processors. This could be
driven by DLPAR remove of processors or Dynamic Platform Optimization (DPO).
* The following problems were fixed for certain SR-IOV adapters:
1) An error was fixed that occurs during VNIC failover where the VNIC backing
device has a physical port down with an SRC B400FF02 logged.
2) A problem was fixed for adding a new logical port with a PVID assigned
that causes traffic on that VLAN to be dropped by other interfaces on the
same physical port that use OS VLAN tagging for that same VLAN ID. Each time
a logical port with a non-zero PVID that is the same as an existing VLAN is
dynamically added to a partition, or is activated as part of a partition
activation, the traffic flow stops for other partitions with OS-configured
VLAN devices with the same VLAN ID. This problem can be recovered
by configuring an IP address on the logical port with the non-zero PVID and
initiating traffic flow on this logical port. This problem can be avoided by
not configuring logical ports with a PVID if other logical ports on the same
physical port are configured with OS VLAN devices.
This fix updates the adapter firmware to 11.4.415.36 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with CCIN
2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and #EN0K/#EN0L
with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* A problem was fixed for incomplete periodic data gathered by IBM Service
for #EMX0 PCIe expansion drawer predictive error analysis. The service data
is missing the PLX (PCIe switch) data that is needed for the debug of certain
errors.
* A problem was fixed for a partition hang in shutdown with SRC B200F00F
logged. The trigger for the problem is an asynchronous NX accelerator job (such
as gzip or NX842 compression) in the partition that fails to clean up
successfully. This is intermittent and does not cause a problem until a
shutdown of the partition is attempted. The hung partition can be recovered by
performing an LPAR dump on the hung partition. When the dump has been
completed, the partition will be properly shut down and can then be restarted
without any errors.
VM930_116_035 / FW930.30
10/21/20 Impact: Data Severity: HIPER
New features and functions
* DEFERRED: Host firmware support for anti-rollback protection. This feature
implements firmware anti-rollback protection as described in NIST SP 800-147B
"BIOS Protection Guidelines for Servers". Firmware is signed with a "secure
version". Support added for a new menu in ASMI called "Host firmware security
policy" to update this secure version level at the processor hardware. Using
this menu, the system administrator can enable the "Host firmware secure
version lock-in" policy, which will cause the host firmware to update the
"minimum secure version" to match the currently running firmware. Use the
"Firmware Update Policy" menu in ASMI to show the current "minimum secure
version" in the processor hardware along with the "Minimum code level
supported" information. The secure boot verification process will block
installing any firmware secure version that is less than the "minimum secure
version" maintained in the processor hardware.
Prior to enabling the "lock-in" policy, it is recommended to accept the
current firmware level.
WARNING: Once lock-in is enabled and the system is booted, the "minimum
secure version" is updated and there is no way to roll it back to allow
installing firmware releases with a lesser secure version.
* Enable periodic logging of internal component operational data for the
PCIe3 expansion drawer paths. The logging of this data does not impact the
normal use of the system.
System firmware changes that affect all systems
* HIPER/Pervasive: A problem was fixed for certain SR-IOV adapters where
frequent resets of adapter Virtual Functions (VFs), or transmission stalls,
could lead to potential undetected data corruption.
The following additional fixes are also included:
1) The VNIC backing device goes to a powered off state during a VNIC failover
or Live Partition Mobility (LPM) migration. This failure is intermittent and
very infrequent.
2) Adapter time-outs with SRC B400FF01 or B400FF02 logged.
3) Adapter time-outs related to adapter commands becoming blocked with SRC
B400FF01 or B400FF02 logged.
4) VF function resets occasionally not completing quickly enough resulting in
SRC B400FF02 logged.
This fix updates the adapter firmware to 11.4.415.33 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with CCIN
2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and #EN0K/#EN0L
with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* DEFERRED: A problem was fixed for a slow down in PCIe adapter performance
or loss of adapter function caused by a reduction in interrupts available to
service the adapter. This problem can be triggered over time by partition
activations or DLPAR adds of PCIe adapters to a partition. This fix must be
applied and the system re-IPLed for existing adapter performance problems to be
resolved.
* A rare problem was fixed for a checkstop during an IPL that fails to
isolate and guard the problem core. An SRC is logged with B1xxE5xx and an
extended hex word 8 xxxxDD90. With the fix, the suspected failing hardware is
guarded.
* A problem was fixed to allow quicker recovery of PCIe links for the #EMX0
PCIe expansion drawer for a run-time fault with B7006A22 logged. The time for
recovery attempts can exceed six minutes on rare occasions which may cause I/O
adapter failures and failed nodes. With the fix, the PCIe links will recover
or fail faster (in the order of seconds) so that redundancy in a cluster
configuration can be used with failure detection and failover processing by
other hosts, if available, in the case where the PCIe links fail to recover.
* A problem was fixed for system memory not returned after create and delete
of partitions, resulting in slightly less memory available after configuration
changes in the systems. With the fix, an IPL of the system will recover any of
the memory that was orphaned by the issue.
* A problem was fixed for certain SR-IOV adapters that do not support the
"Disable Logical Port" option from the HMC but the HMC was allowing the user to
select this, causing incorrect operation. The invalid state of the logical
port causes an "Enable Logical Port" to fail in a subsequent operation. With
the fix, the HMC provides the message that the "Disable Logical Port" is not
supported for the adapter. This affects the adapters with the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with CCIN
2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and #EN0K/#EN0L
with CCIN 2CC1.
* A problem was fixed for SR-IOV adapters having an SRC B400FF04 logged when
a VF is reset. This is an infrequent issue and can occur for a Live Partition
Mobility migration of a partition or during vNIC (Virtual Network Interface
Controller) failovers where many resets of VFs are occurring. This error is
recovered automatically with no impact on the system.
* A problem was fixed to remove unneeded resets of a Virtual Function (VF)
for SR-IOV adapters, providing for improved performance of the startup or
recovery time of the VF. This performance difference may be noticed during a
Live Partition Mobility migration of a partition or during vNIC (Virtual
Network Interface Controller) failovers where many resets of VFs are occurring.
* A problem was fixed for TPM hardware failures not causing SRCs to be logged
with a call out if the system is configured in ASMI to not require TPM for the
IPL. If this error occurs, the user would not find out about it until they
needed to run with TPM on the IPL. With the fix, the error logs and
notifications will occur regardless of how the TPM is configured.
* A problem was fixed for PCIe resources under a deconfigured PCIe Host
Bridge (PHB) being shown on the OS host as available resources when they should
be shown as deconfigured. While this fix can be applied concurrently, a re-IPL
of the system is needed to correct the state of the PCIe resources if a PHB had
already been deconfigured.
* A problem was fixed for the REST/Redfish interface to change the success
return code for object creation from "200" to "201". A "200" status code
indicates a generic success, while a "201" status code indicates that the
request was successful and, as a result, a resource has been created. The
Redfish Ruby Client, "redfish_client", may fail a transaction if a "200"
status code is returned when "201" is expected.
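The distinction between the two success codes can be sketched in a short, generic example (this is an illustration of the HTTP semantics involved, not the redfish_client implementation):

```python
from http import HTTPStatus

def creation_succeeded(status_code: int) -> bool:
    # With the fix, Redfish object creation returns 201 (Created); a strict
    # client such as redfish_client rejects 200 (OK) for a create request.
    return status_code == HTTPStatus.CREATED  # 201

def creation_succeeded_lenient(status_code: int) -> bool:
    # A client that must tolerate pre-fix firmware during a mixed-level
    # transition could accept either success code.
    return status_code in (HTTPStatus.OK, HTTPStatus.CREATED)
```

A strict client written against the fixed firmware would use the first check; the lenient variant is only a transitional accommodation.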
* A problem was fixed for a concurrent maintenance "Repair and Verify" (R&V)
operation for a #EMX0 fanout module that fails with an "Unable to isolate the
resource" error message. This should occur only infrequently for cases where a
physical hardware failure has occurred which prevents access to slot power
controls. This problem can be worked around by bringing up the "PCIe Hardware
Topology" screen from either ASMI or the HMC after the hardware failure but
before the concurrent repair is attempted. This will avoid the problem with
the PCIe slot isolation. These steps can also be used to recover from the
error to allow the R&V repair to be attempted again.
* A problem was fixed for certain large I/O adapter configurations having the
PCI link information truncated on the PCI-E topology display shown with ASMI
and the HMC. Because of the truncation, individual adapters may be missing on
the PCI-E topology screens.
* A problem was fixed for a rare system hang that can occur when a page of
memory is being migrated. Page migration (memory relocation) can occur for a
variety of reasons, including predictive memory failure, DLPAR of memory, and
normal operations related to managing the page pool resources.
* A problem was fixed for utilization statistics for commands such as HMC
lslparutil and third-party lpar2rrd that do not accurately represent CPU
utilization. The values are incorrect every time for a partition that is
migrated with Live Partition Mobility (LPM). Power Enterprise Pools 2.0 is not
affected by this problem. If this problem has occurred, here are three
possible recovery options:
1) Re-IPL the target system of the migration.
2) Or delete and recreate the partition on the target system.
3) Or perform an inactive migration of the partition. The cycle values get
zeroed in this case.
* A problem was fixed for running PCM on a system with SR-IOV adapters in
shared mode that results in an "Incomplete" system state with certain
hypervisor tasks deadlocked. This problem is rare and is triggered when using
SR-IOV adapters in shared mode and gathering performance statistics with PCM
(Performance Collection and Monitoring) and also having a low level error on an
adapter. The only way to recover from this condition is to re-IPL the system.
* A problem was fixed for an enhanced PCIe expansion drawer FPGA reset
causing EEH events from the fanout module or cable cards that disrupt the PCIe
lanes for the PCIe adapters. This problem affects systems with the PCIe
expansion drawer enhanced fanout module (#EMXH) and the enhanced cable card (
#EJ20).
The error is associated with the following SRCs being logged:
B7006A8D with PRC 37414123 (XmPrc::XmCCErrMgrBearPawPrime |
XmPrc::LocalFpgaHwReset)
B7006A8E with PRC 3741412A (XmPrc::XmCCErrMgrBearPawPrime |
XmPrc::RemoteFpgaHwReset)
If the EEH errors occur, the OS device drivers automatically recover but with
a reset of affected PCIe adapters that would cause a brief interruption in the
I/O communications.
* A problem was fixed for the FRU callout lists for SRCs B7006A2A and
B7006A2B possibly not including the FRU containing the PCIe switch as the
second FRU in the callout list. The card/drive in the slot is the first
callout and the FRU containing the PCIe switch should be the second FRU in the
callout list. This problem occurs when the PCIe slot is on a different planar
than the PCIe switch backing the slot. This impacts the NVMe backplanes (P2
with slots C1-C4) hosting the PCIe backed SSD NVMe U.2 modules that have
feature codes #EC5J and #EC5K. As a workaround for B7006A2A and B7006A2B
errors where the callout FRU list is processed and the problem is not resolved,
consider replacing the backplane (which includes the PCIe switch) if this was
omitted in the FRU callout list.
* A problem was fixed for a PCIe3 expansion drawer cable whose error logs for
a single lane failure are hidden. This happens whenever a single lane error
occurs. Subsequent lane failures are not hidden and have visible error logs.
Without the fix, the hidden or informational logs would need to be examined to
gather more information for the failing hardware.
* A problem was fixed for mixing modes on the ports of SR-IOV adapters that
causes SRCs B200A161, B200F011, B2009014 and B400F104 to be logged on boot of
the failed adapter. This error happens when one port of the adapter is changed
to option 1 with a second port set at either option 0 or option 2. The error
can be cleared by taking the adapter out of SR-IOV shared mode. The SR-IOV
adapters affected have the following Feature Codes and CCINs: #EC2R/#EC2S with
CCIN 58FA; #EC2T/#EC2U with CCIN 58FB; #EC3L/#EC3M with CCIN 2CEC; and
#EC66/#EC67 with CCIN 2CF3.
* A problem was fixed for a partition configured with a large number
(approximately 64) of Virtual Persistent Memory (PMEM) LUNs hanging during the
partition activation with a CA00E134 checkpoint SRC posted. Partitions
configured with approximately 64 PMEM LUNs will likely hang and the greater the
number of LUNs, the greater the possibility of the hang. The circumvention for
this problem is to reduce the number of PMEM LUNs to 64 or less in order to
boot successfully. The PMEM LUNs are also known as persistent memory volumes
and can be managed using the HMC. For more information on this topic, refer to
https://www.ibm.com/support/knowledgecenter/POWER9/p9efd/p9efd_lpar_pmem_settings.htm
.
* A problem was fixed for non-optimal On-Chip Controller (OCC) processor
frequency adjustments when system power limits or user power caps are
exceeded. When a workload causes power limits or caps to be exceeded, there
can be large frequency swings for the processors and a processor chip can get
stuck at minimum frequency. With the fix, the OCC now waits for new power
readings when changing the processor frequency and uses a master power capping
frequency to keep all processors at the same frequency. As a workaround for
this problem, do not set a power cap or run a workload that would exceed the
system power limit.
* A problem was fixed for mixing memory DIMMs with different timings
(different vendors) under the same memory controller that fail with an SRC
BC20E504 error and DIMMs deconfigured. This is an "MCBIST_BRODCAST_OUT_OF_SYNC"
error. The loss of memory DIMMs can result in an IPL failure. This problem can
happen if the memory DIMMs have a certain level of timing differences. If the
timings are not compatible, the failure will occur on the IPL during the memory
training. To circumvent this problem, each memory controller should have only
memory DIMMs from the same vendor plugged.
* A problem was fixed for the Self Boot Engine (SBE) going to termination
with an SRC B150BA8D logged when booting on a bad core. Once this happens, this
error will persist as the bad core is not deconfigured. To recover from this
error and be able to IPL, the bad core must be manually deconfigured. With the
fix, the failing core is deconfigured and the SBE is reconfigured to use
another core so the system is able to IPL.
* A problem was fixed for guard clearing where a specific unguard action may
cause other unrelated predictive and manual guards to also be cleared.
* A problem was fixed for an infrequent issue after a Live Partition Mobility
(LPM) operation from a POWER9 system to a POWER8 or POWER7 system. The issue
may cause unexpected OS behavior, which may include loss of interrupts, device
time-outs, or delays in dispatching. Rebooting the affected target partition
will resolve the problem.
* A problem was fixed for a partition crash or hang following a partition
activation or a DLPAR add of a virtual processor. For partition activation,
this issue is only possible for a system with a single partition owning all
resources. For DLPAR add, the issue is extremely rare.
* A problem was fixed for a DLPAR remove of memory from a partition that
fails if the partition contains 65535 or more LMBs. With 16MB LMBs, this error
threshold is 1 TB of memory. With 256 MB LMBs, it is 16 TB of memory. A reboot
of the partition after the DLPAR will remove the memory from the partition.
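The memory thresholds quoted above follow directly from the 65535-LMB limit; a quick arithmetic check (illustrative only):

```python
# Memory size at which a partition reaches the 65535-LMB threshold,
# for each LMB size mentioned in the fix description.
LMB_LIMIT = 65535

def threshold_tb(lmb_mb: int) -> float:
    # Convert (LMB count x LMB size in MB) to TB.
    return LMB_LIMIT * lmb_mb / (1024 * 1024)

print(f"16 MB LMBs:  ~{threshold_tb(16):.0f} TB")   # ~1 TB
print(f"256 MB LMBs: ~{threshold_tb(256):.0f} TB")  # ~16 TB
```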
* A problem was fixed for incorrect run-time deconfiguration of a processor
core with SRC B700F10B. This problem can be circumvented by a reconfiguration
of the processor core but this should only be done with the guidance of IBM
Support to ensure the core is good.
* A problem was fixed for Live Partition Mobility (LPM) being shown as
enabled at the OS when it has been disabled by the ASMI command line using the
server processor command of "cfcuod -LPM OFF". LPM is actually disabled and
the status shows correctly on the HMC. The status on the OS can be ignored
(for example as shown by the AIX command "lparstat -L") as LPM will not be
allowed to run when it is disabled.
* A problem was fixed for a VIOS, AIX, or Linux partition hang during an
activation at SRC CA000040. This will occur on a system that has been running
more than 814 days when the boot of the partition is attempted if the
partitions are in POWER9_base or POWER9 processor compatibility mode.
A workaround to this problem is to re-IPL the system or to change the failing
partition to POWER8 compatibility mode.
* A problem was fixed for performance tools perfpmr, tprof and pex that may
not be able to collect data for the event-based options.
This can occur any time an OS thread becomes idle. When the processor cores
are assigned to the next active process, the performance registers may be
disabled.
* A problem was fixed for a system hang and HMC "Incomplete" state that may
occur when a partition hangs in shutdown with SRC B200F00F logged. The trigger
for the problem is an asynchronous NX accelerator job (such as gzip) in the
partition that fails to clean up successfully. This is intermittent and does
not cause a problem until a shutdown of the partition is attempted.
* A problem was fixed for an SRC B7006A99 informational log that is now posted
as a Predictive error with a call out of the CXP cable FRU. This fix improves
FRU isolation for cases where a CXP cable alert causes a B7006A99 prior to a
B7006A22 or B7006A8B. Without the fix, the SRC B7006A99 is informational
and the latter SRCs cause a larger hardware replacement even though the earlier
event identified a probable cause for the cable FRU.
* A problem was fixed for a security vulnerability for the Self Boot Engine
(SBE). The SBE can be compromised from the service processor to allow
injection of malicious code. An attacker that gains root access to the service
processor could compromise the integrity of the host firmware and bypass the
host firmware signature verification process. This compromised state can not be
detected through TPM attestation. This is Common Vulnerabilities and Exposures
issue number CVE-2021-20487.
System firmware changes that affect certain systems
* On systems with AIX and Linux partitions, a problem was fixed for AIX and
Linux partitions that crash or hang when reporting any of the following
Partition Firmware RTAS ASSERT rare conditions:
1) SRC BA33xxxx errors - Memory allocation and management errors.
2) SRC BA29xxxx errors - Partition Firmware internal stack errors.
3) SRC BA00E8xx errors - Partition Firmware initialization errors during
concurrent firmware update or Live Partition Mobility (LPM) operations.
This problem should be very rare. If the problem does occur, a partition
reboot is needed to recover from the error.
VM930_101_035 / FW930.20
02/27/20 Impact: Availability Severity: HIPER
New features and functions
* Support was added for real-time data capture for PCIe3 expansion drawer
(#EMX0) cable card connection data via resource dump selector on the HMC or in
ASMI on the service processor. Using the resource selector string of "xmfr
-dumpccdata" will non-disruptively generate an RSCDUMP type of dump file that
has the current cable card data, including data from cables and the retimers.
System firmware changes that affect all systems
* HIPER/Pervasive: A problem was fixed for a possible system crash and HMC
"Incomplete" state when a logical partition (LPAR) power off after a dynamic
LPAR (DLPAR) operation fails for a PCIe adapter. This scenario is likely to
occur during concurrent maintenance of PCIe adapters or for #EMX0 components
such as PCIe3 Cable adapters, Active Optical or copper cables, fanout modules,
chassis management cards, or midplanes. The DLPAR fail can leave page table
mappings active for the adapter, causing the problems on the power down of the
LPAR. If the system does not crash, the DLPAR will fail if it is retried until
a platform IPL is performed.
* HIPER/Pervasive: A problem was fixed for an HMC "Incomplete" state for a
system after the HMC user password is changed with ASMI on the service
processor. This problem can occur if the HMC password is changed on the
service processor but not also on the HMC, and a reset of the service processor
happens. With the fix, the HMC will get the needed "failed authentication"
error so that the user knows to update the old password on the HMC.
* DEFERRED: A problem was fixed for a processor core failure with SRCs
B150BA3C and BC8A090F logged that deconfigures the entire processor for the
current IPL. A re-IPL of the system will recover the lost processor with only
the bad core guarded.
* A problem was fixed for certain SR-IOV adapters that can have an adapter
reset after a mailbox command timeout error.
This fix updates the adapter firmware to 11.2.211.39 for the following
Feature Codes and CCINs: #EN15/EN16 with CCIN 2CE3, #EN17/EN18 with CCIN 2CE4,
#EN0H/EN0J with CCIN 2B93, #EN0M/EN0N with CCIN 2CC0, and #EN0K/EN0L with CCIN
2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* A problem was fixed for an SR-IOV adapter failure with B400FFxx errors
logged when moving the adapter to shared mode. This is an infrequent race
condition where the adapter is not yet ready for commands and it can also occur
during EEH error recovery for the adapter. This affects the SR-IOV adapters
with the following feature codes and CCINs: #EC2R/EC2S with CCIN 58FA;
#EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and #EC66/EC67 with CCIN
2CF3.
* A problem was fixed for an IPL failure with the following possible SRCs
logged: 11007611, 110076x1, 1100D00C, and 110015xx. The service processor may
reset/reload for this intermittent error and end up in the termination state.
* A problem was fixed for delayed interrupts on a Power9 system following a
Live Partition Mobility operation from a Power7 or Power8 system. The delayed
interrupts could cause device time-outs, program dispatching delays, or other
device problems on the target Power9 system.
* A problem was fixed for processor cores not being able to be used by
dedicated processor partitions if they were DLPAR removed from a dedicated
processor partition. This error can occur if there was a firmware assisted
dump or a Live Partition Mobility (LPM) operation after the DLPAR of the
processor. A re-IPL of the system will recover the processor cores.
* A problem was fixed for a B7006A96 fanout module FPGA corruption error that
can occur in unsupported PCIe3 expansion drawer (#EMX0) configurations that mix
an enhanced PCIe3 fanout module (#EMXH) in the same drawer with legacy PCIe3
fanout modules (#EMXF, #EMXG, #ELMF, or #ELMG). This causes the FPGA on the
enhanced #EMXH to be updated with the legacy firmware and it becomes a
non-working and unusable fanout module. With the fix, the unsupported #EMX0
configurations are detected and handled gracefully without harm to the FPGA on
the enhanced fanout modules.
* A problem was fixed for lost interrupts that could cause device time-outs
or delays in dispatching a program process. This can occur during memory
operations that require a memory relocation for any partition such as mirrored
memory defragmentation done by the HMC optmem command, or memory guarding that
happens as part of memory error recovery during normal operations of the system.
* A problem was fixed for extraneous informational logging of SRC B7006A10
("Insufficient SR-IOV resources available") with a 1306 PRC. This SRC is
logged whenever an SR-IOV adapter is moved from dedicated mode to shared mode.
This SRC with the 1306 PRC should be ignored as no action is needed and there
is no issue with SR-IOV resources.
* A problem was fixed for a hypervisor error during system shutdown where a
B7000602 SRC is logged and the system may also briefly go "Incomplete" on the
HMC but the shutdown is successful. The system will power back on with no
problems so the SRC can be ignored if it occurred during a shutdown.
* A problem was fixed for possible dispatching delays for partitions running
in POWER8, POWER9_base or POWER9 processor compatibility mode.
* A problem was fixed for extraneous B400FF01 and B400FF02 SRCs logged when
moving cables on SR-IOV adapters. This is an infrequent error that can occur
if the HMC performance monitor is running at the same time the cables are
moved. These SRCs can be ignored when accompanied by cable movement.
VM930_093_035 / FW930.11
12/11/19 Impact: Availability Severity: SPE
System firmware changes that affect all systems
* DEFERRED: PARTITION_DEFERRED: A problem was fixed for vHMC having no
useable local graphics console when installed on FW930.00 and later partitions.
* A problem was fixed for an IPMI core dump and SRC B181720D logged, causing
the service processor to reset due to a low memory condition. The memory loss
is triggered by frequently using the ipmitool to read the network
configuration. The service processor recovers from this error but if three of
these errors occur within a 15 minute time span, the service processor will go
to a failed hung state with SRC B1817212 logged. Should a service processor
hang occur, OS workloads will continue to run but it will not be possible for
the HMC to interact with the partitions. This service processor hung state can
be recovered by doing a re-IPL of the system with a scheduled outage.
* A problem was fixed for the Advanced System Management Interface (ASMI)
menu for "PCIe Hardware Topology/Reset link" showing the wrong value. This
value is always wrong without the fix.
* A problem was fixed for a PLL unlock error with SRC B124E504 causing a
secondary error of PRD Internal Firmware Software Fault with SRC B181E580 and
incorrect FRU call outs.
* A problem was fixed for an initialization failure of certain SR-IOV adapter
ports during its boot, causing a B400FF02 SRC to be logged. This is a rare
problem and it recovers automatically by the reboot of the adapter on the
error. This problem affects the SR-IOV adapters with the following feature
codes and CCINs: #EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with CCIN 58FB;
#EC3L/EC3M with CCIN 2CEC; and #EC66/EC67 with CCIN 2CF3.
* A problem was fixed for the SR-IOV Virtual Functions (VFs) when the
multi-cast promiscuous flag has been turned on for the VF. Without the fix,
the VF device driver sometimes may erroneously fault when it senses that the
multi-cast promiscuous mode had not been achieved although it had been
requested.
* A problem was fixed for SR-IOV adapters to provide a consistent
Informational message level for cable plugging issues. For transceivers not
plugged on certain SR-IOV adapters, an unrecoverable error (UE) SRC B400FF03
was changed to an Informational message logged. This affects the SR-IOV
adapters with the following feature codes and CCINs: #EC2R/EC2S with CCIN
58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and #EC66/EC67
with CCIN 2CF3.
For copper cables unplugged on certain SR-IOV adapters, a missing message was
replaced with an Informational message logged. This affects the SR-IOV
adapters with the following feature codes and CCINs: #EN17/EN18 with CCIN 2CE4,
and #EN0K/EN0L with CCIN 2CC1.
* A problem was fixed for incorrect DIMM callouts for DIMM over-temperature
errors. The error log for a DIMM over-temperature condition will have incorrect
FRU callouts, either calling out the wrong DIMM or the wrong DIMM controller
memory buffer.
* A problem was fixed for an Operations Panel hang after using it to set LAN
Console as the console type for several iterations. After several iterations,
the operations panel may hang with "Function 41" displayed on the operations
panel. A hot unplug and plug of the operations panel can be used to recover it
from the hang.
* A problem was fixed for shared processor pools where uncapped shared
processor partitions placed in a pool may not be able to consume all available
processor cycles. The problem may occur when the sum of the allocated
processing units for the pool member partitions equals the maximum processing
units of the pool.
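The triggering condition above can be sketched as a simple check (a hypothetical helper for illustration; the entitlement values would come from the HMC, not from code like this):

```python
def pool_saturated(member_entitlements: list[float],
                   pool_max_units: float) -> bool:
    # Per the fix description, the uncapped-consumption problem may occur
    # when the members' combined allocated processing units exactly equal
    # the pool's maximum processing units.
    return abs(sum(member_entitlements) - pool_max_units) < 1e-9

# Example: three partitions entitled 0.5, 1.0, and 0.5 units in a pool
# with a 2.0-unit maximum meet the described condition.
print(pool_saturated([0.5, 1.0, 0.5], 2.0))  # True
```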
* A problem was fixed for Novalink failing to activate partitions that have
names with character lengths near the maximum allowed character length. This
problem can be circumvented by changing the partition name to have 32
characters or less.
* A problem was fixed where a Linux or AIX partition type was incorrectly
reported as unknown. Symptoms include: IBM Cloud Management Console (CMC) not
being able to determine the RPA partition type (Linux/AIX) for partitions that
are not active; and HMC attempts to dynamically add CPU to Linux partitions may
fail with an HSCL1528 error message stating that there are not enough Integrated
Facility for Linux (IFL) cores for the operation.
* A problem was fixed for a hypervisor hang that can occur on the target side
when doing a Live Partition Mobility (LPM) migration from a system that does
not support encryption and compression of LPM data. If the hang occurs, the
HMC will go to an "Incomplete" state for the target system. The problem is
rare because the data from the source partition must be in a very specific
pattern to cause the failure. When the failure occurs, a B182951C will be
logged on the target (destination) system and the HMC for the source partition
will issue the following message: "HSCLA318 The migration command issued to
the destination management console failed with the following error: HSCLA228
The requested operation cannot be performed because the managed system is not
in the Standby or Operating state." To recover, the target system must be
re-IPLed.
* A problem was fixed for performance collection tools not collecting data
for event-based options. This fix pertains to perfpmr and tprof on AIX, and
Performance Explorer (PEX) on IBM i.
* A problem was fixed for a Live Partition Mobility (LPM) migration of a large
memory partition to a target system that causes the target system to crash and
for the HMC to go to the "Incomplete" state. For servers with the default LMB
size (256MB), if the partition is >=16TB and if the desired memory is different
than the maximum memory, LPM may fail on the target system. Servers with LMB
sizes less than the default could hit this problem with smaller memory
partition sizes. A circumvention to the problem is to set the desired and
maximum memory to the same value for the large memory partition that is to be
migrated.
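The exposure described above can be expressed as a pre-migration sanity check. This is a hedged sketch: the 65536-LMB threshold is inferred from the ~16 TB figure quoted for the default 256 MB LMB size, and is not a documented constant.

```python
# Assumed threshold, inferred from 16 TB / 256 MB = 65536 LMBs.
LMB_THRESHOLD = 65536

def lpm_at_risk(partition_mb: int, desired_mb: int, maximum_mb: int,
                lmb_mb: int = 256) -> bool:
    # A partition is exposed when it is at or above the LMB threshold and
    # its desired memory differs from its maximum memory.
    large = partition_mb // lmb_mb >= LMB_THRESHOLD
    return large and desired_mb != maximum_mb

SIXTEEN_TB_MB = 16 * 1024 * 1024
# A 16 TB partition with desired != maximum is at risk; applying the
# documented circumvention (desired == maximum) clears the exposure.
print(lpm_at_risk(SIXTEEN_TB_MB, SIXTEEN_TB_MB // 2, SIXTEEN_TB_MB))  # True
print(lpm_at_risk(SIXTEEN_TB_MB, SIXTEEN_TB_MB, SIXTEEN_TB_MB))       # False
```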
* A problem was fixed for certain SR-IOV adapters with the following issues:
1) If the SR-IOV logical port's VLAN ID (PVID) is modified while the logical
port is configured, the adapter will use an incorrect PVID for the Virtual
Function (VF). This problem is rare because most users do not change the PVID
once the logical port is configured, so they will not have the problem.
2) Adapters with an SRC of B400FF02 logged.
This fix updates the adapter firmware to 11.2.211.38 for the following
Feature Codes and CCINs: #EN15/EN16 with CCIN 2CE3, #EN17/EN18 with CCIN 2CE4,
#EN0H/EN0J with CCIN 2B93, #EN0M/EN0N with CCIN 2CC0, and #EN0K/EN0L with CCIN
2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* A problem was fixed for certain SR-IOV adapters where after some error
conditions the adapter may hang with no messages or error recovery. This is a
rare problem for certain severe adapter errors. This problem affects the SR-IOV
adapters with the following feature codes: #EC66/EC67 with CCIN 2CF3. This
problem can be recovered by removing the adapter from SR-IOV mode and putting
it back in SR-IOV mode, or the system can be re-IPLed.
* A problem was fixed for an initialization failure of certain SR-IOV
adapters when changed into SR-IOV mode. This is an infrequent problem that
most likely can occur following a concurrent firmware update when the adapter
also needs to be updated. This problem affects the SR-IOV adapter with the
following feature codes and CCINs: #EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with
CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and #EC66/EC67 with CCIN 2CF3. This
problem can be recovered by removing the adapter from SR-IOV mode and putting
it back in SR-IOV mode, or the system can be re-IPLed.
* A problem was fixed for a rare IPL failure with SRCs BC8A090F and BC702214
logged caused by an overflow of VPD repair data for the processor cores. A
re-IPL of the system should recover from this problem.
* A problem was fixed for a false memory error that can be logged during the
IPL with SRC BC70E540 with the description "mcb(n0p0c1) (MCBISTFIR[12])
WAT_DEBUG_ATTN" but with no hardware call outs. This error log can be ignored.
* A problem was fixed for an IPL failure after installing DIMMs of different
sizes, causing memory access errors. Without the fix, the memory configuration
should be restored to only use DIMMs of the same size.
* A problem was fixed for a memory DIMM plugging rule violation that causes
the IPL to terminate with an RC_GET_MEM_VPD_UNSUPPORTED_CONFIG error log that
calls out the memory port but has no DIMM call outs, and no DIMM
deconfigurations are done. With the fix, the DIMMs that violate the plugging
rules will be deconfigured and the IPL will complete. Without the fix, the
memory configuration must be restored to the prior working configuration to
allow the IPL to be successful.
* A problem was fixed for a B7006A22 Recoverable Error for the enhanced PCIe3
expansion drawer (#EMX0) I/O drawer with PCIe Six Slot Fan Out modules
(#EMXH) installed. This can occur up to two hours after an IPL from power off
and can be a frequent occurrence on an IPL for systems that have the #EMXH
fanout module. The error is automatically recovered at the hypervisor level.
If an LPAR fails to start after this error, a restart of the LPAR is needed.
* A problem was fixed for degraded memory bandwidth on systems with memory
that had been dynamically repaired with symbols to mark the bad bits.
* A problem was fixed for an intermittent IPMI core dump on the service
processor. This occurs only rarely when multiple IPMI sessions are starting
and cleaning up at the same time. A new IPMI session can fail initialization
when one of its session objects is inadvertently cleaned up. The circumvention
is to retry the IPMI command that failed.
* A problem was fixed for an intermittent IPL failure with SRC B181E540
logged with fault signature "ex(n2p1c0) (L2FIR[13]) NCU Powerbus data
timeout". No FRU is called out. The error may be ignored since the automatic
re-IPL is successful. The error occurs very infrequently. This is the second
iteration of this fix: in the prior fix, expedient routing of the Powerbus
interrupts did not occur in all cases, so the timeout problem still occurred.
System firmware changes that affect certain systems
* On systems with PCIe3 expansion drawers (feature code #EMX0), a problem was
fixed where a concurrent exchange of a PCIe expansion drawer cable card,
although successful, leaves the fault LED turned on.
* On systems with 16TB or more of memory, a problem was fixed for certain
SR-IOV adapters not being able to start a Virtual Function (VF) if "I/O Adapter
Enlarged Capacity" is enabled and VF option 0 has been selected for the number
of supported VFs. This problem affects the SR-IOV adapters with the
following feature codes and CCINs: #EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with
CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and #EC66/EC67 with CCIN 2CF3. This
problem can be circumvented by changing away from VF option 0; VF option 1,
the default, works.
* On systems with 16GB huge-pages, a problem was fixed for certain SR-IOV
adapters with all or nearly all memory assigned to them preventing a system
IPL. This affects the SR-IOV adapters with the following feature codes and
CCINs: #EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with
CCIN 2CEC; and #EC66/EC67 with CCIN 2CF3. The problem can be circumvented by
powering off the system and turning off all the huge-page allocations.
VM930_068_035 / FW930.03
08/22/19 Impact: Data Severity: HIPER
System firmware changes that affect all systems
* HIPER/Pervasive: A change was made to fix an intermittent processor
anomaly that may result in issues such as operating system or hypervisor
termination, application segmentation fault, hang, or undetected data
corruption. The only issues observed to date have been operating system or
hypervisor terminations.
* A problem was fixed for a very intermittent partition error when using Live
Partition Mobility (LPM) or concurrent firmware update. For a mobility
operation, the issue can result in a partition crash if the mobility target
system is FW930.00, FW930.01 or FW930.02. For a code update operation, the
partition may hang. The recovery is to reboot the partition after the crash or
hang.
VM930_048_035 / FW930.02
06/28/19 Impact: Availability Severity: SPE
System firmware changes that affect all systems
* A problem was fixed for a bad link for the PCIe3 expansion drawer (#EMX0)
I/O drawer with the clock enhancement causing a system failure with B700F103.
This error could occur during an IPL or a concurrent add of the link hardware.
* A problem was fixed for On-Chip Controller (OCC) power capping operation
time-outs with SRC B1112AD3 that caused the system to enter safe mode,
resulting in reduced performance. The problem only occurred when the system
was running with high power consumption, requiring the need for OCC power
capping.
* A problem was fixed for the "PCIe Topology" option to get cable
information in the HMC or ASMI that was returning the wrong cable part numbers
if the PCIe3 expansion drawer (#EMX0) I/O drawer clock enhancement was
configured. If cables with the incorrect part numbers are used for an enhanced
PCIe3 expansion drawer configuration, the hypervisor will log a B7006A20 with
PRC 4152 indicating an invalid configuration -
https://www.ibm.com/support/knowledgecenter/9080-M9S/p9eai/B7006A20.htm.
* A problem was fixed for a drift in the system time (time lags and the clock
runs slower than the true value of time) that occurs when the system is powered
off to the service processor standby state. To recover from this problem, the
system time must be manually corrected using the Advanced System Management
Interface (ASMI) before powering on the system. The time lag increases in
proportion to the duration of time that the system is powered off.
VM930_035_035 / FW930.00
05/17/19 Impact: New Severity: New
All features and fixes from the FW920.30 service pack (and below) are
included in this release.
New Features and Functions
* Support was added to allow the FPGA soft error checking on the PCIe I/O
expansion drawer (#EMX0) to be disabled with the help of IBM support using the
hypervisor "xmsvc" macro. This new setting will persist until it is changed
by the user or IBM support. Disabling FPGA soft error checking eliminates the
FPGA soft error recovery, which causes a recoverable PCIe adapter outage.
Some of the soft errors will be hidden by this change but others may have
unpredictable results, so this should be done only under the guidance of IBM
support.
* Support for the PCIe3 expansion drawer (#EMX0) I/O drawer clock enhancement
so that a reset of the drawer does not affect the reference clock to the
adapters, allowing the PCIe lanes for the PCIe adapters to keep running through
an I/O drawer FPGA reset. To use this support, new cable cards, fanout modules,
and optical cables are needed after this support is installed: PCIe Six Slot
Fan Out module (#EMXH), which may only be connected to a converter adapter
cable card; PCIe X16 to CXP Optical or CU converter adapter for the expansion
drawer (#EJ20); and new AOC cables with feature/part numbers #ECCR/78P6567,
#ECCX/78P6568, #ECCY/78P6569, and #ECCZ/78P6570. These parts cannot be
installed concurrently, so a scheduled outage is needed to complete the
migration.
* Support added for RDMA Over Converged Ethernet (RoCE) for SR-IOV adapters.
* Support added for SMS menu to enhance the I/O information option to have
"vscsi" and "network" options. The information shown for "vscsi" devices is
similar to that provided for SAS and Fibre Channel devices. The "network"
option provides connectivity information for the adapter ports and shows which
can be used for network boots and installs.
* Support added to monitor the thermal sensors on the NVMe SSD drives
(feature codes #EC5J, #EC5K, #EC5L) and use that information to adjust the
speed of the system fans for improved cooling of the SSD drives.
* Support added to allow integrated USB ports to be disabled. This is
available via an Advanced System Management Interface (ASMI) menu option:
"System Configuration -> Security -> USB Policy". The USB disable policy, if
selected, does not apply to pluggable USB adapters plugged into PCIe slots such
as the 4-Port USB adapter (#EC45/#EC46), which are always enabled.
System firmware changes that affect all systems
* A problem was fixed for a system IPLing with an invalid time set on the
service processor that causes partitions to be reset to the Epoch date of
01/01/1970. With the fix, on the IPL, the hypervisor logs a B700120x when the
service processor real time clock is found to be invalid and halts the IPL to
allow the time and date to be corrected by the user. The Advanced System
Management Interface (ASMI) can be used to correct the time and date on the
service processor. On the next IPL, if the time and date have not been
corrected, the hypervisor will log SRC B7001224 (indicating the user was
warned on the last IPL) and allow the partitions to start, but the time and
date will be set to the Epoch value.
* A problem was fixed for a possible boot failure from an ISO/IEC 13346
formatted image, also known as Universal Disk Format (UDF).
UDF is a profile of the specification known as ISO/IEC 13346 and is an open
vendor-neutral file system for computer data storage for a broad range of media
such as DVDs and newer optical disc formats. The failure is infrequent and
depends on the image. In rare cases, the boot code erroneously fails to find a
file in the current directory. If the boot fails on a specific image, the boot
of that image will always fail without the fix.
* A problem was fixed for broadcast bootp installs or boots that fail with a
UDP checksum error.
* A problem was fixed for failing to boot from an AIX mksysb backup on a USB
RDX drive with SRCs logged of BA210012, AA06000D, and BA090010. The boot error
does not occur if a serial console is used to navigate the SMS menus.
4.0 How to Determine The Currently Installed Firmware Level
You can view the server's current firmware level on the Advanced System
Management Interface (ASMI) Welcome pane. It appears in the top right corner.
Example: VM920_123.
----------------------------------------------------------------------------------
5.0 Downloading the Firmware Package
Follow the instructions on Fix Central. You must read and agree to the
license agreement to obtain the firmware packages.
Note: If your HMC is not internet-connected, you will need to download the new
firmware level to a USB flash memory device or an FTP server.
----------------------------------------------------------------------------------
6.0 Installing the Firmware
The method used to install new firmware depends on the release level of
firmware currently installed on your server. The release level can be
determined by the prefix of the new firmware's filename. Example: VMxxx_yyy_zzz,
where xxx = release level.
* If the release level will stay the same (Example: Level VM910_040_040 is
currently installed and you are attempting to install level VM910_041_040) this
is considered an update.
* If the release level will change (Example: Level VM900_040_040 is currently
installed and you are attempting to install level VM910_050_050) this is
considered an upgrade.
Instructions for installing firmware updates and upgrades can be found at
https://www.ibm.com/support/knowledgecenter/9040-MR9/p9eh6/p9eh6_updates_sys.htm
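The update-versus-upgrade rule above reduces to a comparison of the VMxxx release prefixes. The helper below is a hypothetical illustration of the naming rule described in this document, not an IBM tool.

```python
def classify_install(installed, new):
    """Classify a firmware install by comparing VMxxx release prefixes.

    Same release prefix (VMxxx) -> 'update'; different prefix -> 'upgrade'.
    """
    installed_release = installed.split("_")[0]  # e.g. 'VM910' from 'VM910_040_040'
    new_release = new.split("_")[0]
    return "update" if installed_release == new_release else "upgrade"

# The two examples from the bullets above:
print(classify_install("VM910_040_040", "VM910_041_040"))  # update
print(classify_install("VM900_040_040", "VM910_050_050"))  # upgrade
```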
IBM i Systems:
For information concerning IBM i Systems, go to the following URL to access
Fix Central:
http://www-933.ibm.com/support/fixcentral/
Choose "Select product", under Product Group specify "System i", under
Product specify "IBM i", then Continue and specify the desired firmware PTF
accordingly.
7.0 Firmware History
The complete Firmware Fix History (including HIPER descriptions) for this
Release level can be reviewed at the following URL:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VM-Firmware-Hist.html