VM940
For Impact, Severity, and other firmware definitions, please
refer to the 'Glossary of firmware terms' URL:
http://www14.software.ibm.com/webapp/set2/sas/f/power5cm/home.html#termdefs
The complete Firmware Fix History for this Release Level can be
reviewed at the following URL:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VM-Firmware-Hist.html
VM940_061_027 / FW940.20
09/24/20
Impact: Data
Severity: HIPER
New features and functions
- DEFERRED: Host
firmware support for anti-rollback protection. This feature
implements firmware anti-rollback protection as described in NIST SP
800-147B "BIOS Protection Guidelines for Servers". Firmware is
signed with a "secure version". Support added for a new menu in
ASMI called "Host firmware security policy" to update this secure
version level at the processor hardware. Using this menu, the
system administrator can enable the "Host firmware secure version
lock-in" policy, which will cause the host firmware to update the
"minimum secure version" to match the currently running firmware. Use
the "Firmware Update Policy" menu in ASMI to show the current "minimum
secure version" in the processor hardware along with the "Minimum code
level supported" information. The secure boot verification process will
block installing any firmware secure version that is less than the
"minimum secure version" maintained in the processor hardware.
Prior to enabling the "lock-in" policy, it is recommended to accept the
current firmware level.
WARNING: Once lock-in is enabled and the system is booted, the "minimum
secure version" is updated and there is no way to roll it back to allow
installing firmware releases with a lesser secure version.
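The rollback-blocking rule described above can be pictured with a minimal sketch (the function names are hypothetical illustrations, not the actual firmware implementation):

```python
def can_install(candidate_secure_version: int, minimum_secure_version: int) -> bool:
    # Secure boot verification blocks any firmware image whose secure
    # version is below the minimum recorded in the processor hardware.
    return candidate_secure_version >= minimum_secure_version

def boot_with_lock_in(running_secure_version: int, minimum_secure_version: int) -> int:
    # With the "lock-in" policy enabled, booting raises the minimum
    # secure version to match the running firmware; it never decreases.
    return max(minimum_secure_version, running_secure_version)
```

Once the minimum has been raised by lock-in, `can_install` rejects any older secure version, which is why accepting the current firmware level first is recommended.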
System firmware changes that affect all systems
- HIPER/Pervasive:
A problem was fixed for certain SR-IOV adapters for a condition that
may result from frequent resets of adapter Virtual Functions (VFs) or
from transmission stalls, and that could lead to potential undetected
data corruption.
The following additional fixes are also included:
1) The VNIC backing device goes to a powered off state during a VNIC
failover or Live Partition Mobility (LPM) migration. This failure
is intermittent and very infrequent.
2) Adapter time-outs with SRC B400FF01 or B400FF02 logged.
3) Adapter time-outs related to adapter commands becoming blocked with
SRC B400FF01 or B400FF02 logged.
4) VF function resets occasionally not completing quickly enough
resulting in SRC B400FF02 logged.
This fix updates the adapter firmware to 11.4.415.33 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with
CCIN 2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and
#EN0K/#EN0L with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- A problem was fixed
for the REST/Redfish interface to change the success return code for
object creation from "200" to "201". A "200" status code indicates a
generic successful request, whereas a "201" status code indicates that
the request was successful and, as a result, a resource has been
created. The Redfish Ruby Client, "redfish_client", may fail a
transaction if a "200" status code is returned when "201" is expected.
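A client can tolerate both behaviors across firmware levels by accepting either status for creation; a hedged sketch (this helper is hypothetical and is not part of redfish_client):

```python
def creation_succeeded(status_code: int) -> bool:
    # 201 Created is the correct response with this fix applied;
    # also tolerating 200 keeps the client working against firmware
    # levels that predate the fix.
    return status_code in (200, 201)
```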
- A problem was fixed to allow quicker recovery of PCIe links
for the #EMX0 PCIe expansion drawer for a run-time fault with B7006A22
logged. The time for recovery attempts can exceed six minutes on
rare occasions, which may cause I/O adapter failures and failed
nodes. With the fix, the PCIe links will recover or fail faster
(on the order of seconds) so that redundancy in a cluster configuration
can be used with failure detection and failover processing by other
hosts, if available, in the case where the PCIe links fail to recover.
- A problem was fixed for a concurrent maintenance "Repair
and Verify" (R&V) operation for a #EMX0 fanout module that fails
with an "Unable to isolate the resource" error message. This should
occur only infrequently for cases where a physical hardware failure has
occurred which prevents access to slot power controls. This problem can
be worked around by bringing up the "PCIe Hardware Topology" screen
from either ASMI or the HMC after the hardware failure but before the
concurrent repair is attempted. This will avoid the problem with the
PCIe slot isolation. These steps can also be used to recover from the
error to allow the R&V repair to be attempted again.
- A problem was fixed for a rare system hang that can occur
when a page of memory is being migrated. Page migration (memory
relocation) can occur for a variety of reasons, including predictive
memory failure, DLPAR of memory, and normal operations related to
managing the page pool resources.
- A problem was fixed for utilization statistics for commands
such as HMC lslparutil and third-party lpar2rrd that do not accurately
represent CPU utilization. The values are incorrect every time
for a partition that is migrated with Live Partition Mobility (LPM).
Power Enterprise Pools 2.0 is not affected by this problem. If
this problem has occurred, here are three possible recovery options:
1) Re-IPL the target system of the migration.
2) Or delete and recreate the partition on the target system.
3) Or perform an inactive migration of the partition. The cycle
values get zeroed in this case.
- A problem was fixed for running PCM on a system with SR-IOV
adapters in shared mode that results in an "Incomplete" system state
with certain hypervisor tasks deadlocked. This problem is rare and is
triggered when using SR-IOV adapters in shared mode and gathering
performance statistics with PCM (Performance Collection and Monitoring)
and also having a low-level error on an adapter. The only way to
recover from this condition is to re-IPL the system.
- A problem was fixed for an enhanced PCIe expansion drawer
FPGA reset causing EEH events from the fanout module or cable cards
that disrupt the PCIe lanes for the PCIe adapters. This problem affects
systems with the PCIe expansion drawer enhanced fanout module (#EMXH)
and the enhanced cable card (#EJ20).
The error is associated with the following SRCs being logged:
B7006A8D with PRC 37414123 (XmPrc::XmCCErrMgrBearPawPrime |
XmPrc::LocalFpgaHwReset)
B7006A8E with PRC 3741412A (XmPrc::XmCCErrMgrBearPawPrime |
XmPrc::RemoteFpgaHwReset)
If the EEH errors occur, the OS device drivers automatically recover
but with a reset of affected PCIe adapters that would cause a brief
interruption in the I/O communications.
- A problem was fixed for the FRU callout lists for SRCs
B7006A2A and B7006A2B possibly not including the FRU containing the
PCIe switch as the second FRU in the callout list. The card/drive
in the slot is the first callout and the FRU containing the PCIe switch
should be the second FRU in the callout list. This problem occurs
when the PCIe slot is on a different planar than the PCIe switch
backing the slot. This impacts the NVMe backplanes (P2 with slots
C1-C4) hosting the PCIe backed SSD NVMe U.2 modules that have feature
codes #EC5J and #EC5K. As a workaround for B7006A2A and B7006A2B
errors where the callout FRU list is processed and the problem is not
resolved, consider replacing the backplane (which includes the PCIe
switch) if this was omitted in the FRU callout list.
- A problem was fixed for a PCIe3 expansion drawer cable that
has hidden error logs for a single lane failure. This happens
whenever a single lane error occurs. Subsequent lane failures are not
hidden and have visible error logs. Without the fix, the hidden or
informational logs would need to be examined to gather more information
for the failing hardware.
- A problem was fixed for an infrequent issue after a Live
Partition Mobility (LPM) operation from a POWER9 system to a POWER8 or
POWER7 system. The issue may cause unexpected OS behavior, which may
include loss of interrupts, device time-outs, or delays in
dispatching. Rebooting the affected target partition will resolve
the problem.
- A problem was fixed for a partition crash or hang following
a partition activation or a DLPAR add of a virtual processor. For
partition activation, this issue is only possible for a system with a
single partition owning all resources. For DLPAR add, the issue is
extremely rare.
- A problem was fixed for a DLPAR remove of memory from a
partition that fails if the partition contains 65535 or more LMBs. With
16MB LMBs, this error threshold is 1 TB of memory. With 256 MB LMBs, it
is 16 TB of memory. A reboot of the partition after the DLPAR will
remove the memory from the partition.
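The memory thresholds quoted above follow directly from the 65535-LMB limit; a quick check:

```python
LMB_LIMIT = 65535  # DLPAR memory remove fails at or above this many LMBs

def threshold_tb(lmb_size_mb: int) -> float:
    # Memory size (in TB) at which a partition reaches the LMB limit.
    return LMB_LIMIT * lmb_size_mb / (1024 * 1024)

# 16 MB LMBs reach the limit near 1 TB; 256 MB LMBs near 16 TB.
```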
- A problem was fixed for an IPL failure with SRC BA180020
logged for an initialization failure on a PCIe adapter in a PCIe3
expansion drawer. The PCIe adapters that are intermittently failing on
the PCIe probe are the PCIe2 4-port Fibre Channel Adapter with feature
code #5729 and the PCIe2 4-port 1 Gb Ethernet Adapter with feature code
#5899. The failure can only occur on an IPL or re-IPL and it is
very infrequent. The system can be recovered with a re-IPL.
- A problem was fixed for a partition configured with a large
number (approximately 64) of Virtual Persistent Memory (PMEM) LUNs
hanging during the partition activation with a CA00E134 checkpoint SRC
posted. Partitions configured with approximately 64 PMEM LUNs
will likely hang and the greater the number of LUNs, the greater the
possibility of the hang. The circumvention for this problem is to
reduce the number of PMEM LUNs to 64 or fewer in order to boot
successfully. The PMEM LUNs are also known as persistent memory volumes
and can be managed using the HMC. For more information on this topic,
refer to https://www.ibm.com/support/knowledgecenter/POWER9/p9efd/p9efd_lpar_pmem_settings.htm.
- A problem was fixed for non-optimal On-Chip Controller
(OCC) processor frequency adjustments when system power limits or user
power caps are exceeded. When a workload causes power limits or caps to
be exceeded, there can be large frequency swings for the processors and
a processor chip can get stuck at minimum frequency. With the fix, the
OCC now waits for new power readings when changing the processor
frequency and uses a master power capping frequency to keep all
processors at the same frequency. As a workaround for this problem, do
not set a power cap or run a workload that would exceed the system
power limit.
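The corrected behavior can be pictured as a clamp to a shared capping frequency (a loose sketch with hypothetical names; the actual OCC control algorithm is more involved):

```python
def next_frequency_mhz(requested_mhz: int, power_w: float, cap_w: float,
                       min_mhz: int, master_cap_mhz: int) -> int:
    # When the power cap is exceeded, all processor chips clamp to one
    # master capping frequency instead of each swinging independently,
    # which avoids a chip getting stuck at minimum frequency.
    if power_w > cap_w:
        return max(min_mhz, min(requested_mhz, master_cap_mhz))
    return requested_mhz
```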
- A problem was fixed for PCIe resources under a deconfigured
PCIe Host Bridge (PHB) being shown on the OS host as available
resources when they should be shown as deconfigured. While this fix can
be applied concurrently, a re-IPL of the system is needed to correct
the state of the PCIe resources if a PHB had already been deconfigured.
- A problem was fixed for incorrect run-time deconfiguration
of a processor core with SRC B700F10B. This problem can be circumvented
by a reconfiguration of the processor core but this should only be done
with the guidance of IBM Support to ensure the core is good.
- A problem was fixed for certain SR-IOV adapter errors where
a B400F011 is reported instead of a more descriptive B400FF02 or
B400FF04. The LPA dump still happens and can be used to isolate
the issue. The SR-IOV adapters affected have the following Feature
Codes and CCINs: #EC2R/#EC2S with CCIN 58FA; #EC2T/#EC2U with CCIN
58FB; #EC3L/#EC3M with CCIN 2CEC; and #EC66/#EC67 with CCIN 2CF3.
- A problem was fixed for mixing modes on the ports of SR-IOV
adapters that causes SRC B200A161, B200F011, B2009014 and B400F104 to
be logged on boot of the failed adapter. This error happens when one
port of the adapter is changed to option 1 with a second port set at
either option 0 or option 2. The error can be cleared by taking
the adapter out of SR-IOV shared mode. The SR-IOV adapters affected
have the following Feature Codes and CCINs: #EC2R/#EC2S with CCIN 58FA;
#EC2T/#EC2U with CCIN 58FB; #EC3L/#EC3M with CCIN 2CEC; and #EC66/#EC67
with CCIN 2CF3.
- A problem was fixed for certain SR-IOV adapters with the
following issues:
1) The VNIC backing device goes to a powered off state during a VNIC
failover or Live Partition Mobility (LPM) migration. This failure is
intermittent and very infrequent.
2) Adapter time-outs with SRC B400FF01 or B400FF02 logged.
3) Adapter time-outs related to adapter commands becoming blocked with
SRC B400FF01 or B400FF02 logged.
4) VF function resets occasionally not completing quickly enough
resulting in SRC B400FF02 logged.
This fix updates the adapter firmware to 11.4.415.33 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with
CCIN 2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and
#EN0K/#EN0L with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- A problem was fixed for NovaLink-created virtual Ethernet
and vNIC adapters having incorrect SR-IOV Hybrid Network Virtualization
(HNV) values. AIX and other OS hosts may be unable to use the
adapters. This happens for all virtual Ethernet and vNIC adapters
created by NovaLink in the FW940 releases up to the FW940.10 service
pack. The fix will correct the settings for new NovaLink-created
virtual adapters, but any pre-existing virtual adapters created by
NovaLink in FW940 must be deleted and recreated.
- A problem was fixed for partitions configured to run
as AIX, VIOS, or Linux partitions that also own specific Fibre Channel
(FC) I/O adapters (see below); these partitions are subject to a crash
during boot if the partition does not already have a boot list. During
the initial boot of a new partition (containing 577F, 578E, 578F, or
579B adapters), the boot might fail with one of the following reference
codes: BA210001, BA218001, BA210003, or BA218003. This most often
occurs on deployments of new partitions that are booting for the first
time for either a network install or booting to the Open Firmware
prompt or SMS menus for the first time. The issue requires that the
partition owns one or more of the following FC adapters and that these
adapters are running at microcode firmware levels older than version
11.4.415.5:
- Feature codes #EN1C/#EN1D and #EL5X/#EL5W with CCIN 578E
- Feature codes #EN1A/#EN1B and #EL5U/#EL5V with CCIN 578F
- Feature codes #EN0A/#EN0B and #EL5B/#EL43 with CCIN 577F
The frequency of the problem is somewhat rare because it requires
the following:
- Partition does not already have a default boot list
- Partition configured with one of the FC adapters listed above
- The FC adapters must be running a version of microcode with
unsigned/unsecure adapter microcode
The following workaround was created for systems having this issue: https://www.ibm.com/support/pages/node/1367103.
With the fix, the FC adapters are given a temporary substitute
for the FCode on the adapter, but not the entire microcode image; the
adapter microcode itself is not updated. This is done so the system
can boot from the adapter until the adapter can be updated by the
customer with the latest available microcode from IBM Fix Central. In
the meantime, the FCode substitution is made from the 12.4.257.15 level
of the microcode.
- A problem was fixed for mixing memory DIMMs with different
timings (different vendors) under the same memory controller that fail
with an SRC BC20E504 error and DIMMs deconfigured. This is an
"MCBIST_BRODCAST_OUT_OF_SYNC" error. The loss of memory DIMMs can
result in an IPL failure. This problem can happen if the memory
DIMMs have a certain level of timing differences. If the timings are
not compatible, the failure will occur on the IPL during the memory
training. To circumvent this problem, each memory controller should
have only memory DIMMs from the same vendor plugged.
- A problem was fixed for the SR-IOV logical port of an
I/O adapter logging a B400FF02 error because of a time-out waiting on a
response from the firmware. This rare error requires a very heavily
loaded system. For this error, word 8 of the error log is 80090027. No
user intervention is needed for this error as the logical port recovers
and continues with normal operations.
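Triage for this log can be sketched as follows (a hypothetical helper, assuming "word 8" is the eighth 32-bit hex word of the error log's extended data):

```python
def b400ff02_self_recovered(hex_words: list[int]) -> bool:
    # Word 8 equal to 0x80090027 marks the time-out case above, where
    # the logical port recovers on its own and no user action is needed.
    return len(hex_words) >= 8 and hex_words[7] == 0x80090027
```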
- A problem was fixed for a security vulnerability for the
Self Boot Engine (SBE). The SBE can be compromised from the service
processor to allow injection of malicious code. An attacker that
gains root access to the service processor could compromise the
integrity of the host firmware and bypass the host firmware signature
verification process. This compromised state cannot be detected
through TPM attestation. This is Common Vulnerabilities and Exposures
issue number CVE-2021-20487.
System firmware changes that affect certain systems
- On systems with AIX and Linux
partitions, a problem was fixed for AIX and Linux partitions that crash
or hang when reporting any of the following Partition Firmware RTAS
ASSERT rare conditions:
1) SRC BA33xxxx errors - Memory allocation and management errors.
2) SRC BA29xxxx errors - Partition Firmware internal stack errors.
3) SRC BA00E8xx errors - Partition Firmware initialization errors
during concurrent firmware update or Live Partition Mobility (LPM)
operations.
This problem should be very rare. If the problem does occur, a
partition reboot is needed to recover from the error.
VM940_050_027 / FW940.10
05/22/20
Impact: Availability
Severity: SPE
New features and functions
- Enable periodic logging of internal component operational
data for the PCIe3 expansion drawer paths. The logging of this data
does not impact the normal use of the system.
- Support added for SR-IOV Hybrid Network Virtualization
(HNV) in a production environment (no longer a Technology Preview) for
AIX and IBM i. This capability allows an AIX or IBM i
partition to take advantage of the efficiency and performance benefits
of SR-IOV logical ports and participate in mobility operations such as
active and inactive Live Partition Mobility (LPM) and Simplified Remote
Restart (SRR). HNV is enabled by selecting a new Migratable
option when an SR-IOV logical port is configured. The Migratable option
is used to create a backup virtual device. The backup virtual
device can be either a Virtual Ethernet adapter or a virtual Network
Interface Controller (vNIC) adapter. In addition to this firmware, HNV
support in a production environment requires HMC 9.1.941.0 or later;
AIX Version 7.2 with the 7200-04 Technology Level and Service Pack
7200-04-02-2015, or AIX Version 7.1 with the 7100-05 Technology Level
and Service Pack 7100-05-06-2015; IBM i 7.3 TR8 or IBM i 7.4 TR2; and
VIOS 3.1.1.20.
System firmware changes that affect all systems
- DEFERRED: A
problem was fixed for a processor core failure with SRCs B150BA3C and
BC8A090F logged that deconfigures the entire processor for the current
IPL. A re-IPL of the system will recover the lost processor with
only the bad core guarded.
- A problem was fixed for Performance Monitor Unit (PMU)
events that had the incorrect Alink address (Xlink data given instead)
that could be seen in 24x7 performance reports. The Alink event
data is a recent addition for FW940 and would not have been seen at the
earlier firmware levels.
- A problem was fixed for an SR-IOV adapter hang with
B400FF02/B400FF04 errors logged during firmware update or error
recovery. The adapter may recover after the error log and dump,
but it is possible the adapter VF will remain disabled until the
partition using it is rebooted. This affects the SR-IOV adapters
with the following feature codes and CCINs: #EC2R/EC2S with
CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN
2CEC; and #EC66/EC67 with CCIN 2CF3.
- A problem was fixed for extraneous B400FF01 and B400FF02
SRCs logged when moving cables on SR-IOV adapters. This is an
infrequent error that can occur if the HMC performance monitor is
running at the same time the cables are moved. These SRCs can be
ignored when accompanied by cable movement.
- A problem was fixed for certain SR-IOV adapters that can
have B400FF02 SRCs logged with LPA dumps during Live Partition Mobility
(LPM) migrations or vNIC failovers. The adapters can have issues
with a deadlock on error starts after many resets of the VF and errors
in managing memory pages. In most cases, the operations should
recover and complete. This fix updates the adapter firmware to
1X.25.6100 for the following Feature Codes and CCINs: #EC2R/EC2S
with CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN
2CEC; and #EC66/EC67 with CCIN 2CF3.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- A problem was fixed where SR-IOV adapter VFs occasionally
failed to provision successfully on the low-speed ports (1 Gbps) with
SRC B400FF04 logged, or SR-IOV adapter VFs occasionally failed to
provision successfully with SRC B400FF04 logged when the RoCE option is
enabled.
This affects the adapters with low speed ports (1 Gbps) with the
following Feature Codes and CCINs: #EN0H/EN0J with CCIN
2B93, #EN0M/EN0N with CCIN 2CC0, and #EN0K/EN0L with CCIN
2CC1. And it affects the adapters with the RoCE option
enabled with the following feature codes and CCINs:
#EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with CCIN 58FB;
#EC3L/EC3M with CCIN 2CEC; and #EC66/EC67 with CCIN 2CF3.
- A problem was fixed for an expired trial or elastic
Capacity on Demand (CoD) memory not warning of the use of unlicensed
memory if the memory is not returned. This lack of warning can occur if
the trial memory has been allocated as Virtual Persistent Memory
(vPMEM).
- A problem was fixed for a B7006A96 fanout module FPGA
corruption error that can occur in unsupported PCIe3 expansion
drawer (#EMX0) configurations that mix an enhanced PCIe3 fanout module
(#EMXH) in the same drawer with legacy PCIe3 fanout modules (#EMXF,
#EMXG, #ELMF, or #ELMG). This causes the FPGA on the enhanced
#EMXH to be updated with the legacy firmware and it becomes a
non-working and unusable fanout module. With the fix, the
unsupported #EMX0 configurations are detected and handled gracefully
without harm to the FPGA on the enhanced fanout modules.
- A problem was fixed for possible dispatching delays for
partitions running in POWER8, POWER9_base or POWER9 processor
compatibility mode.
- A problem was fixed for system memory not returned after
create and delete of partitions, resulting in slightly less memory
available after configuration changes in the systems. With the
fix, an IPL of the system will recover any of the memory that was
orphaned by the issue.
- A problem was fixed for failover support for the Mover
Service Partition (MSP) where a failover to the MSP partner during an
LPM could cause the migration to abort. This vulnerability
is only for a very specific window in the migration process. The
recovery is to restart the migration operation.
- A rare problem was fixed for a checkstop during an IPL that
fails to isolate and guard the problem core. An SRC is logged
with B1xxE5xx and an extended hex word 8 xxxxDD90. With the fix,
the failing hardware is guarded and a node is possibly deconfigured to
allow the subsequent IPLs of the system to be successful.
- A problem was fixed for a hypervisor error during system
shutdown where a B7000602 SRC is logged and the system may also briefly
go "Incomplete" on the HMC but the shutdown is successful. The
system will power back on with no problems so the SRC can be ignored if
it occurred during a shutdown.
- A problem was fixed for certain large I/O adapter
configurations having the PCI link information truncated on the PCIe
topology display shown with ASMI and the HMC. Because of the
truncation, individual adapters may be missing on the PCIe topology
screens.
- A problem was fixed for certain NVRAM corruptions causing a
system crash with a bad pointer reference instead of expected Terminate
Immediate (TI) with B7005960 logged.
- A problem was fixed for certain SR-IOV adapters that do not
support the "Disable Logical Port" option from the HMC but the HMC was
allowing the user to select this, causing incorrect operation.
The invalid state of the logical port causes an "Enable Logical Port"
to fail in a subsequent operation. With the fix, the HMC provides
the message that the "Disable Logical Port" is not supported for the
adapter. This affects the adapters with the following
Feature Codes and CCINs: #EN15/EN16 with CCIN 2CE3, #EN17/EN18
with CCIN 2CE4, #EN0H/EN0J with CCIN 2B93, #EN0M/EN0N with CCIN 2CC0,
and #EN0K/EN0L with CCIN 2CC1.
- A problem was fixed for the service processor ASMI "Factory
Reset" option to disable the IPMI service as part of the factory
reset. Without the fix, the IPMI operation state will be
unchanged by the factory reset.
- A problem was fixed to remove unneeded resets of a VF for
SR-IOV adapters, providing for improved performance of the startup or
recovery time of the VF. This performance difference may be
noticed during a Live Partition Mobility migration of a partition or
during vNIC (Virtual Network Interface Controller) failovers where many
resets of VFs are occurring.
- A problem was fixed for SR-IOV adapters having an SRC
B400FF04 logged when a VF is reset. This is an infrequent issue
and can occur for a Live Partition Mobility migration of a partition or
during vNIC (Virtual Network Interface Controller) failovers where many
resets of VFs are occurring. This error is recovered
automatically with no impact on the system.
- A problem was fixed for initial configuration of
SR-IOV adapter VFs with certain configuration settings for the
following Feature Codes and CCINs: #EC2R/EC2S with CCIN
58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC;
and #EC66/EC67 with CCIN 2CF3.
These VFs may then fail following an adapter restart, with other VFs
functioning normally. The error causes the VF to fail with an SRC
B400FF04 logged. With the
fix, VFs are configured correctly when created.
Because the error condition may pre-exist in an incorrectly configured
logical port, a concurrent update of this fix may trigger a logical
port failure when the VF logical port is restarted during the firmware
update. Existing VFs with the failure condition can be recovered
by dynamically removing/adding the failed port and are automatically
recovered during a system restart.
- A problem was fixed for TPM hardware failures not causing
SRCs to be logged with a call out if the system is configured in ASMI
to not require TPM for the IPL. If this error occurs, the user would
not find out about it until they needed to run with TPM on the
IPL. With the fix, the error logs and notifications will occur
regardless of how the TPM is configured.
System firmware changes that affect certain systems
- For systems with deconfigured cores
and using the default performance and power setting of "Dynamic
Performance Mode" or "Maximum Performance Mode", a rare problem was
fixed for an incorrect voltage/frequency setting for the processors
during heavy workloads with high ambient temperature. This error
could impact power usage, expected performance, or system availability
if a processor fault occurs. This problem can be avoided by using
ASMI "Power and Performance Mode Setup" to disable "All modes"
when there are cores deconfigured in the system.
VM940_041_027 / FW940.02
02/18/20
Impact: Function
Severity: HIPER
System firmware changes that affect all systems
- A problem was fixed
for an HMC "Incomplete" state for a system after
the HMC user password is changed with ASMI on the service
processor. This problem can occur if the HMC password is changed
on the service processor but not also on the HMC, and a reset of the
service processor happens. With the fix, the HMC will get the
needed "failed authentication" error so that the user knows to update
the old password on the HMC.
System firmware changes that affect certain systems
- HIPER/Pervasive:
For systems using PowerVM NovaLink to manage partitions, a problem was
fixed for the hypervisor rejecting setting the system to be NovaLink
managed. The following error message is given: "FATAL pvm_apd[]:
Hypervisor encountered an error creating the ibmvmc device. Error
number 5." This always happens in FW940.00 and FW940.01, which
prevents a system from transitioning to be NovaLink managed at these
firmware levels. If you were already successfully running as NovaLink
managed on FW930 and upgraded to FW940, you will not experience this
issue.
For more information on PowerVM NovaLink, refer to the IBM Knowledge
Center article: https://www.ibm.com/support/knowledgecenter/POWER9/p9eig/p9eig_kickoff.htm.
VM940_034_027 / FW940.01
01/09/20
Impact: Security
Severity: SPE
New features and functions
- Support was added for improved security for the service
processor password policy. For the service processor, the "admin",
"hmc", and "general" passwords must be set on first use for newly
manufactured systems and after a factory reset of the system. The
IPMI interface has been changed to be disabled by default in these
scenarios. The REST/Redfish interface will return an error saying
the user account is expired. This policy change helps ensure the
service processor is not left in a state with a well-known password.
The user can change from an expired default password to a new
password using the Advanced System Management Interface (ASMI).
- Support was added for real-time data capture for PCIe3 expansion
drawer (#EMX0) cable card connection data via the resource dump
selector on the HMC or in ASMI on the service processor. Using the
resource selector string "xmfr -dumpccdata" will non-disruptively
generate an RSCDUMP-type dump file that contains the current cable card
data, including data from the cables and the retimers.
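As a sketch, a script driving the HMC command line could compose the
dump request like this. Only the resource selector string
"xmfr -dumpccdata" comes from this fix description; the startdump
command syntax and option names are assumptions about the HMC CLI:

```python
# Sketch: compose the HMC CLI invocation that would trigger the
# non-disruptive cable card resource dump. The "startdump" syntax and
# option names are assumptions; only the resource selector string
# "xmfr -dumpccdata" comes from the release notes above.
RESOURCE_SELECTOR = "xmfr -dumpccdata"

def build_resource_dump_cmd(managed_system: str) -> list:
    """Return the argv for a resource dump request on one managed system."""
    return [
        "startdump",
        "-m", managed_system,      # target managed system name (hypothetical)
        "-t", "resource",          # dump type: resource dump (hypothetical)
        "-r", RESOURCE_SELECTOR,   # selector documented in the release notes
    ]

print(build_resource_dump_cmd("Server-9009-22A-SN1234567"))
```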
System firmware changes that affect all systems
- A problem was fixed for an intermittent IPMI core dump on the service
processor. This occurs only rarely, when multiple IPMI sessions are
starting and cleaning up at the same time. A new IPMI session can fail
initialization when one of its session objects is cleaned up. The
circumvention is to retry the IPMI command that failed.
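The documented circumvention, retrying the failed command, can be
wrapped in a small helper. This is an illustrative sketch, not service
processor code; the ipmitool invocation mentioned in the docstring is
only an example of what fn might run:

```python
import time

def retry_transient(fn, attempts=3, delay=1.0):
    """Call fn(), retrying on failure.

    Sketch of the documented circumvention: an IPMI command that fails
    because a session object was cleaned up mid-initialization usually
    succeeds on a retry. fn would typically shell out to something like
    "ipmitool ... chassis status" (example only).
    """
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:  # transient session-init failure
            last_exc = exc
            if attempt < attempts:
                time.sleep(delay)
    raise last_exc

# Demonstration with a command that fails once, then succeeds:
calls = {"n": 0}
def flaky_ipmi_command():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("session object cleaned up during init")
    return "ok"

print(retry_transient(flaky_ipmi_command, delay=0))  # prints "ok" after one retry
```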
- A problem was fixed for system hangs or incomplete states displayed
by HMC(s) with SRC B182951C logged. The hang can occur during
operations that require a memory relocation for any partition, such as
Dynamic Platform Optimization (DPO), memory mirroring defragmentation,
or memory guarding that happens as part of memory error recovery during
normal operation of the system.
- A problem was fixed for possible unexpected interrupt behavior for
partitions running in POWER9 processor compatibility mode. This issue
can occur during the boot of a partition running in POWER9 processor
compatibility mode with an OS level that supports External Interrupt
Virtualization Engine (XIVE) exploitation mode. For more information on
compatibility modes, see the following two articles in the IBM
Knowledge Center:
1) Processor compatibility mode overview: https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcm.htm
2) Processor compatibility mode definitions: https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcmdefs.htm
- A problem was fixed for an intermittent IPL failure with SRC B181E540
logged with fault signature "ex(n2p1c0) (L2FIR[13]) NCU Powerbus data
timeout". No FRU is called out. The error may be ignored, and the
re-IPL is successful. The error occurs very infrequently. This is the
second iteration of this fix to be released: expedient routing of the
Powerbus interrupts did not occur in all cases with the prior fix, so
the timeout problem was still occurring.
|
VM940_027_027 / FW940.00
11/25/19 |
Impact:
New
Severity: New
GA Level with key features included, listed below
- All features and fixes from the FW930.11 service pack (and
below) are included in this release. At the time of the FW940.00
release, FW930.11 is a future FW930 service pack scheduled
for the fourth quarter of 2019.
New Features and Functions
- User Mode NX Accelerator Enablement for PowerVM. This enables access
to NX accelerators, such as the gzip engine, through user mode
interfaces. The IBM Virtual HMC (vHMC) 9.1.940 provides a user
interface to this feature. The LPAR must be running in POWER9
compatibility mode to use this feature. For more information on
compatibility modes, see the following two articles in the IBM
Knowledge Center:
1) Processor compatibility mode overview: https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcm.htm
2) Processor compatibility mode definitions: https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcmdefs.htm
- Support for user mode enablement of the External Interrupt
Virtualization Engine (XIVE). This allows interrupt management to move
from the hypervisor to the operating system for improved efficiency.
Operating systems may also have to be updated to enable this support.
The LPAR must be running in POWER9 compatibility mode to use this
feature. For more information on compatibility modes, see the following
two articles in the IBM Knowledge Center:
1) Processor compatibility mode overview: https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcm.htm
2) Processor compatibility mode definitions: https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcmdefs.htm
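Whether LPARs are actually running in POWER9 compatibility mode can be
checked by filtering HMC lssyscfg output with a small script. A minimal
sketch, assuming output of one "name,mode" pair per line; the attribute
name curr_lpar_proc_compat_mode is an assumption about the HMC CLI, not
taken from this document:

```python
# Sketch: find LPARs not yet in POWER9 compatibility mode by parsing
# HMC "lssyscfg" output. Assumes the output of a command like
#   lssyscfg -r lpar -m <system> -F name,curr_lpar_proc_compat_mode
# (one "name,mode" pair per line); the attribute name is an assumption
# about the HMC CLI, not taken from this document.
def lpars_not_in_mode(lssyscfg_output: str, wanted: str = "POWER9") -> list:
    stale = []
    for line in lssyscfg_output.strip().splitlines():
        name, mode = line.rsplit(",", 1)
        if mode.strip() != wanted:
            stale.append(name)
    return stale

sample = "lpar1,POWER9\nlpar2,POWER8\nlpar3,POWER9_base\n"
print(lpars_not_in_mode(sample))  # -> ['lpar2', 'lpar3']
```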
- Extended support for PowerVM Firmware Secure Boot. This feature
restricts access to the Open Firmware prompt and validates all adapter
boot driver code. Boot adapters, or adapters that may be used as boot
adapters in the future, must be updated to the latest microcode from
IBM Fix Central. The latest microcode will ensure the adapters support
the Firmware Secure Boot feature of Power Systems. This requirement
applies when updating system firmware from a level prior to FW940 to
FW940 and later levels. The latest adapter microcode levels include
signed boot driver code. If a boot-capable PCI adapter is not installed
with the latest level of adapter microcode, the partition that owns the
adapter will boot, but error logs with SRCs BA5400A5 or BA5400A6 will
be posted. Once the adapter(s) are updated, the error logs will no
longer be posted.
- Linux OS support was added for PowerVM LPARs for the PCIe4 2x100GbE
ConnectX-5 RoCE adapter with feature codes #EC66/#EC67 and CCIN 2CF3.
Linux versions RHEL 7.5 and SLES 12.3 are supported.
System firmware changes that
affect all systems
- A problem was fixed for incorrect call outs for PowerVM
hypervisor
terminations with SRC B7000103 logged. With the fix, the call
outs are
changed from SVCDOCS, FSPSP04, and FSPSP06 to FSPSP16. When this
type
of termination occurs, IBM support requires the dumps be collected to
determine the cause of failure.
- A problem was fixed for an IPL failure with the following possible
SRCs logged: 11007611, 110076x1, 1100D00C, and 110015xx. The service
processor may reset/reload for this intermittent error and end up in
the termination state.
|