VH950
For Impact, Severity, and other firmware definitions, please
refer to the 'Glossary of firmware terms' at the following URL:
http://www14.software.ibm.com/webapp/set2/sas/f/power5cm/home.html#termdefs
The complete Firmware Fix History for this Release Level can be
reviewed at the following URL:
https://public.dhe.ibm.com/software/server/firmware/VH-Firmware-Hist.html
|
VH950_105_045 / FW950.50
07/29/22 |
Impact: Availability
Severity: HIPER
System firmware changes that affect all systems
- HIPER/Non-Pervasive:
A problem was fixed for certain SR-IOV adapters in shared mode when the
physical port is configured for Virtual Ethernet Port Aggregator
(VEPA). Packets may not be forwarded after a firmware update, or after
certain error scenarios which require an adapter reset. Users
configuring or using VEPA mode should install this update.
This fix pertains to adapters with the following Feature Codes and
CCINs: #EC2R/#EC2S with CCIN 58FA; #EC2T/#EC2U with CCIN 58FB;
#EC3L/#EC3M with CCIN 2CEC; and #EC66/#EC67 with CCIN 2CF3.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
- A problem was fixed for a rare service processor core dump
for NetsCommonMsgServer with SRC B1818611 logged that can occur when
doing an AC power-on of the system. This error does not have a system
impact beyond the logging of the error as an auto-recovery happens.
- A problem was fixed for an apparent hang in a partition
shutdown where the HMC is stuck in a status of "shutting down" for the
partition. This infrequent error is caused by a timing window during
the system or partition power down where the HMC checks too soon and
does not see the partition in the "Powered Off" state. However, the
power off of the partition does complete even though the HMC does not
acknowledge it. This error can be recovered by rebuilding the HMC
representation of the managed system by following these steps:
1) In the navigation area on the HMC, select Systems Management >
Servers.
2) In the contents pane, select the required managed system.
3) Select Tasks > Operations > Rebuild.
4) Select Yes to refresh the internal representation of the managed
system.
- A problem was fixed for a hypervisor task failure with SRC
B7000602 logged when running debug macro "sbdumptrace -sbmgr -detail 2"
to capture diagnostic data. The secure boot trace buffer is not aligned
on a 16-byte boundary in memory which triggers the failure. With the
fix, the hypervisor buffer dump utility is changed to recognize 8-byte
aligned end of buffer boundaries.
- A problem was fixed for Predictive Error (PE) SRCs B7006A72
and B7006A74 being logged too frequently. These SRCs for PCIe
correctable error events called for a repair action but the threshold
for the events was too low for a recoverable error that does not impact
the system. The threshold for triggering the PE SRCs has been
increased.
- A problem was fixed for a system crash with SRC B7000103
that can occur when adding or removing FRUs from a PCIe3 expansion
drawer (Feature code #EMX0). This error is caused by a very rare race
scenario when processing multiple power alerts from the expansion
drawer at the same time.
- A problem was fixed for an HMC incomplete state for the
managed system after a concurrent firmware update. This is an
infrequent error caused by an HMC query race condition while the
concurrent update is rebooting tasks in the hypervisor. A system re-IPL
is needed to recover from the error.
- A problem was fixed for an On-Chip Controller (OCC) and a
Core Management Engine (CME) boot failure during the IPL with SRC
BC8A090F and a RC_STOP_GPE_INIT_TIMEOUT error logged. This is an
intermittent IPL failure. The system can be recovered by retrying the
IPL. This fix reduces the frequency of the error, but it may still
rarely occur; if it does occur, retrying the IPL will recover the
system.
- A problem was fixed for a failed correctable error recovery
for a DIMM that causes a flood of SRC BC81E580 error logs and also can
prevent dynamic memory deallocation from occurring for a hard memory
error. This is a very rare problem caused by an unexpected number
of correctable error symbols for the DIMM in the per-symbol counter
registers.
|
VH950_099_045 / FW950.40
05/06/22 |
Impact: Security
Severity: HIPER
System firmware changes that affect all systems
- HIPER/Non-Pervasive:
A problem was fixed for a flaw in OpenSSL TLS which can lead to an
attacker being able to compute the pre-master secret in connections
that have used a Diffie-Hellman (DH) based ciphersuite. In such a case
this would result in the attacker being able to eavesdrop on all
encrypted communications sent over that TLS connection. OpenSSL
supports encrypted communications via the Transport Layer Security
(TLS) and Secure Sockets Layer (SSL) protocols. With the fix, the
service processor Lighttpd web server is changed to only use a strict
cipher list that precludes the use of the vulnerable
ciphersuites. The Common Vulnerability and Exposure number for
this problem is CVE-2020-1968.
- A problem was fixed for the change that disables Service
Location Protocol (SLP) by default on a newly shipped system: with this
fix, SLP is also disabled by a reset to manufacturing defaults on all
systems, and SLP is disabled on all systems when this fix is applied by
the firmware update. The SLP configuration change has been made to
reduce memory usage on the service processor by disabling a service
that is not needed for normal system operations. In the case where SLP
does need to be enabled, the SLP setting can be changed using ASMI with
the options "ASMI -> System Configuration -> Security ->
External Services Management" to enable or disable the service.
Without this fix, resetting to manufacturing defaults from ASMI does
not change the SLP setting that is currently active.
- A problem was fixed for ASMI TTY menus allowing an
unsupported change in hypervisor mode to OPAL. This causes an IPL
failure with BB821410 logged if OPAL is selected. The hypervisor
mode is not user-selectable in POWER9 and POWER10. Instead, the
hypervisor mode is determined by the MTM of the system. With this fix,
the "Firmware Configuration" option in ASMI TTY menus is removed so
that it matches the options given by the ASMI GUI menus.
- A problem was fixed for correct ASMI passwords being
rejected when accessing ASMI using an ASCII terminal with a serial
connection to the server. This problem always occurs for systems
at firmware level FW950.10 and later.
- A problem was fixed for a flaw in OpenSSL certificate
parsing that could result in an infinite loop in the hypervisor,
causing a hang in a Live Partition Mobility (LPM) target partition. The
trigger for this failure is an LPM migration of a partition with a
corrupted vTPM certificate. This is expected to be a rare
problem. The Common Vulnerability and Exposure number for this
problem is CVE-2022-0778.
- A problem was fixed for a flaw in OpenSSL certificate
parsing that could result in an infinite loop in the hypervisor,
causing a hang in a Live Partition Mobility (LPM) target
partition. The trigger for this failure is an LPM migration of a
partition with a corrupted physical trusted platform module (pTPM)
certificate.
This is expected to be a rare problem. The Common Vulnerability
and Exposure number for this problem is CVE-2022-0778.
- A problem was fixed for a partition with an SR-IOV logical
port (VF) having a delay in the start of the partition. If the
partition boot device is an SR-IOV logical port network device, this
issue may result in the partition failing in boot with SRCs BA180010
and BA155102 logged and then stuck on progress code SRC 2E49 for an AIX
partition. This problem is infrequent because it requires
multiple error conditions at the same time on the SR-IOV adapter.
To trigger this problem, multiple SR-IOV logical ports for the same
adapter must encounter EEH conditions at roughly the same time such
that a new logical port EEH condition is occurring while a previous EEH
condition's handling is almost complete but not notified to the
hypervisor yet. To recover from this problem, reboot the
partition.
- A problem was fixed for errors that can occur if doing a
Live Partition Mobility (LPM) migration and a Dynamic Platform
Optimizer (DPO) operation at the same time. The migration may
abort or the system or partition may crash. This problem requires
running multiple migrations and DPO at the same time. As a
circumvention, do not use DPO while doing LPM migrations.
- A problem was fixed for a secondary fault after a partition
creation error that could result in a Terminate Immediate (TI) of the
system with an SRC B700F103 logged. The failed creation of
partitions can be explicit or implicit which might trigger the
secondary fault. One example of an implicit partition create is
the ghost partition created for a Live Partition Mobility (LPM)
migration. This type of partition can fail to create when there
is insufficient memory available for the hardware page table (HPT) for
the new partition.
- A problem was fixed for a partition reboot recovery for an
adapter in SR-IOV shared mode that rebooted with an SR-IOV port
missing. Prior to the reboot, this adapter had SR-IOV ports that
failed and were removed after multiple adapter faults. This
problem should only occur rarely as it requires a sequence of multiple
faults on an SR-IOV adapter in a short time interval to force the
SR-IOV Virtual Function (VF) into the errant unrecoverable state.
The missing SR-IOV port can be recovered for the partition by doing a
remove and add of the failed adapter with DLPAR, or the system can be
re-IPLed.
- A problem was fixed for an intermittent link training error
for an NVMe drive during the IPL, resulting in "Unknown" I/O on the HMC
and Informational SRC B7006976 error logs. A run time plug of the
drive does not have the error. As a workaround, the drive slot
can be powered off from the HMC and then powered on to recover the
drive. Or if the drive is assigned to an OS and has not link
trained, an OS-directed hot plug can be used to power off the slot and
power it back on. The NVMe drive affected by this problem is
the 800 GB NVMe U.2 7 mm SSD (Solid State Drive) PCIe4 drive in a
7 mm carrier with Feature Code #EC7Q and CCIN 59B4 for AIX, Linux, and
VIOS.
- The following problems were fixed for certain SR-IOV
adapters:
1) A problem was fixed for certain SR-IOV adapters that occurs during a
VNIC failover where the VNIC backing device has a physical port down
due to an adapter internal error with an SRC B400FF02 logged.
This is an improved version of the fix delivered in earlier service
pack FW950.10 for adapter firmware level 11.4.415.37 and it
significantly reduces the frequency of the error being fixed.
2) A problem was fixed for an adapter issue where traffic does not flow
on a VF when the VF is configured with a PVID set to zero and OS VLAN
tagging is used on a physical port where a VF with a PVID
set to the same VLAN ID already exists. This problem occurs whenever
this specific VF configuration is dynamically added to a partition or
is activated as part of a partition activation.
This fix updates the adapter firmware to 11.4.415.43 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with
CCIN 2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and
#EN0K/#EN0L with CCIN 2CC1.
Update instructions: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
- A problem was fixed for multiple incorrect
informational error logs with Thermal Management SRC B1812649 being
logged on the service processor. These error logs are more frequent on
multiple node systems, but can occur on all system models. The error is
triggered by a false time-out and does not reflect a real problem on
the service processor.
System firmware changes that affect certain systems
- For a system with an AIX or Linux partition, a problem was
fixed for a partition start failure for AIX or Linux with SRC BA54504D
logged. This problem occurs if the partition is an MDC default
partition with virtual Trusted Platform Module (vTPM) enabled. As
a circumvention, power off the system and disable vTPM using the HMC
GUI to change the default partition property for Virtualized Trusted
Platform Module (VTPM) to off.
- For a system with vTPM enabled, a problem was fixed
for an intermittent system hang with SRCs 11001510 and B17BE434 logged
and the HMC showing the system in the "Incomplete" state. This problem
is very rare. It may be triggered by different scenarios such as
a partition power off; a processor DLPAR remove operation; or a
Simultaneous Multi-threading (SMT) mode change in a partition.
- For a system that does not have an HMC attached, a problem
was fixed for a system dump 2GB or greater in size failing to off-load
to the OS with an SRC BA280000 logged in the OS and an SRC BA28003B
logged on the service processor. This problem does not affect
systems with an attached HMC since in that case system dumps are
off-loaded to the HMC, not the OS, where there is no 2GB boundary error
for the dump size.
|
VH950_092_045 / FW950.30
12/09/21 |
Impact: Security
Severity: HIPER
System firmware changes that affect all systems
- HIPER/Non-Pervasive:
A security problem was fixed to prevent an attacker that gains service
access to the FSP service processor from reading and writing PowerVM
system memory using a series of carefully crafted service
procedures. This problem is Common Vulnerability and Exposure
number CVE-2021-38917.
- HIPER/Non-Pervasive:
A problem was fixed for the IBM PowerVM Hypervisor where a
specific sequence of VM management operations could lead to a violation
of the isolation between peer VMs. The Common Vulnerability and
Exposure number is CVE-2021-38918.
- HIPER/Non-Pervasive:
For systems with IBM i partitions, the PowerVM hypervisor is
vulnerable to a carefully crafted IBM i hypervisor call that can lead
to a system crash. This Common Vulnerability and Exposure number
is CVE-2021-38937.
- A problem was fixed for a possible denial of service on the
service processor for ASMI and Redfish users. This problem is
very rare and could be triggered by a large number of invalid login
attempts to Redfish over a short period of time.
- A problem was fixed for a service processor hang after a
successful system power down with SRC B181460B and SRC B181BA07
logged. This is a very rare problem that results in a fipsdump
and a reset/reload of the service processor that recovers from the
problem.
- A problem was fixed for system fans not increasing in speed
when partitions are booted with PCIe hot adapters that require
additional cooling. This fan speed problem can also occur if
there is a change in the power mode that requires a higher minimum
speed for the fans of the system than is currently active. Fans
running at a slower speed than required for proper system cooling could
lead to over-temperature conditions for the system.
- A problem was fixed for a hypervisor hang and HMC
Incomplete error with SRC B17BE434 logged on a system with virtual
Network Interface Controller (vNIC) adapters. The failure
is triggered by actions occurring on two different SR-IOV logical ports
for the same adapter in the VIOS that is backing the vNIC that result
in a deadlock condition. This is a rare failure that can occur
during a Live Partition Mobility (LPM) migration for a partition with
vNIC adapters.
- A problem was fixed for a longer boot time for a shared
processor partition on the first boot after the processor chip 0 has
been guarded. The partition boot could stall at SRC C20012FF but
eventually complete. This rare problem is triggered by the loss
of all cores in processor chip 0. On subsequent partition boots
after the slow problem boot, the boot speeds return to normal.
- A problem was fixed to reduce an IPL window where the
resource values for Power Enterprise Pools (PEP) 1.0 pool are pending
prior to a system IPL completing. With the fix, the IPL time for
a system in a PEP 1.0 pool has been decreased such that the partition
min/cur/max values for PEP are available sooner. It is still the
case that the IPL must be completed before the PEP resource values are
correct.
- A problem was fixed for a Live Partition Mobility (LPM)
hang during LPM validation on the target system. This is a rare
system problem triggered during an LPM migration that causes LPM
attempts to fail as well as other functionality such as configuration
changes and partition shutdowns.
To recover from this problem to be able to do LPM and other operations
such as configuration changes and shutting down partitions, the system
must be re-IPLed.
- A problem was fixed for incorrect Power Enterprise
Pools (PEP) 2.0 throttling when the system goes out of compliance.
When the system is IPLed after going out of compliance, the amount of
throttled resources is lower than it should be on the first day after
the IPL. Later on, the IBM Cloud Management Console (CMC)
corrects the throttle value. This problem requires that a PEP 2.0
system go out of compliance, so it should happen only
rarely. To recover from this problem, the user can wait for up to
one day after the IPL or have the CMC resend the desired PEP Throttling
resource amount to correct it immediately.
- A problem was fixed for the system powering off after a
hardware discovery IPL. This will happen if a hardware discovery
IPL is initiated while the system is set to "Power off when last
partition powers off". The system will power off when the
Hardware Discovery Information (IOR) partition that does hardware
discovery powers off. As a workaround, one should not use the
"Power off when last partition powers off" setting when doing the
hardware discovery IPL. Alternatively, one can just do a normal IPL
after the system powers off, and then continue as normal.
- A problem was fixed for system NVRAM corruption that can
occur during PowerVM hypervisor shutdown. This is a rare error
caused by a timing issue during the hypervisor shutdown. If
this error occurs, the partition data cannot be read from the
invalid NVRAM when trying to activate partitions, so the NVRAM must be
cleared and the partition profile data restored from the HMC.
- A problem was fixed for the HMC Repair and Verify (R&V)
procedure failing with "Unable to isolate the resource" during
concurrent maintenance of the #EMX0 Cable Card. This could lead
one to take disruptive action in order to do the repair. This should
occur infrequently and only with cases where a physical hardware
failure has occurred which prevents access to the PCIe reset line
(PERST) but allows access to the slot power controls.
As a workaround, pulling both cables from the Cable Card to the #EMX0
expansion drawer will result in a completely failed state that can be
handled by bringing up the "PCIe Hardware Topology" screen from either
ASMI or the HMC. Then retry the R&V operation to recover the Cable
Card.
- A problem was fixed to prevent a flood of informational
PCIe Host Bridge (PHB) error logs with SRC B7006A74 that cause a wrap
of internal flight recorders and loss of data needed for problem
debug. This flood can be triggered by bad cables or other issues
that cause frequent informational error logs. With the fix,
thresholding has been added for informational PHB correctable errors at
10 in 24 hours before a Predictive Error is logged.
- A problem was fixed for vague and misleading errors caused
by using an invalid logical partition (LP) id for a resource dump
request. With the fix, the invalid LP id is rejected immediately
as a user input error instead of being processed by the main storage
dump to create what appear to be severe errors.
- A problem was fixed for certain SR-IOV adapters that
encountered a rare adapter condition, had some response delays, and
logged an Unrecoverable Error with SRC B400FF02. With the fix,
handling of this rare condition is accomplished without the response
delay and an Informational Error is logged, and the adapter
initialization continues without interruption. This fix pertains
to adapters with the following Feature Codes and CCINs: #EC2R/EC2S with
CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and
#EC66/EC67 with CCIN 2CF3.
Update instructions: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
- A change was made for certain SR-IOV adapters to move up
to the latest level of adapter firmware. No specific adapter
problems were addressed at this new level. This change updates
the adapter firmware to XX.30.1004 for the following Feature Codes and
CCINs: #EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M
with CCIN 2CEC; and #EC66/EC67 with CCIN 2CF3.
Update instructions: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
- A problem was fixed for an SR-IOV adapter in shared mode
configured as Virtual Ethernet Port Aggregator (VEPA) where the SR-IOV
adapter goes through EEH error recovery, causing an informational error
with SRC B400FF04 and additional information text that indicates a
command failed. This always happens when an adapter goes through
EEH recovery and a physical port is in VEPA mode. With the fix,
the informational error is not logged.
Update instructions: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
- A problem was fixed for certain SR-IOV adapters where
Virtual Functions (VFs) failed to configure after an immediate restart
of a logical partition (LPAR) or a shutdown/restart of an LPAR.
This problem only happens intermittently but is more likely to occur
for the immediate restart case. A workaround for the problem is
to try another shutdown and restart of the partition or use DLPAR to
remove the failing VF and then use DLPAR to add it back in. This
fix pertains to adapters with the following Feature Codes and CCINs:
#EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with
CCIN 2CEC; and #EC66/EC67 with CCIN 2CF3.
The fix is in the Partition Firmware and is effective immediately after
a firmware update to the fix level.
- A problem was fixed for a failed IPL after a service
processor failover with SRC B181BA85 logged. This is a rare
problem that can occur on a system that does not have service processor
redundancy enabled. This could happen on the genesis IPL of a new
system if a service processor failover is done before enabling
redundancy. This problem can be recovered by enabling service
processor redundancy, resetting the service processors, and then doing
the IPL.
- A problem was fixed for a system hypervisor hang and an
Incomplete state on the HMC after a logical partition (LPAR) is deleted
that has an active virtual session from another LPAR. This
problem happens every time an LPAR is deleted with an active virtual
session. This is a rare problem because virtual sessions from an
HMC (a more typical case) prevent an LPAR deletion until the virtual
session is closed, but virtual sessions originating from another LPAR
do not have the same check.
System firmware changes that affect certain systems
- For a system with an IBM i partition, a problem was fixed
for memory-mapped I/O and interrupt resources not being cleaned up for
an SR-IOV VF when an IBM i partition is shut down. This is a rare
problem that requires adapters in SR-IOV shared mode to be assigned to
the partition and certain timings of activity on the adapter prior to a
shutdown of the partition. The lost resources are not available
on the next activation of the partition, but in most cases, this should
not result in a loss of function. The lost resources are
recovered on the next re-IPL of the system.
- For systems with IBM i partitions, a problem was fixed for
incorrect Power Enterprise Pools (PEP) 2.0 messages reporting "Out of
Compliance" with regards to IBM i licenses. These messages can be
ignored as there is no compliance issue to address in this case.
- For a system with a Linux partition using an SR-IOV
adapter, a problem was fixed for ping failures and packet loss for an
SR-IOV logical port when a Dynamic DMA Window (DDW) changes from a
bigger DMA window page size (such as 64K) back to the smaller default
window page size (4K). This can happen during an error recovery
that causes a DDW reset back to the default window page size.
- For a system with an IBM i partition, a problem was
fixed for an IBM i partition running in P7 or P8 processor
compatibility mode failing to boot with SRCs BA330002 and B200A101
logged. This problem can be triggered as larger configurations
for processors and memory are added to the partition. A
circumvention for this problem could be to reduce the number of
processors and memory in the partition, or to boot in P9 or later
compatibility mode, which will also allow the partition to boot.
- For a system with an AIX or Linux partition, a
problem was fixed for Platform Error Logs (PELs) that are
truncated to only eight bytes for error logs created by the firmware
and reported to the AIX or Linux OS. These PELs may appear to be
blank or missing on the OS. This rare problem is triggered by
multiple error log events in the firmware occurring close together in
time and each needing to be reported to the OS, causing a truncation in
the reporting of the PEL. As a problem workaround, the full error
logs for the truncated logs are available on the HMC or using ASMI on
the service processor to view them.
|
VH950_087_045 / FW950.20
09/16/21 |
Impact: Data
Severity: HIPER
New Features and Functions
- Support added for a mainstream 800 GB NVMe U.2 7 mm SSD
(Solid State Drive) PCIe4 drive in a 7 mm carrier with Feature Code
#EC7Q and CCIN 59B4 for AIX, Linux, and VIOS.
This PCIe4 drive is also compatible with PCIe3 slots on the system.
- Support was changed to disable Service Location Protocol
(SLP) by default for newly shipped systems or systems that are reset to
manufacturing defaults. This change has been made to reduce
memory usage on the service processor by disabling a service that is
not needed for normal system operations. This change can be made
manually for existing customers by changing it in ASMI with the options
"ASMI -> System Configuration -> Security -> External Services
Management" to disable the service.
- Support was added to generate a service processor fipsdump
whenever there is a Hostboot (HB) TI and HB dump. Without this new
support, an HB crash (with an HB dump) does not generate a fipsdump and
the FSP FFDC at that point in time, so it was difficult to correlate
what was seen in the HB dump with what was happening on the FSP at the
time of the HB failure.
System firmware changes that affect all systems
- HIPER: A problem was fixed that may occur on a target system
following a Live
Partition Mobility (LPM) migration of an AIX partition utilizing Active
Memory Expansion (AME) with 64 KB page size enabled using the vmo
tunable: "vmo -ro ame_mpsize_support=1". The problem may result
in AIX termination, file system corruption, application segmentation
faults, or undetected data corruption.
Note: If you are doing an LPM migration of an AIX partition
utilizing AME and 64 KB page size enabled involving a POWER8 or POWER9
system, ensure you have a Service Pack including this change for the
appropriate firmware level on both the source and target systems.
- A problem was fixed for a VPD update error with B155A40F
and B181A40F SRCs being logged when the backup service processor is
guarded. This intermittently occurs for the VPD "GD" record for
the service processor during a guard of the backup service processor.
There is no impact to normal operations for this error, but information
about the service processor having been guarded is not recorded in its
VPD.
- A problem was fixed for the error log description for SRC
B150BAEA to include "Clock Card". The SRC is logged when there is
a bad clock card configuration but without the fix, the SRC description
only states that the "Service Processor Firmware encountered an
internal error."
- A problem was fixed for a missing hardware callout and
guard for a processor chip failure with SRC BC8AE540 and signature
"ex(n0p0c5) (L3FIR[28]) L3 LRU array parity error".
- A problem was fixed for processor cores in a node not being
shown as deconfigured in ASMI when the node is deconfigured. A
workaround for this issue is to go to the ASMI "Processor
Deconfiguration" menu and navigate to the second page to get the
Processor Unit level details. By selecting the deconfigured Node
and "Continue" button, ASMI shows the correct information. This
is a further solution to the fix delivered in FW950.10 that was found
to be incomplete and still showed the problem.
- A problem was fixed for a missing hardware callout and
guard for a processor chip failure with Predictive Error (PE) SRC
BC70E540 and signature "ex(n1p2c6) (L2FIR[19]) Rc or NCU Pb data CE
error". The PE error occurs after the number of CE errors reaches
a threshold of 32 errors per day.
- A problem was fixed for an infrequent SRC of B7006956 that
may occur during a system power off. This SRC indicates that
encrypted NVRAM locations failed to synchronize with the copy in memory
during the shutdown of the hypervisor. This error can be ignored as the
encrypted NVRAM information is stored in a redundant location, so the
next IPL of the system is successful.
- A problem was fixed for a service processor mailbox (mbox)
timeout error with SRC B182953C during the IPL of systems with large
memory configurations and "I/O Adapter Enlarged Capacity" enabled from
ASMI. The error indicates that the hypervisor did not respond
quickly enough to a message from the service processor but this may not
result in an IPL failure. The problem is intermittent, so if the
IPL does fail, the workaround is to retry the IPL.
- A problem was fixed for a misleading SRC B7006A20
(Unsupported Hardware Configuration) that can occur for some error
cases for PCIe3 #EMX0 expansion drawers that are connected with copper
cables. For cable unplug errors, the SRC B7006A88 (Drawer
TrainError) should be shown instead of the B7006A20. If a
B7006A20 is logged against copper cables with the signature "Prc
UnsupportedCableswithFewerChannels" and the message "NOT A 12CHANNEL
CABLE", this error should instead follow the service actions for a
B7006A88 SRC.
- Problems were fixed for DLPAR operations that change the
uncapped weight of a partition and DLPAR operations that switch an
active partition from uncapped to capped. After changing the
uncapped weight, the weight can be incorrect. When switching an
active partition from uncapped to capped, the operation can fail.
- A problem was fixed where the Floating Point Unit
Computational Test, which should be set to "staggered" by default, has
been changed in some circumstances to be disabled. If you wish to
re-enable this option, this fix is required. After applying this
service pack, do the following steps:
1) Sign in to the Advanced System Management Interface (ASMI).
2) Select Floating Point Computational Unit under the System
Configuration heading and change it from disabled to what is needed:
staggered (run once per core each day) or periodic (a specified time).
3) Click "Save Settings".
- A problem was fixed for service processor failovers that
are not successful, causing the HMC to lose communication to the
hypervisor and go into the "Incomplete" state. The error is triggered
when multiple failures occur during a service processor failover,
resulting in an extra host or service processor initiated reset/reload
during the failover, which causes the PSI links to be in the wrong
state at the end of the process.
- A problem was fixed for a hypervisor hang and HMC
Incomplete error as a secondary problem after an SR-IOV adapter has
gone into error recovery for a failure. This secondary failure is
infrequent because it requires an unrecovered error first for an SR-IOV
adapter.
- A problem was fixed for a system termination with SRC
B700F107 following a time facility processor failure with SRC
B700F10B. With the fix, the transparent replacement of the failed
processor will occur for the B700F10B if there is a free core, with no
impact to the system.
- A problem was fixed for an incorrect "Power Good fault" SRC
B7006A85 (AOCABLE, PCICARD) logged for the lower CXP cable of an
#EMX0 PCIe3 expansion drawer. The correct SRC is B7006A86
(PCICARD, AOCABLE).
- A problem was fixed for a Live Partition Mobility (LPM)
migration that failed with the error "HSCL3659 The partition migration
has been stopped because orchestrator detected an error" on the
HMC. This intermittent and rare problem is triggered by
the HMC being overrun with unneeded LPM message requests from the
hypervisor, which can cause a timeout in HMC queries that aborts the
LPM operation. The workaround is to retry the LPM
migration, which will normally succeed.
- A problem was fixed for an SR-IOV adapter in shared mode
configured as Virtual Ethernet Port Aggregator (VEPA) where unmatched
unicast packets were not forwarded to the promiscuous mode VF.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
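As background, the VEPA forwarding rule restored by the fix above can be
modeled with a small sketch. This is a toy illustration only (the names
deliver_unicast and the MAC table are hypothetical; the real logic is in
the adapter firmware): a unicast packet whose destination MAC matches no
configured VF should be delivered to the promiscuous-mode VF rather than
dropped.

```python
# Toy model of the VEPA unicast-forwarding rule described above.
# Names (deliver_unicast, vf_macs) are illustrative, not adapter APIs.

def deliver_unicast(dest_mac, vf_macs, promiscuous_vf):
    """Return the VF index that should receive a unicast frame.

    vf_macs: mapping of MAC address -> VF index for configured VFs.
    promiscuous_vf: VF index configured in promiscuous mode, or None.
    """
    if dest_mac in vf_macs:
        return vf_macs[dest_mac]  # matched: deliver to the owning VF
    return promiscuous_vf         # unmatched: promiscuous VF receives it

vfs = {"02:00:00:00:00:01": 0, "02:00:00:00:00:02": 1}
assert deliver_unicast("02:00:00:00:00:02", vfs, promiscuous_vf=3) == 1
# The defect: unmatched unicast was dropped instead of reaching VF 3.
assert deliver_unicast("02:00:00:00:00:99", vfs, promiscuous_vf=3) == 3
```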
- A problem was fixed for certain SR-IOV adapters in SR-IOV
Shared mode which may cause a network interruption and SRCs B400FF02
and B400FF04 logged. The problem occurs infrequently during
normal network traffic.
This fix updates the adapter firmware to 11.4.415.38 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with
CCIN 2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and
#EN0K/#EN0L with CCIN 2CC1.
Update instructions: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
- A problem was fixed for the Device Description in a System
Plan for Crypto Coprocessors and NVMe cards, which showed only
the PCI vendor and device ID of the cards. This is not
enough information to verify which card is installed without looking up
the PCI IDs first. With the fix, more specific/useful information
is displayed and this additional information does not have any adverse
impact on sysplan operations. The problem is seen every time a
System Plan is created for an installed Crypto Coprocessor or NVMe
card.
- A problem was fixed for some serviceable events specific to
the reporting of EEH errors not being displayed on the HMC. The
sending of an associated call home event, however, was not
affected. This problem is intermittent and infrequent.
- A problem was fixed for possible partition errors following
a concurrent firmware update from FW910 or later. A precondition for
this problem is that DLPAR operations of either physical or virtual I/O
devices must have occurred prior to the firmware update. The
error can take the form of a partition crash at some point following
the update. The frequency of this problem is low. If the problem
occurs, the OS will likely report a DSI (Data Storage Interrupt)
error. For example, AIX produces a DSI_PROC log entry. If
the partition does not crash, it is also possible that some subsequent
I/O DLPAR operations will fail.
- A problem was fixed for Platform Error Logs (PELs) not
being logged and shown by the OS if they have an Error Severity code of
"critical error". The trigger is the reporting by a system
firmware subsystem of an error log that has set an Event/Error Severity
in the 'UH' section of the log to a value in the range 0x50 to
0x5F. The following error logs are affected:
B200308C ==> PHYP ==> A problem occurred during the IPL of
a partition. The adapter type cannot be determined. Ensure that a
valid I/O Load Source is tagged.
B700F104 ==> PHYP ==> Operating System error. Platform
Licensed Internal Code terminated a partition.
B7006990 ==> PHYP ==> Service processor failure
B2005149 ==> PHYP ==> A problem occurred during the IPL of
a partition.
B700F10B ==> PHYP ==> A resource has been disabled due to
hardware problems
A7001150 ==> PHYP ==> System log entry only, no service action
required. No action needed unless a serviceable event was logged.
B7005442 ==> PHYP ==> A parity error was detected in the hardware
Segment Lookaside Buffer (SLB).
B200541A ==> PHYP ==> A problem occurred during a partition
Firmware Assisted Dump
B7001160 ==> PHYP ==> Service processor failure.
B7005121 ==> PHYP ==> Platform LIC failure
BC8A0604 ==> Hostboot ==> A problem occurred during the IPL
of the system.
BC8A1E07 ==> Hostboot ==> Secure Boot firmware
validation failed.
Note that these error logs are still reported to the service processor
and HMC properly. This issue does not affect the Call Home action
for the error logs.
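The severity-range condition described above can be expressed directly.
This is a minimal sketch; the function name is illustrative and this is
not the actual PEL parsing code:

```python
# Sketch of the check described above: an Event/Error Severity value in
# the PEL 'UH' section in the range 0x50 through 0x5F marks the log as a
# "critical error". Names are illustrative, not IBM's PEL parser.

CRITICAL_SEVERITY_RANGE = range(0x50, 0x60)  # 0x50..0x5F inclusive

def is_critical_severity(uh_severity: int) -> bool:
    """True when the UH-section severity byte denotes a critical error."""
    return uh_severity in CRITICAL_SEVERITY_RANGE

assert is_critical_severity(0x50)
assert is_critical_severity(0x5F)
assert not is_critical_severity(0x40)  # other severities are unaffected
```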
- A problem was fixed for Live Partition Mobility (LPM)
migrations from non-trusted POWER9 systems to POWER10 systems. The LPM
migration failure occurs every time an LPM migration is attempted from a
non-trusted system source to FW1010 and later. For POWER9
systems, non-trusted is the default setting. The messages shown
on the HMC for the failure are the following:
HSCL365C The partition migration has been stopped because
platform firmware detected an error (041800AC).
HSCL365D The partition migration has been stopped because target
MSP detected an error (05000127).
A workaround for the problem is to enable the trusted system key on the
POWER9 FW940/FW950 source system which can be done using an intricate
procedure. Please contact IBM Support for help with this
workaround.
- A problem was fixed for a missing error log SRC for an
SR-IOV adapter in Shared mode that fails during the IPL because of
adapter failure or because the system has insufficient memory for
SR-IOV Shared mode for the adapter. The error log SRC added is
B7005308, indicating a serviceable event and providing the adapter and
error information.
- A problem was fixed for a Live Partition Mobility (LPM)
migration failure from a POWER9 FW950 source to a POWER10 FW1010
target. This will fail on every attempt with the following
message on the HMC:
"HSCLA2CF The partition migration has been stopped unexpectedly.
Perform a migration recovery for this partition, if necessary."
- A problem was fixed for error logs not being sent to the
HMC when disconnecting/reconnecting power cords caused a flood of
informational SRCs B1818A37 and B18187D7. After the flood of
error logs, the reporting of error logs to the HMC stopped, which also
prevented Call Home from working. To recover from the error, a
reset/reload of the service processor can be done using ASMI.
- A problem was fixed for a hardware (HW) dump not being
generated if the system checkstops because of clock card issues of "RSC
OSC" or "PLL unlock attention".
System firmware changes that
affect certain systems
- For a system with a partition running AIX 7.3, a problem
was fixed for running Live Update or Live Partition Mobility
(LPM). AIX 7.3 supports Virtual Persistent Memory (PMEM), which
cannot be used with these operations, but the problem made it
appear that PMEM was configured when it was not. The Live Update
and LPM operations always fail when attempted on AIX 7.3. Here is
the failure output from a Live Update Preview:
"1430-296 FAILED: not all devices are virtual devices.
nvmem0
1430-129 FAILED: The following loaded kernel extensions are not known
to be safe for Live Update:
nvmemdd
...
1430-218 The live update preview failed.
0503-125 geninstall: The lvupdate call failed.
Please see /var/adm/ras/liveupdate/logs/lvupdlog for details."
- On systems with only Integrated Facility for Linux (IFL)
processors and AIX or IBM i partitions, a problem was fixed for
performance issues for IFL VMs (Linux and VIOS). This problem
occurs if AIX or IBM i partitions are active on a system with IFL only
cores. As a workaround, AIX or IBM i partitions should not be
activated on an IFL only system. With the fix, the activation of
AIX and IBM i partitions is blocked on an IFL only system. If
this fix is installed concurrently with AIX or IBM i partitions
running, these partitions will be allowed to continue to run until they
are powered off. Once powered off, the AIX and IBM i partitions
will not be allowed to be activated again on the IFL-only system.
- For systems with an AIX partition and Platform Keystore
(PKS) enabled for the partition, a problem was fixed for AIX not being
able to access the PKS during a Main Store Dump (MSD) IPL. This
may prevent the dump from completing. This will happen for every
MSD IPL when the partition PKS is enabled and in use by the AIX OS.
- For a system with an AIX or Linux partition, a problem was
fixed for a boot hang in RTAS for a partition that owns I/O which uses
MSI-X interrupts. A BA180007 SRC may be logged prior to the
hang. The frequency of this RTAS hang error is very low.
|
VH950_075_045 / FW950.11
06/08/21 |
Impact: Availability
Severity: HIPER
System firmware changes that
affect all systems
- HIPER/Pervasive:
A problem was fixed for a checkstop due to an internal Bus transport
parity error or a data timeout on the Bus. This is a very rare
problem that requires a particular SMP transport link traffic pattern
and timing. Both the traffic pattern and timing are very
difficult to achieve with customer application workloads. The fix
will have no measurable effect on most customer workloads although
highly intensive OLAP-like workloads may see up to 2.5% impact.
|
VH950_072_045 / FW950.10
04/28/21 |
Impact: Availability
Severity: HIPER
New Features and Functions
- Support was added for an updated Clock and Control Card
(LCC) with new FRU number 00E5324 and the same CCIN 6B67 to improve
reliability of the system clock generation architecture. This new LCC
card cannot be mixed with the old LCC card (old FRU number:
01DH194). FW950.10 is required for this new card, which will
be installed in all new IBM Power System 9080-M9S servers and all new
MES nodes added to existing 9080-M9S systems. MES nodes will come
with spare LCC cards to replace the old 01DH194 LCC cards in existing
nodes and require a server upgrade to FW950.10 prior to installation
of the MES.
- Support added to Redfish to provide a command to set the
ASMI user passwords using a new AccountService schema. Using this
service, the ASMI admin, HMC, and general user passwords can be changed.
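A password change through the AccountService schema could look like the
following sketch. The host name, account id ("admin"), and credentials
here are placeholders for illustration; the PATCH-with-Password pattern
follows the general DMTF Redfish AccountService conventions, and the
actual account collection should be discovered from the service's
/redfish/v1/AccountService resource:

```python
# Sketch of changing an ASMI user password via the Redfish
# AccountService schema. Host, account id, and password are
# placeholders; an authenticated session would also be required.
import json
import urllib.request

def build_password_patch(host, account_id, new_password):
    """Build (url, headers, body) for a Redfish password-change PATCH."""
    url = f"https://{host}/redfish/v1/AccountService/Accounts/{account_id}"
    body = json.dumps({"Password": new_password}).encode()
    headers = {"Content-Type": "application/json"}
    return url, headers, body

url, headers, body = build_password_patch("sp.example.com", "admin", "N3wPass!")
req = urllib.request.Request(url, data=body, headers=headers, method="PATCH")
# urllib.request.urlopen(req)  # would send the request over the network
```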
- PowerVM support for the Platform KeyStore (PKS) for
partitions has removed the FW950.00 restriction where the total amount
of PKS for the system that could be configured was limited to 1 MB
across all the partitions. This restriction has been removed for
FW950.10.
- Support was added for Samsung DIMMs with part number
01GY853. If these DIMMs are installed in a system with older
firmware than FW950.10, the DIMMs will fail and be guarded with SRC
BC8A090F logged with HwpReturnCode
"RC_CEN_MBVPD_TERM_DATA_UNSUPPORTED_VPD_ENCODE".
- Support was added for a new service processor command that
can be used to 'lock' the power management mode, such that the mode
cannot be changed except by doing a factory reset.
- Support for new mainstream 931 GB, 1.86 TB, 3.72 TB, and
7.44 TB capacity SSDs. A 2.5-inch serial-attached SCSI (SAS) SSD
is mounted on an SFF-3 carrier or tray for a POWER9 system unit or
mounted on an SFF-2 for placement in an expansion drawer, such as the
EXP24SX drawer, when attached to a POWER9 server. The drive is
formatted to use 4224-byte (4k) sectors and does not support the 4k
JBOD 4096-byte sector. Firmware level FW950.10 or later is
required for these drives. The following are the feature codes
and CCINs for the new drives:
#ESKJ/#ESKK with CCIN 5B2B/5B29 - 931 GB Mainstream SAS 4k SFF-3/SFF-2
SSD for AIX/Linux
#ESKL/#ESKM with CCIN 5B2B/5B29 - 931 GB Mainstream SAS 4k SFF-3/SFF-2
SSD for IBM i
#ESKN/#ESKP with CCIN 5B20/5B21 - 1.86 TB Mainstream SAS 4k SFF-3/SFF-2
SSD for AIX/Linux
#ESKQ/#ESKR with CCIN 5B20/5B21 - 1.86 TB Mainstream SAS 4k SFF-3/SFF-2
SSD for IBM i
#ESKS/#ESKT with CCIN 5B2C/5B2D - 3.72 TB Mainstream SAS 4k SFF-3/SFF-2
SSD for AIX/Linux
#ESKU/#ESKV with CCIN 5B2C/5B2D - 3.72 TB Mainstream SAS 4k SFF-3/SFF-2
SSD for IBM i
#ESKW/#ESKX with CCIN 5B2E/5B2F - 7.44 TB Mainstream SAS 4k SFF-3/SFF-2
SSD for AIX/Linux
#ESKY/#ESKZ with CCIN 5B2E/5B2F - 7.44 TB Mainstream SAS 4k SFF-3/SFF-2
SSD for IBM i
- Support for new enterprise SSDs that refresh the previously
available 387 GB, 775 GB, and 1550 GB capacity points for POWER9
servers. These are 400 GB, 800 GB, and 1600 GB SSDs that are always
formatted either to 4224 (4k) byte sectors or to 528 (5xx) byte sectors
for additional protection, resulting in 387 GB, 775 GB, and 1550 GB
capacities. The 4096-byte sector, the 512-byte sector, and JBOD are not
supported. Firmware level FW950.10 or later is required for these
drives. The following are the feature codes and CCINs for the new
drives:
#ESK0/#ESK1 with CCIN 5B19/5B16 - 387 GB Enterprise SAS 5xx
SFF-3/SFF-2 SSD for AIX/Linux
#ESK2/#ESK3 with CCIN 5B1A/5B17 - 775 GB Enterprise SAS 5xx SFF-3/SFF-2
SSD for AIX/Linux
#ESK6/#ESK8 with CCIN 5B13/5B10 - 387 GB Enterprise SAS 4k SFF-3/SFF-2
SSD for AIX/Linux
#ESK7/#ESK9 with CCIN 5B13/5B10 - 387 GB Enterprise SAS 4k SFF-3/SFF-2
SSD for IBM i
#ESKA/#ESKC with CCIN 5B14/5B11 - 775 GB Enterprise SAS 4k SFF-3/SFF-2
SSD for AIX/Linux
#ESKB/#ESKD with CCIN 5B14/5B11 - 775 GB Enterprise SAS 4k SFF-3/SFF-2
SSD for IBM i
#ESKE/#ESKG with CCIN 5B15/5B12 - 1.55 TB Enterprise SAS 4k SFF-3/SFF-2
SSD for AIX/Linux
#ESKF/#ESKH with CCIN 5B15/5B12 - 1.55 TB Enterprise SAS 4k SFF-3/SFF-2
SSD for IBM i
- Support for new PCIe 4.0 x8 dual-port 32 Gb optical Fibre
Channel (FC) short form adapter based on the Marvell QLE2772 PCIe host
bus adapter (6.6 inches x 2.731 inches). The adapter provides two ports
of 32 Gb FC capability using SR optics. Each port can provide up to
6,400 MBps bandwidth. This adapter has feature codes #EN1J/#EN1K with
CCIN 579C. Firmware level FW950.10 or later is required for this
adapter.
- Support for new PCIe 3.0 16 Gb quad-port optical Fibre
Channel (FC) x8 short form adapter based on the Marvell QLE2694L PCIe
host bus adapter (6.6 inches x 2.371 inches). The adapter provides four
ports of 16 Gb FC capability using SR optics. Each port can provide up
to 3,200 MBps bandwidth. This adapter has feature codes #EN1E/#EN1F
with CCIN 579A. Firmware level FW950.10 or later is required for this
adapter.
- Support for Enterprise 800 GB PCIe4 NVMe SFF U.2 15mm SSD
for IBM i. The SSD can be used in any U.2 15mm NVMe slot in the system.
This drive has feature code #ES1K and CCIN 5947. Firmware level
FW950.10 or later is required for this drive.
System firmware changes that
affect all systems
- A problem was fixed
for certain POWER Interface Bus (PIB) errors on the internal bus
between the processor and the memory buffer chip. This problem results
in SRC BC200D01 being logged without a callout or deconfiguration of
the failing FRU containing the memory buffer chip, and can
result in an entire node failing to IPL instead of just having the
failing FRU deconfigured.
- HIPER/Pervasive:
A problem was fixed for unrecoverable (UE) SRCs B150BA40 and B181BE12
being logged for a Hostboot TI (due to no actual fault), causing nodes
to be deconfigured and the system to re-IPL with reduced resources. The
problem can be triggered during a firmware upgrade or disruptive
firmware update. The problem can also occur on the first IPL after a
concurrent firmware update. The problem can also occur outside of a
firmware update scenario for some reconfiguration loops that can happen
in Hostboot. There is also a visible callout indicating one or more
nodes/backplanes have a problem which can lead to unnecessary repairs.
- A problem was fixed for certain SR-IOV adapters that have a
rare, intermittent error with B400FF02 and B400FF04 logged, causing a
reboot of the VF. The error is handled and recovered without any user
intervention needed. The SR-IOV adapters affected have the following
Feature Codes and CCINs: #EC2R/#EC2S with CCIN 58FA; #EC2T/#EC2U with
CCIN 58FB; #EC3L/#EC3M with CCIN 2CEC; and #EC66/#EC67 with CCIN 2CF3.
- A problem was fixed for the backup service processor
failing to IPL and going to terminate state with a B181460B SRC
logged. There is no impact to the running system except that the
service processor redundancy is lost until the next IPL can restore it.
This is a rare problem and it can be resolved by a re-IPL of the system.
- A problem was fixed for initiating a Remote Restart from a
PowerVC/NovaLink source system to a remote target. This happens
whenever the source system is running FW950.00. The error would look
like this from PowerVC (system name, release level would be specific to
the environment):
"Virtual machine RR-5 could not be remote restarted to
Ubu_AX_9.114.255.10. Error message: PowerVM API failed to complete for
instance=RR-5-71f5c2cf-0000004e.HTTP error 500 for method PUT on path
/rest/api/uom/ManagedSystem/598c1be4-cb4c-3957-917d-327b764d8ac1/LogicalPartition:
Internal Server Error -- [PVME01040100-0004] Internal error
PVME01038003 occurred while trying to perform this command."
- A problem was fixed for a B1502616 SRC logged after a
system is powered off. This rare error, "A critical error
occurred on the thermal/power management device (TPMD); it is being
disabled." is not a real problem but occurred because the Power
Management (PM) complex was being reset during the power off. No
recovery is needed as the next IPL of the system is successful.
- A problem was fixed for the error handling of a system with
an unsupported memory configuration that exceeds available memory
power. Without the fix, the IPL of the system is attempted and fails
with a segmentation fault with SRCs B1818611 and B181460B logged that
do not call out the incorrect DIMMs.
- A problem was fixed for an error in the HMC GUI (Error
launching task) when clicking on "Hardware Virtualized IO". This error
is infrequent and is triggered by an optical cable to a PCIe3 #EMX0
expansion drawer that is failed or unplugged. With the fix, the
HMC can show the working I/O adapters.
- A problem was fixed for performance degradation of a
partition due to task dispatching delays. This may happen when a
processor chip has all of its shared processors removed and converted
to dedicated processors. This could be driven by DLPAR remove of
processors or Dynamic Platform Optimization (DPO).
- A problem was fixed for an unrecoverable UE SRC B181BE12
being logged if a service processor message acknowledgment is sent to a
Hostboot instance that has already shutdown. This is a harmless error
log and it should have been marked as an informational log.
- A problem was fixed for Time of Day (TOD) being lost for
the real-time clock (RTC) with an SRC B15A3303 logged when the service
processor boots or resets. This is a very rare problem that
involves a timing problem in the service processor kernel. If the
server is running when the error occurs, there will be an SRC B15A3303
logged, and the time of day on the service processor will be incorrect
for up to six hours until the hypervisor synchronizes its (valid) time
with the service processor. If the server is not running when the
error occurs, there will be an SRC B15A3303 logged, and if the server
is subsequently IPLed without setting the date and time in ASMI to fix
it, the IPL will abort with an SRC B7881201, which indicates to the
system operator that the date and time are invalid.
- A problem was fixed for the Systems Management Services
(SMS) menu "Device IO Information" option being incorrect when
displaying the capacity for an NVMe or Fibre Channel (FC) NVMe disk.
This problem occurs every time the data is displayed.
- A problem was fixed for intermittent failures for a reset
of a Virtual Function (VF) for SR-IOV adapters during Enhanced Error
Handling (EEH) error recovery. This is triggered by EEH events at a VF
level only, not at the adapter level. The error recovery fails if a
data packet is received by the VF while the EEH recovery is in
progress. A VF that has failed can be recovered by a partition reboot
or a DLPAR remove and add of the VF.
- A problem was fixed for a logical partition activation
error that can occur when trying to activate a partition when the
adapter hardware for an SR-IOV logical port has been physically removed
or is unavailable due to a hardware issue. This message is reported on
the HMC for the activation failure: "Error: HSCL12B5 The
operation to remove SR-IOV logical port <number> failed because
of the following error: HSCL1552 The firmware operation failed with
extended error" where the logical port number will vary. This is
an infrequent problem that is only an issue if the adapter hardware has
been removed or another problem makes it unavailable. The
workaround for this problem is to physically add the hardware back in
or correct the hardware issue. If that cannot be done, create an
alternate profile for the logical partition without the SR-IOV logical
port and use that until the hardware issue is resolved.
- A problem was fixed for an Out Of Memory (OOM) error on the
primary service processor when redundancy is lost and the backup
service processor has failed. This error causes a reset/reload of the
remaining service processor. This error is triggered by a flood of
informational logs that can occur when links are lost to a failed
service processor.
- A problem was fixed for incomplete periodic data gathered
by IBM Service for #EMX0 PCIe expansion drawer predictive error
analysis. The service data is missing the PLX (PCIe switch) data
that is needed for the debug of certain errors.
- A problem was fixed for a partition hang in shutdown with
SRC B200F00F logged. The trigger for the problem is an
asynchronous NX accelerator job (such as gzip or NX842 compression) in
the partition that fails to clean up successfully. This is intermittent
and does not cause a problem until a shutdown of the partition is
attempted. The hung partition can be recovered by performing an LPAR
dump on the hung partition. When the dump has been completed, the
partition will be properly shut down and can then be restarted without
any errors.
- A problem was fixed for a rare failure for an SPCN I2C
command sent to a PCIe I/O expansion drawer that can occur when service
data is manually collected with the hypervisor macros "xmsvc
-dumpCCData" and "xmsvc -logCCErrBuffer". If the hypervisor macro
"xmsvc" is run to gather service data and a CMC Alert occurs at the
same time that requires an SPCN command to clear the alert, then the
I2C commands may be improperly serialized, resulting in an SPCN I2C
command failure. To prevent this problem, avoid using "xmsvc
-dumpCCData" and "xmsvc -logCCErrBuffer" to collect service data until
this fix is applied.
- The following problems were fixed for certain SR-IOV
adapters:
1) An error was fixed that occurs during a VNIC failover where the VNIC
backing device has a physical port down or read port errors with an SRC
B400FF02 logged.
2) A problem was fixed for adding a new logical port that has a PVID
assigned that is causing traffic on that VLAN to be dropped by other
interfaces on the same physical port which uses OS VLAN tagging for
that same VLAN ID. Each time a logical port
with a non-zero PVID that is the same as an existing VLAN is
dynamically added to a partition or is activated as part of a partition
activation, the traffic flow stops for other partitions with OS
configured VLAN devices with the same VLAN ID. This problem can
be recovered by configuring an IP address on the logical port with the
non-zero PVID and initiating traffic flow on this logical port.
This problem can be avoided by not configuring logical ports with a
PVID if other logical ports on the same physical port are configured
with OS VLAN devices.
This fix updates the adapter firmware to 11.4.415.37 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with
CCIN 2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and
#EN0K/#EN0L with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- A problem was fixed for processor cores in the first node
not being shown as deconfigured in ASMI when the first node is
deconfigured. A workaround for this issue is to go to the ASMI
"Processor Deconfiguration" menu and navigate to the second page to get
the Processor Unit level details. By selecting Node 1 and clicking
the "Continue" button, ASMI shows the correct information.
- A problem was fixed for the NVMe slots not including the
Concurrent Maintenance Card (Un-P1-C63) in the FRU callout list. If
there is an error path involving the Concurrent Maintenance Card, there
will not be a FRU callout in the error log for it. For example,
B7006922 power fault errors for NVMe drive slots on P2 will not include
the P1-C63 concurrent maintenance card in the FRU callout list.
As a workaround, the symbolic FRU information can be analyzed to
determine the actual FRU.
- A problem was fixed for a system hang or terminate with SRC
B700F105 logged during a Dynamic Platform Optimization (DPO) that is
running with a partition in a failed state but that is not shut
down. If DPO attempts to relocate a dedicated processor from the
failed partition, the problem may occur. This problem can be
avoided by doing a shutdown of any failed partitions before initiating
DPO.
- A problem was fixed for a system crash with HMC message
HSCL025D and SRC B700F103 logged on a Live Partition Mobility (LPM)
inactive migration attempt that fails. The trigger for this problem is
inactive migration that fails a compatibility check between the source
and target systems.
- A problem was fixed for time-out issues in Power Enterprise
Pools 1.0 (PEP 1.0) that can affect performance by having non-optimal
assignments of processors and memory to the server logical partitions
in the pool. For this problem to happen, the server must be in a PEP
1.0 pool and the HMC must take longer than 2 minutes to provide the
PowerVM hypervisor with the information about pool resources owned by
this server. The problem can be avoided by running the HMC optmem
command before activating the partitions.
- A problem was fixed for certain SR-IOV adapters not being
able to create the maximum number of VLANs that are supported for a
physical port. There were insufficient memory pages allocated for the
physical functions for this adapter type. The SR-IOV adapters affected
have the following Feature Codes and CCINs: #EC66/#EC67 with CCIN
2CF3.
- A problem was fixed for certain SR-IOV adapters that can
have B400FF02 SRCs logged with LPA dumps during a vNIC remove
operation. The adapters can have issues with a deadlock in
managing memory pages. In most cases, the operations should
recover and complete. This fix updates the adapter firmware to
XX.29.2003 for the following Feature Codes and CCINs: #EC2R/EC2S with
CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and
#EC66/EC67 with CCIN 2CF3.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- A problem was fixed for a B150BA3C SRC callout by
adding a new isolation procedure to improve the accuracy of the
repair and reduce its impact on the system. One
possible trigger for the problem could be a DIMM failure in a node
during an IPL with SRC BC200D01 logged followed by a B150BA3C.
Without the fix, the system backplane is called out and a node of the
system is deconfigured. With the fix, a new isolation service
procedure FSPSP100 is added as a high priority callout and the system
backplane callout is made low priority.
"FSPSP100- A problem has been detected in the Hostboot firmware.
1. Look for previous event log(s) for same CEC drawer and replace that
hardware.
2. If no other event log exists, then submit all dumps and iqyylog for
review."
System firmware changes that
affect certain systems
- On systems with an IBM i partition, a problem was fixed for
only seeing 50% of the total Power Enterprise Pools (PEP) 1.0 memory
that is provided. This happens when querying resource information
via QAPMCONF, which calls MATMATR 0x01F6. With the fix, an error is
corrected in the IBM i MATMATR option 0x01F6 that retrieves the memory
information for the Collection Services.
- On systems with an IBM i partition, a problem was fixed for
physical I/O property data not being able to be collected for an
inactive partition booted in "IOR" mode with SRC B200A101 logged. This
can happen when making a system plan (sysplan) for an IBM i partition
using the HMC and the IBM i partition is inactive. The sysplan data
collection for the active IBM i partitions is successful.
|
VH950_045_045 / FW950.00
11/23/20 |
Impact:
New
Severity: New
GA Level with key features
included listed below
- All features and fixes from the FW930.30 and FW940.20
service packs (and below) are included in this release.
New Features and Functions
- Host firmware support for anti-rollback protection.
This feature implements firmware anti-rollback protection as described
in NIST SP 800-147B "BIOS Protection Guidelines for Servers".
Firmware is signed with a "secure version". Support added
for a new menu in ASMI called "Host firmware security policy" to update
this secure version level at the processor hardware. Using this
menu, the system administrator can enable the "Host firmware secure
version lock-in" policy, which will cause the host firmware to update
the "minimum secure version" to match the currently running firmware.
Use the "Firmware Update Policy" menu in ASMI to show the current
"minimum secure version" in the processor hardware along with the
"Minimum code level supported" information. The secure boot
verification process will block installing any firmware secure version
that is less than the "minimum secure version" maintained in the
processor hardware.
Prior to enabling the "lock-in" policy, it is recommended to accept the
current firmware level.
WARNING: Once lock-in is enabled and the system is booted, the "minimum
secure version" is updated and there is no way to roll it back to allow
installing firmware releases with a lesser secure version.
Note: If upgrading from FW930.30 or FW940.20, this feature is
already applied.
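The anti-rollback check described above reduces to a one-way version
comparison. The following is a minimal illustrative sketch, not the
PowerVM implementation; the function names and the use of plain integers
for secure versions are assumptions for illustration only:

```python
def verify_firmware_install(candidate_secure_version: int,
                            minimum_secure_version: int) -> bool:
    """Secure boot verification: reject any firmware image whose secure
    version is below the minimum held in processor hardware."""
    return candidate_secure_version >= minimum_secure_version

def lock_in(current_secure_version: int, minimum_secure_version: int) -> int:
    """Model the 'lock-in' policy: raise the minimum secure version to the
    currently running level. In hardware this update is one-way and cannot
    be rolled back."""
    return max(minimum_secure_version, current_secure_version)
```

For example, after locking in at secure version 5, an image at secure
version 4 would be blocked, while version 5 or higher still installs.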
- This server firmware level includes the SR-IOV adapter
firmware level 11.4.415.33 for the following Feature Codes and CCINs:
#EN15/EN16 with CCIN 2CE3, #EN17/EN18 with CCIN 2CE4, #EN0H/EN0J with
CCIN 2B93, #EN0M/EN0N with CCIN 2CC0, and #EN0K/EN0L with CCIN 2CC1.
- This server firmware includes the SR-IOV adapter firmware
level 1x.25.6100 for the following Feature Codes and CCINs: #EC2R/EC2S
with CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC;
and #EC66/EC67 with CCIN 2CF3.
- Support added for IBM i 7.1 (Tech Refresh 11 + PTFs) for
restricted I/O only.
- Support for PCIe4 x8 1.6/3.2/6.4 TB NVMe Adapters that are
Peripheral Component Interconnect Express (PCIe) Generation 4 (Gen4) x8
adapters with the following feature codes and CCINs:
#EC7A/#EC7B with CCIN 594A; #EC7C/#EC7D with CCIN 594B; and
#EC7E/#EC7F with CCIN 594C for AIX/Linux.
#EC7J/#EC7K with CCIN 594A; #EC7L/#EC7M with CCIN 594B; and
#EC7N/#EC7P with CCIN 594C for IBM i.
- PowerVM boot support for AIX for NVMe over Fabrics (NVMf)
over 32Gb Fibre Channel. Natively attached adapters are supported
with the following feature codes and CCINs: #EN1A/#EN1B with CCIN 578F.
- Support added for a PCIe2 2-Port USB 3.0 adapter with the
following feature codes and CCIN: #EC6J/#EC6K with CCIN 590F.
- Support added for dedicated processor partitions in IBM
Power Enterprise Pools (PEP) 2.0. Previously, systems added to
PEP 2.0 needed to have all partitions as shared processor partitions.
- Support added for SR-IOV Hybrid Network Virtualization
(HNV) for Linux. This capability allows a Linux partition
to take advantage of the efficiency and performance benefits of SR-IOV
logical ports and participate in mobility operations such as active and
inactive Live Partition Mobility (LPM) and Simplified Remote Restart
(SRR). HNV is enabled by selecting a new Migratable option when
an SR-IOV logical port is configured. The Migratable option is used to
create a backup virtual device. The backup virtual device must be
a Virtual Ethernet adapter (a virtual Network Interface Controller
(vNIC) adapter is not supported as a backup device). In addition to
this firmware, HNV support in a production environment requires HMC
9.1.941.0 or later, RHEL 8, SLES 15, and VIOS 3.1.1.20 or later.
- Enhanced Dynamic DMA Window (DDW) for I/O adapter
slots to enable the OS to use 64KB TCEs. The OS supported
is Linux RHEL 8.3 LE.
- PowerVM support for the Platform KeyStore (PKS) for
partitions. PowerVM has added new h-call interfaces allowing the
partition to interact with the Platform KeyStore that is maintained by
PowerVM. This keystore can be used by the partition to store
items requiring confidentiality or integrity like encryption keys or
certificates.
Note: The total amount of PKS for the system is limited to 1 MB
across all the partitions for FW950.00.
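The 1 MB system-wide PKS budget can be pictured with a toy model. This
is purely illustrative: the real keystore is accessed through PowerVM
h-call interfaces, and the class and method names below are invented for
the sketch, not actual APIs:

```python
class PlatformKeyStore:
    """Toy model of a keystore sharing a fixed system-wide budget.
    Illustrative only; real PKS access goes through PowerVM h-calls."""
    SYSTEM_LIMIT = 1 * 1024 * 1024  # 1 MB across all partitions (FW950.00)

    def __init__(self):
        self.objects = {}  # label -> stored bytes (e.g. keys, certificates)
        self.used = 0

    def write_object(self, label: str, data: bytes) -> bool:
        # Account for replacing an existing object under the same label.
        new_used = self.used - len(self.objects.get(label, b"")) + len(data)
        if new_used > self.SYSTEM_LIMIT:
            return False  # over budget: the hypervisor would reject this
        self.objects[label] = data
        self.used = new_used
        return True

    def read_object(self, label: str) -> bytes:
        return self.objects[label]
```

A partition storing a small certificate succeeds, while a single write
that would push total usage past 1 MB is refused.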
System firmware changes that affect all systems
- HIPER/Pervasive: A problem was fixed for a system checkstop
with an SRC BC14E540 logged that can occur during certain SMP cable
failure scenarios.
- HIPER/Pervasive:
A problem was fixed for soft error recovery not working in the
DPSS (Digital Power Subsystem Sweep) programmable power controller that
results in the DPSS being called out as a failed FRU. However,
the DPSS is recovered on the next IPL of the system.
- HIPER/Pervasive:
A problem was fixed to detect a failed PFET sensing circuit in a core
at runtime and prevent a system failure with an incomplete state when a
core fails to wake up. The failed core is detected on the subsequent
IPL. With the fix, a core with the PFET failure is called out with SRC
BC13090F and the hardware description "CME detected malfunctioning of
PFET headers." to better isolate the error with a correct callout.
- A problem was fixed for system UPIC cable validation not
being able to detect cross-plugged UPIC cables. If the cables are
plugged incorrectly and there is a need for service, modifying the
wrong FRU locations can have adverse effects on the system, including
system outage. The cable status that is displayed is the result of the
last cable validation that occurred. Cable validation occurs
automatically during system power on.
Note: If upgrading from FW930.30, this fix is already applied.
- A problem was fixed for a VIOS, AIX, or Linux partition
hang during activation at SRC CA000040. This occurs when a partition
boot is attempted on a system that has been running for more than 814
days, if the partitions are in POWER9_base or POWER9 processor
compatibility mode.
A workaround is to re-IPL the system or to change the failing partition
to POWER8 compatibility mode.
Note: If upgrading from FW930.30, this fix is already applied.
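The 814-day figure is consistent with a fixed-width time counter
wrapping. Purely as an arithmetic sketch (both the 512 MHz POWER
timebase frequency and the 55-bit rollover width are assumptions not
stated in this release note):

```python
TIMEBASE_HZ = 512_000_000  # assumed POWER timebase tick rate
COUNTER_BITS = 55          # assumed width before the counter wraps

seconds_to_wrap = 2**COUNTER_BITS / TIMEBASE_HZ
days_to_wrap = seconds_to_wrap / 86_400
print(round(days_to_wrap, 1))  # about 814.5 days of continuous uptime
```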
|