01VH920_123_101.html Power9 System Firmware
Applies to: 9080-M9S
This document provides information about the installation of Licensed Machine
or Licensed Internal Code, which is sometimes referred to generically as
microcode or firmware.
----------------------------------------------------------------------------------
Contents
* 1.0 Systems Affected
* 1.1 Minimum HMC Code Level
* 2.0 Important Information
* 2.1 IPv6 Support and Limitations
* 2.2 Concurrent Firmware Updates
* 2.3 Memory Considerations for Firmware Upgrades
* 2.4 NIM install issue using SR-IOV shared mode at FW 920.20, 920.21, and
920.22
* 2.5 SBE Updates
* 3.0 Firmware Information
* 3.1 Firmware Information and Description Table
* 4.0 How to Determine Currently Installed Firmware Level
* 5.0 Downloading the Firmware Package
* 6.0 Installing the Firmware
* 7.0 Firmware History
----------------------------------------------------------------------------------
1.0 Systems Affected
This package provides firmware for Power Systems E980 (9080-M9S) servers only.
The firmware level in this package is:
* VH920_123 / FW920.60
----------------------------------------------------------------------------------
1.1 Minimum HMC Code Level
This section describes the "Minimum HMC Code Level" required by the System
Firmware to complete the firmware installation process. When installing the
System Firmware, the HMC level must be equal to or higher than the "Minimum
HMC Code Level" before starting the system firmware update. If the HMC
managing the server targeted for the System Firmware update is running a code
level lower than the "Minimum HMC Code Level", the firmware update will not
proceed.
The Minimum HMC Code levels for this firmware for HMC x86, ppc64, and ppc64le
are listed below.
x86 - This term refers to the legacy HMC that runs on x86/Intel/AMD hardware,
for both the 7042 Machine Type appliances and the Virtual HMC that can run on
the Intel hypervisors (KVM, VMware, Xen).
* The Minimum HMC Code level for this firmware is: HMC V9R1M921 (PTF MH01789).
* Although the Minimum HMC Code level for this firmware is listed above,
HMC V9R2M951.2 (PTF MH01892) or higher is recommended to avoid an issue that
can cause the HMC to briefly lose connections to all servers, with service
events E2FF1409 and E23D040A being reported. This failure causes all running
server tasks, such as a server firmware upgrade, to fail.
ppc64 or ppc64le - This term describes the Linux code that is compiled to run
on Power-based servers or LPARs (Logical Partitions).
* The Minimum HMC Code level for this firmware is: HMC V9R1M921 (PTF MH01790).
* Although the Minimum HMC Code level for this firmware is listed above,
HMC V9R2M951.2 (PTF MH01892) or higher is recommended to avoid an issue that
can cause the HMC to briefly lose connections to all servers, with service
events E2FF1409 and E23D040A being reported. This failure causes all running
server tasks, such as a server firmware upgrade, to fail.
For information concerning HMC releases and the latest PTFs, go to the
following URL to access Fix Central:
http://www-933.ibm.com/support/fixcentral/
For specific fix level information on key components of IBM Power Systems
running the AIX, IBM i and Linux operating systems, we suggest using the Fix
Level Recommendation Tool (FLRT):
http://www14.software.ibm.com/webapp/set2/flrt/home
NOTES:
- You must be logged in as hscroot in order for the firmware installation to
complete correctly.
- The Systems Director Management Console (SDMC) does not support this System
Firmware level.
2.0 Important Information
Downgrading firmware from any given release level to an earlier release
level is not recommended.
If you feel that it is necessary to downgrade the firmware on your system to
an earlier release level, please contact your next level of support.
2.1 IPv6 Support and Limitations
IPv6 (Internet Protocol version 6) is supported in the System Management
Services (SMS) in this level of system firmware. There are several limitations
that should be considered. When configuring a network interface card (NIC) for
remote IPL, only the most recently configured protocol (IPv4 or IPv6) is
retained. For example, if the network interface card was previously configured
with IPv4 information and is now being configured with IPv6 information, the
IPv4 configuration information is discarded.
A single network interface card may only be chosen once for the boot device
list. In other words, the interface cannot be configured for the IPv6 protocol
and for the IPv4 protocol at the same time.
2.2 Concurrent Firmware Updates
Concurrent system firmware update is supported on HMC Managed Systems only.
Ensure that there are no RMC connection issues for any system partitions
prior to applying the firmware update. If there is an RMC connection failure to
a partition during the firmware update, the RMC connection will need to be
restored and additional recovery actions for that partition will be required to
complete partition firmware updates.
2.3 Memory Considerations for Firmware Upgrades
Firmware Release Level upgrades and Service Pack updates may consume
additional system memory.
Server firmware requires memory to support the logical partitions on the
server. The amount of memory required by the server firmware varies according
to several factors.
Factors influencing server firmware memory requirements include the following:
* Number of logical partitions
* Partition environments of the logical partitions
* Number of physical and virtual I/O devices used by the logical
partitions
* Maximum memory values given to the logical partitions
Generally, you can estimate the amount of memory required by server firmware
to be approximately 8% of the system installed memory. The actual amount
required will generally be less than 8%. However, there are some server models
that require an absolute minimum amount of memory for server firmware,
regardless of the previously mentioned considerations.
Additional information can be found at:
https://www.ibm.com/support/knowledgecenter/9080-M9S/p9hat/p9hat_lparmemory.htm
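The sizing rule above can be sketched as a small helper. This is an
illustration only, not an IBM sizing tool; the function name is mine, and the
8% figure is the upper-bound rule of thumb from this section.

```python
# Rough upper-bound estimate of server firmware memory, per the ~8% rule of
# thumb above. Actual usage is generally lower, and some server models
# enforce an absolute minimum regardless of configuration.
def firmware_memory_upper_bound_gb(installed_gb, fraction=0.08):
    """Return the approximate worst-case firmware memory use in GB."""
    return installed_gb * fraction
```

For example, a system with 4096 GB installed would reserve at most roughly
327 GB for server firmware under this estimate.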
2.4 NIM install issue using SR-IOV shared mode at FW 920.20, 920.21, and 920.22
A defect in the adapter firmware for the following Feature Codes: EN15,
EN16, EN17, EN18, EN0H, EN0J, EN0K, and EN0L was included in IBM Power Server
Firmware levels 920.20, 920.21, and 920.22. This defect causes attempts to
perform NIM installs using a Virtual Function (VF) to hang or fail.
Circumvention options for this problem can be found at the following link:
http://www.ibm.com/support/docview.wss?uid=ibm10794153
2.5 SBE Updates
Power9 servers contain SBEs (Self-Boot Engines), which are used to boot the
system. An SBE is internal to each of the Power9 chips and is used to "self
boot" the chip. The SBE image is persistent and is only reloaded if there is
a system firmware update that contains an SBE change. If there is an SBE
change and the system firmware update is concurrent, then the SBE update is
delayed to the next IPL of the CEC, which adds an additional 3-5 minutes per
processor chip in the system to the IPL. If there is an SBE change and the
system firmware update is disruptive, then the SBE update will add an
additional 3-5 minutes per processor chip in the system to the IPL. During
the SBE update process, the HMC or op-panel will display service processor code
C1C3C213 for each of the SBEs being updated. This is a normal progress code
and the system boot should not be terminated by the user. The additional time
can be between 12 and 20 minutes per drawer, or up to 48-80 minutes for a
maximum configuration.
The SBE image is only updated with this service pack if the starting firmware
level is less than FW920.40.
----------------------------------------------------------------------------------
3.0 Firmware Information
Use the following examples as a reference to determine whether your
installation will be concurrent or disruptive. For systems that are not managed
by an HMC, the installation of system firmware is always disruptive.
Note: The concurrent levels of system firmware may, on occasion, contain
fixes that are known as Deferred and/or Partition-Deferred. Deferred fixes can
be installed concurrently, but will not be activated until the next IPL.
Partition-Deferred fixes can be installed concurrently, but will not be
activated until a partition reactivate is performed. Deferred and/or
Partition-Deferred fixes, if any, will be identified in the "Firmware Update
Descriptions" table of this document. For these types of fixes (Deferred and/or
Partition-Deferred) within a service pack, only the fixes in the service pack
which cannot be concurrently activated are deferred.
Note: The file names and service pack levels used in the following examples
are for clarification only, and are not necessarily levels that have been, or
will be released.
System firmware file naming convention:
01VHxxx_yyy_zzz
* xxx is the release level
* yyy is the service pack level
* zzz is the last disruptive service pack level
NOTE: Values of service pack and last disruptive service pack level (yyy and
zzz) are only unique within a release level (xxx). For example,
01VH900_040_040 and 01VH910_040_045 are different service packs.
An installation is disruptive if:
* The release levels (xxx) are different. Example:
Currently installed release is 01VH900_040_040, new release is 01VH910_050_050.
* The service pack level (yyy) and the last disruptive service pack level
(zzz) are the same. Example: VH910_040_040 is disruptive, no
matter what level of VH910 is currently installed on the system.
* The service pack level (yyy) currently installed on the system is lower
than the last disruptive service pack level (zzz) of the service pack to be
installed. Example: Currently installed service pack is
VH910_040_040 and new service pack is VH910_050_045.
An installation is concurrent if:
The release level (xxx) is the same, and
The service pack level (yyy) currently installed on the system is the same or
higher than the last disruptive service pack level (zzz) of the service pack to
be installed.
Example: Currently installed service pack is VH910_040_040, new service pack
is VH910_041_040.
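The naming rules above can be sketched as a small helper, shown here as an
illustration only (not an IBM tool; the function and variable names are mine):

```python
# Minimal sketch: classify a firmware installation as disruptive or
# concurrent using the xxx/yyy/zzz rules described above.
import re

def parse_level(name):
    """Parse '01VHxxx_yyy_zzz' (or 'VHxxx_yyy_zzz') into integer fields."""
    m = re.search(r"VH(\d+)_(\d+)_(\d+)", name)
    if not m:
        raise ValueError(f"unrecognized level string: {name}")
    # (release, service pack, last disruptive service pack)
    return tuple(int(g) for g in m.groups())

def is_disruptive(installed, new):
    """Apply the rules above to the installed and new level strings."""
    i_rel, i_pack, _ = parse_level(installed)
    n_rel, n_pack, n_disr = parse_level(new)
    if i_rel != n_rel:
        return True          # release levels (xxx) differ
    if n_pack == n_disr:
        return True          # new pack is itself the last disruptive level
    return i_pack < n_disr   # installed pack older than last disruptive level
```

For instance, `is_disruptive("VH910_040_040", "VH910_041_040")` returns
`False`, matching the concurrent example above.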
3.1 Firmware Information and Description
Filename             Size       Checksum  md5sum
01VH920_123_101.rpm  132136316  17751     112fcb486bcd74415e7df0103e9e2f29
Note: The Checksum can be found by running the AIX sum command against the
rpm file (only the first 5 digits are listed).
For example: sum 01VH920_123_101.rpm
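The md5sum above can also be checked programmatically. The sketch below is an
illustration only, not an IBM tool; the function name is mine, and the file is
assumed to be in the current directory.

```python
# Sketch: verify a downloaded firmware image against the published md5sum.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so a large image need not fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Digest published in the table above:
EXPECTED = "112fcb486bcd74415e7df0103e9e2f29"
# md5_of("01VH920_123_101.rpm") == EXPECTED  ->  True if the download is intact
```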
For Impact, Severity, and other firmware definitions, please refer to the
'Glossary of firmware terms' at the following URL:
http://www14.software.ibm.com/webapp/set2/sas/f/power5cm/home.html#termdefs
The complete Firmware Fix History for this Release Level can be reviewed at
the following url:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VH-Firmware-Hist.html
VH920_123_101 / FW920.60
06/30/20
Impact: Availability  Severity: HIPER
New features and functions
* Support was added for redundant VPD EEPROMs. If the primary module VPD
EEPROM fails, the system will automatically change to the backup module.
* Support was added for real-time data capture for PCIe3 expansion drawer
(#EMX0) cable card connection data via resource dump selector on the HMC or in
ASMI on the service processor. Using the resource selector string of "xmfr
-dumpccdata" will non-disruptively generate an RSCDUMP type of dump file that
has the current cable card data, including data from cables and the retimers.
System firmware changes that affect all systems
* HIPER/Pervasive: A problem was fixed for an HMC "Incomplete" state for a
system after the HMC user password is changed with ASMI on the service
processor. This problem can occur if the HMC password is changed on the
service processor but not also on the HMC, and a reset of the service processor
happens. With the fix, the HMC will get the needed "failed authentication"
error so that the user knows to update the old password on the HMC.
* DEFERRED: A problem was fixed for a processor core failure with SRCs
B150BA3C and BC8A090F logged that deconfigures the entire processor for the
current IPL. A re-IPL of the system will recover the lost processor with only
the bad core guarded.
* A problem was fixed for the green power LED on the System Control Unit
(SCU) not being lit even though the system is powered on. Without the fix, the
LED is always in the off state.
* A rare problem was fixed for a checkstop during an IPL that fails to
isolate and guard the problem core. An SRC is logged with B1xxE5xx and an
extended hex word 8 xxxxDD90. With the fix, the failing hardware is guarded
and a node is possibly deconfigured to allow the subsequent IPLs of the system
to be successful.
* A problem was fixed for a B7006A96 fanout module FPGA corruption error that
can occur in unsupported PCIe3 expansion drawer(#EMX0) configurations that mix
an enhanced PCIe3 fanout module (#EMXH) in the same drawer with legacy PCIe3
fanout modules (#EMXF, #EMXG, #ELMF, or #ELMG). This causes the FPGA on the
enhanced #EMXH to be updated with the legacy firmware and it becomes a
non-working and unusable fanout module. With the fix, the unsupported #EMX0
configurations are detected and handled gracefully without harm to the FPGA on
the enhanced fanout modules.
* A problem was fixed for system memory not returned after create and delete
of partitions, resulting in slightly less memory available after configuration
changes in the systems. With the fix, an IPL of the system will recover any of
the memory that was orphaned by the issue.
* A problem was fixed to allow quicker recovery of PCIe links for the #EMX0
PCIe expansion drawer for a run time fault with B7006A22 logged. The time for
recovery attempts can exceed six minutes on rare occasions which may cause I/O
adapter failures and failed nodes. With the fix, the PCIe links will recover
or fail faster (in the order of seconds) so that redundancy in a cluster
configuration can be used with failure detection and failover processing by
other hosts, if available, in the case where the PCIe links fail to recover.
* A problem was fixed for certain large I/O adapter configurations having the
PCI link information truncated on the PCI-E topology display shown with ASMI
and the HMC. Because of the truncation, individual adapters may be missing on
the PCI-E topology screens.
* A problem was fixed for extraneous B400FF01 and B400FF02 SRCs logged when
moving cables on SR-IOV adapters. This is an infrequent error that can occur
if the HMC performance monitor is running at the same time the cables are
moved. These SRCs can be ignored when accompanied by cable movement.
* A problem was fixed for certain SR-IOV adapters that can have an adapter
reset after a mailbox command timeout error.
This fix updates the adapter firmware to 11.2.211.39 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with CCIN
2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and #EN0K/#EN0L
with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* A problem was fixed for SR-IOV adapters having an SRC B400FF04 logged when
a VF is reset. This is an infrequent issue and can occur for a Live Partition
Mobility migration of a partition or during vNIC (Virtual Network Interface
Controller) failovers where many resets of VFs are occurring. This error is
recovered automatically with no impact on the system.
* A problem was fixed for certain SR-IOV adapters that do not support the
"Disable Logical Port" option from the HMC but the HMC was allowing the user to
select this, causing incorrect operation. The invalid state of the logical
port causes an "Enable Logical Port" to fail in a subsequent operation. With
the fix, the HMC provides the message that the "Disable Logical Port" is not
supported for the adapter. This affects the adapters with the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with CCIN
2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and #EN0K/#EN0L
with CCIN 2CC1.
* A problem was fixed for a loss of service processor redundancy after a
failover to the backup on a Hostboot IPL error. Although the failover is
successful to the backup service processor, the primary service processor may
terminate. The service processor can be recovered from termination by using a
soft reset from ASMI.
* A problem was fixed for an IPL failure with the following possible SRCs
logged: 11007611, 110076x1, 1100D00C, and 110015xx. The service processor may
reset/reload for this intermittent error and end up in the termination state.
* A problem was fixed for Trusted Platform Module (TPM) hardware failures not
causing SRCs to be logged with a call out if the system is configured in ASMI to
not require TPM for the IPL. If this error occurs, the user would not find out
about it until they needed to run with TPM on the IPL. With the fix, the error
logs and notifications will occur regardless of how the TPM is configured.
* A problem was fixed for an intermittent IPL failure calling out the system
planar. There is no hardware error here and another IPL of the system should
be successful. With the fix, a corrupt error message in hostboot from a prior
power off of the system is not allowed to be processed, so the next IPL of the
system is not adversely affected.
* A problem was fixed for an intermittent IPL failure with SRC B181E540
logged with fault signature " ex(n2p1c0) (L2FIR[13]) NCU Powerbus data
timeout". No FRU is called out. The error may be ignored and the re-IPL is
successful. The error occurs very infrequently. This is the second iteration
of the fix that has been released. Expedient routing of the Powerbus
interrupts did not occur in all cases in the prior fix, so the timeout problem
was still occurring.
* A problem was fixed for clock card errors not being called out in the error
log when the primary clock card fails. This problem makes it more difficult
for the system user to be aware that clock card redundancy has been lost, and
that service is needed to restore the redundancy.
* A problem was fixed for a failed clock card causing a node to be guarded
during the IPL of a multi-node system. With the fix, the redundant clock card
allows all the nodes to IPL in the case of a single clock card failure.
* A problem was fixed for the REST/Redfish interface to change the success
return code for object creation from "200" to "201". The "200" status code
means that the request was received and understood and is being processed. A
"201" status code indicates that a request was successful and, as a result, a
resource has been created. The Redfish Ruby Client, "redfish_client" may fail
a transaction if a "200" status code is returned when "201" is expected.
* A problem was fixed for a hypervisor error during system shutdown where a
B7000602 SRC is logged and the system may also briefly go "Incomplete" on the
HMC but the shutdown is successful. The system will power back on with no
problems so the SRC can be ignored if it occurred during a shutdown.
* A problem was fixed for utilization statistics for commands such as HMC
lslparutil and third-party lpar2rrd that do not accurately represent CPU
utilization. The values are incorrect every time for a partition that is
migrated with Live Partition Mobility (LPM). Power Enterprise Pools 2.0 is not
affected by this problem. If this problem has occurred, here are three possible
recovery options:
1) Re-IPL the target system of the migration.
2) Or delete and recreate the partition on the target system.
3) Or perform an inactive migration of the partition. The cycle values get
zeroed in this case.
System firmware changes that affect certain systems
* On systems with an IBM i partition, a problem was fixed that occurs after a
Live Partition Mobility (LPM) of an IBM i partition that may cause issues
including dispatching delays and the inability to do further LPM operations of
that partition. The frequency of this problem is rare. A partition
encountering this error can be recovered with a reboot of the partition.
* On systems with an IBM i partition, a problem was fixed for a possibly
incorrect number of Memory COD (Capacity On Demand) resources shown when
gathering performance data with IBM i Collection Services. Memory resources
activated by Power Enterprise Pools (PEP) 1.0 will be missing from the data.
An error was corrected in the IBM i MATMATR option 0X01F6 that retrieves the
Memory COD information for the Collection Services.
* On systems with Integrated Facility for Linux (IFL) processors and
Linux-only partitions, a problem was fixed for Power Enterprise Pools (PEP) 1.0
not going back into "Compliance" when resources are moved from Server 1 to
Server 2, causing an expected "Approaching Out Of Compliance", but not
automatically going back into compliance when the resources are no longer used
on Server 1. As a circumvention, the user can do an extra "push" and "pull" of
one resource to make the Pool discover it is back in "Compliance".
* On systems with an IBM i partition, a problem was fixed for a dedicated
memory IBM i partition running in P9 processor compatibility mode failing to
activate with HSCL1552 "the firmware operation failed with extended error".
This failure only occurs under a very specific scenario - the new amount of
desired memory is less than the current desired memory, and the Hardware Page
Table (HPT) size needs to grow.
VH920_118_101 / FW920.50
11/21/19
Impact: Availability  Severity: HIPER
System firmware changes that affect all systems
* HIPER/Pervasive: A problem was fixed for a possible system crash and HMC
"Incomplete" state when a logical partition (LPAR) power off after a dynamic
LPAR (DLPAR) operation fails for a PCIe adapter. This scenario is likely to
occur during concurrent maintenance of PCIe adapters or for #EMX0 components
such as PCIe3 Cable adapters, Active Optical or copper cables, fanout modules,
chassis management cards, or midplanes. The DLPAR fail can leave page table
mappings active for the adapter, causing the problems on the power down of the
LPAR. If the system does not crash, the DLPAR will fail if it is retried until
a platform IPL is performed.
* DEFERRED: A problem was fixed for rare system checkstops triggered by SMP
cable failure or when one of the cables is not properly secured in place.
* A problem was fixed for PLL unlock error with SRC B124E504 causing a
secondary error of PRD Internal Firmware Software Fault with SRC B181E580 and
incorrect FRU call outs.
* A problem was fixed for an Operations Panel hang after using it to set LAN
Console as the console type for several iterations. After several iterations,
the op panel may hang with "Function 41" displayed on the op panel. A hot
unplug and plug of the op panel can be used to recover it from the hang.
* A problem was fixed for an SRC reminder that keeps repeating for B150F138
even after a UPIC cable has been repaired or replaced. Without the fix, a
hot-plug of a UPIC cable while the system is running will not get verified as
the cable being fixed until the system is re-IPLed, so the initial error SRC
for the missing or bad UPIC cable will be posted repeatedly until the re-IPL
occurs.
* A problem was fixed for a bad SMP cable causing a B114DA62 SRC to be logged
without a correct FRU call out. Without the fix, IBM support may be needed to
isolate to the bad part that needs to be replaced.
* A problem was fixed for Novalink failing to activate partitions that have
names with character lengths near the maximum allowed character length. This
problem can be circumvented by changing the partition name to have 32
characters or less.
* A problem was fixed for a possible system crash with SRC B7000103 if the
HMC session is closed while the performance monitor is active. As a
circumvention for this problem, make sure the performance monitor is turned off
before closing the HMC sessions.
* A problem was fixed for a Live Partition Mobility (LPM) migration of a large
memory partition to a target system that causes the target system to crash and
the HMC to go to the "Incomplete" state. For servers with the default LMB
size (256 MB), if the partition is >=16 TB and the desired memory differs from
the maximum memory, LPM may fail on the target system. Servers with LMB sizes
less than the default could hit this problem with smaller memory partition
sizes. A circumvention to the problem is to set the desired and maximum memory
to the same value for the large memory partition that is to be migrated.
* A problem was fixed for an intermittent IPL failure with SRC B181E540
logged with fault signature " ex(n2p1c0) (L2FIR[13]) NCU Powerbus data
timeout". No FRU is called out. The error may be ignored and the re-IPL is
successful. The error occurs very infrequently.
* A problem was fixed for a rare SMP link initialization failure during an
IPL reported with either SRC B114DA62 during IPL or BC14E540 immediately after
IPL. Without the fix, a system re-IPL is required to recover operation of the
SMP cable.
* A problem was fixed for persistent high fan speeds in the system after a
service processor failover. To restore the fans to normal speed without
re-IPLing the system requires the following steps:
1) Use ASMI to perform a soft reset of the backup service processor.
2) When the backup service processor has completed its reset, use the HMC to
do an administrative failover, so that the reset service processor becomes the
primary.
3) Use ASMI to perform a soft reset on the new backup service processor.
When this has completed, system fan speeds should be back to normal.
* A problem was fixed for a rare IPL failure with SRCs BC8A090F and BC702214
logged caused by an overflow of VPD repair data for the processor cores. A
re-IPL of the system should recover from this problem.
* A problem was fixed for an IPL failure after installing DIMMs of different
sizes, causing memory access errors. Without the fix, the memory configuration
should be restored to only use DIMMs of the same size.
* A problem was fixed for certain SR-IOV adapters with the following issues:
1) If the SR-IOV logical port's VLAN ID (PVID) is modified while the logical
port is configured, the adapter will use an incorrect PVID for the Virtual
Function (VF). This problem is rare because most users do not change the PVID
once the logical port is configured, so they will not have the problem.
2) Adapters failing with error1=00007410 and error2=00000000.
This fix updates the adapter firmware to 11.2.211.38 for the following
Feature Codes and CCINs: #EN15/EN16 with CCIN 2CE3, #EN17/EN18 with CCIN 2CE4,
#EN0H/EN0J with CCIN 2B93, #EN0M/EN0N with CCIN 2CC0, and #EN0K/EN0L with CCIN
2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* A problem was fixed for a memory DIMM plugging rule violation that causes
the IPL to terminate with an RC_GET_MEM_VPD_UNSUPPORTED_CONFIG error log
that calls out the memory port but has no DIMM call outs and no DIMM
deconfigurations are done. With the fix, the DIMMs that violate the plugging
rules will be deconfigured and the IPL will complete. Without the fix, the
memory configuration should be restored to the prior working configuration to
allow the IPL to be successful.
* A problem was fixed for an initialization failure of certain SR-IOV
adapters when changed into SR-IOV mode. This is an infrequent problem that
most likely can occur following a concurrent firmware update when the adapter
also needs to be updated. This problem affects the SR-IOV adapter with the
following feature codes and CCINs: #EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with
CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and #EC66/EC67 with CCIN 2CF3. This
problem can be recovered by removing the adapter from SR-IOV mode and putting
it back in SR-IOV mode, or the system can be re-IPLed.
* A problem was fixed for possible system crash on a logical partition (LPAR)
power off if a DLPAR fails for a PCIe adapter. The DLPAR fail may leave page
table mappings active for the adapter, causing the problems on the power down
of the LPAR. If the system does not crash, the DLPAR may continue to fail if
it is retried until a re-IPL of the system is done.
* A problem was fixed for lost interrupts that could cause device time-outs
or delays in dispatching a program process. This can occur during memory
operations that require a memory relocation for any partition such as mirrored
memory defragmentation done by the HMC optmem command, or memory guarding that
happens as part of memory error recovery during normal operations of the system.
* A problem was fixed for delayed interrupts on a Power9 system following a
Live Partition Mobility operation from a Power7 or Power8 system. The delayed
interrupts could cause device time-outs, program dispatching delays, or other
device problems on the target Power9 system.
* A problem was fixed for processor cores not being able to be used by
dedicated processor partitions if they were DLPAR removed from a dedicated
processor partition. This error can occur if there was a firmware assisted
dump or a Live Partition Mobility (LPM) operation after the DLPAR of the
processor. A re-IPL of the system will recover the processor cores.
* A problem was fixed where a Linux or AIX partition type was incorrectly
reported as unknown. Symptoms include: IBM Cloud Management Console (CMC) not
being able to determine the RPA partition type (Linux/AIX) for partitions that
are not active; and HMC attempts to dynamically add CPU to Linux partitions may
fail with a HSCL1528 error message stating that there are not enough Integrated
Facility for Linux (IFL) cores for the operation.
* On systems with 16GB huge-pages, a problem was fixed for certain SR-IOV
adapters with all or nearly all memory assigned to them preventing a system
IPL. This affects the SR-IOV adapters with the following feature codes and
CCINs: #EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M
with CCIN 2CEC; and #EC66/EC67 with CCIN 2CF3. The problem can be
circumvented by powering off the system and turning off all the huge-page
allocations.
* A problem was fixed for a SR-IOV adapter failure with B400FFxx errors
logged when moving the adapter to shared mode. This is an infrequent race
condition where the adapter is not yet ready for commands and it can also occur
during EEH error recovery for the adapter. This affects the SR-IOV adapters
with the following feature codes and CCINs: #EC2R/EC2S with CCIN 58FA;
#EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and #EC66/EC67 with
CCIN 2CF3.
System firmware changes that affect certain systems
* On systems with IBM i partitions, a problem was fixed that was allowing
V7R1 to boot on or be migrated to POWER9 servers. As documented in the System
Software maps for IBM i (
https://www-01.ibm.com/support/docview.wss?uid=ssm1platformibmi), V7R1 IBM i
software is not supported on POWER9 servers.
* On systems with IBM i partitions, a problem was fixed for an LPAR restart
error after a DLPAR of an active adapter was performed and the LPAR was shut
down. A reboot of the system will recover the LPAR so it will start.
* On systems with IBM i partitions, a rare problem was fixed for a failure of
a DLPAR remove of an adapter. In most cases, a retry of the operation will be
successful.
* On systems with an IBM i partition, a problem was fixed for a D-mode IPL
failure when using a USB DVD drive in an IBM 7226 multimedia storage
enclosure. Error logs with SRC BA16010E, B2003110, and/or B200308C can occur.
As a circumvention, an external DVD drive can be used for the D-mode IPL.
VH920_112_101 / FW920.40
08/06/19 Impact: Data Severity: HIPER
New features and functions
* An option was added to the SMS Remote IPL (RIPL) menus to enable or disable
the UDP checksum calculation for any device type. Previously, this checksum
option was only available for logical LAN devices, but now it is extended to
all device types. The default is for the UDP checksum calculation to be done,
but if this calculation causes errors for the device, it can be turned off with
the new option.
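For reference, the UDP checksum that this SMS option controls is the standard one's-complement checksum defined by RFC 768, computed over an IP pseudo-header plus the UDP segment. A minimal sketch in Python (illustrative only; the addresses and ports used are hypothetical and not taken from this firmware):

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum over data (zero-padded to even length)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """RFC 768 checksum over the IPv4 pseudo-header plus the UDP segment."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    csum = ~ones_complement_sum(pseudo + udp_segment) & 0xFFFF
    return csum or 0xFFFF  # a computed zero is transmitted as all ones
```

A computed checksum of zero is transmitted as all ones, because an all-zero checksum field means "no checksum" in UDP over IPv4; that is what makes disabling the calculation possible for devices that mishandle it.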
System firmware changes that affect all systems
* HIPER/Pervasive: A change was made to fix an intermittent processor
anomaly that may result in issues such as operating system or hypervisor
termination, application segmentation fault, hang, or undetected data
corruption. The only issues observed to date have been operating system or
hypervisor terminations.
* DEFERRED: A problem was fixed for not being able to do a HMC exchange FRU
for the PCIe cassette in P1-C3 if a PCIe to USB conversion card (CCIN 6B6C) is
not installed in P1-C13. In this situation, the P1-C3 location is not provided
in the FRU selection list. An alternative procedure to accomplish the same
task would be to do an exchange FRU on the PCIe adapter P1-C3-C1 in the PCIe
cassette.
* DEFERRED: PARTITION_DEFERRED: A problem was fixed for repeated CPU DLPAR
remove operations by Linux (Ubuntu, SUSE, or RHEL) OSes possibly resulting in a
partition crash. No specific SRCs or error logs are reported. The problem
can occur on any DLPAR CPU remove operation if running on Linux. The
occurrence is intermittent and rare. The partition crash may result in one or
more of the following console messages (in no particular order):
1) Bad kernel stack pointer addr1 at addr2
2) Oops: Bad kernel stack pointer
3) ******* RTAS CALL BUFFER CORRUPTION *******
4) ERROR: Token not supported
This fix does not activate until there is a reboot of the partition.
* A problem was fixed for a concurrent firmware update failure with SRC
B7000AFF logged. This is a rare problem triggered by a power mode change
preceding a concurrent firmware update. To recover from this problem, run the
code update again without any power mode changes.
* A problem was fixed for informational logs flooding the error log if a "Get
Sensor Reading" is not working.
* A problem was fixed for the Advanced System Management Interface (ASMI)
showing "Unknown" in the Deconfiguration records if an SMP group (SMPGROUP)
unit is guarded. With the fix, "OBUS End Point" will be displayed instead of
"Unknown".
* A problem was fixed for a SMP half-link failure with SRC BC14E540 reported
as a recovered error not creating a service event with a Predictive Error
callout. This made the system vulnerable to a system failure if the other half
link of the SMP cable failed at a later time.
* A problem was fixed for a concurrent firmware hang with SRC B1813450
logged. This is a rare problem triggered by an error or power mode change that
requires a Power Management (PM) Complex Reset. To recover from this problem,
re-IPL the system and it will be running at the target firmware update level.
* A problem was fixed for a user-initiated system dump going to termination
state, with the problem occurring on systems with a huge amount of memory,
such as 64 TB. This failure causes a loss of debug data for the system dump.
* A problem was fixed for shared processor pools where uncapped shared
processor partitions placed in a pool may not be able to consume all available
processor cycles. The problem may occur when the sum of the allocated
processing units for the pool member partitions equals the maximum processing
units of the pool.
* A problem was fixed for an outage of I/O connected to a single PCIe Host
Bridge (PHB) with a B7006970 SRC logged. With the fix, the rare PHB fault will
have an EEH event detected and recovered by firmware.
* A problem was fixed for partitions becoming unresponsive or the HMC not
being able to communicate with the system after a processor configuration
change or a partition power on and off.
* A problem was fixed for a concurrent firmware update error with SRC
B7000AFF logged. This is a rare problem triggered by an error or power mode
change that requires a Power Management (PM) Complex Reset. To recover from
this problem, re-IPL the system and it will be running at the target firmware
update level.
* A problem was fixed for possible abnormal terminations of programs on
partitions running in POWER7 or POWER8 compatibility mode.
* A problem was fixed for a possible activation code memory conversion
sequence number error when creating a Power Enterprise Pool (PEP) 1.0 Pool for
a set of servers. This can happen if Perm Memory activations were purchased
local to a server but then needed to be converted from Perm MEM to Mobile PEP
Mem state for pool use. The deployment of the PEP fails with the following
messages on the HMC:
1) HSCL9017 HSCL0521 A Mobile CoD memory conversion code to convert 100 GB
of permanently activated memory to Mobile CoD memory on the managed system has
been entered. The sequence number of the CoD code indicates that this code has
been used before. Obtain a new CoD code and try again.
2) HSCL9119 The Mobile CoD memory activation code for the Power enterprise
pool was not entered because a permanent to Mobile CoD memory conversion code
for a server could not be entered.
To recover from this error, request a new XML file from IBM with an updated
Memory Conversion activation code.
* A problem was fixed for a hypervisor hang that can occur on the target side
when doing a Live Partition Mobility (LPM) migration from a system that does
not support encryption and compression of LPM data. If the hang occurs, the
HMC will go to an "Incomplete" state for the target system. The problem is
rare because the data from the source partition must be in a very specific
pattern to cause the fail. When the failure occurs, a B182951C will be logged
on the target (destination) system and the HMC for the source partition will
issue the following message: "HSCLA318 The migration command issued to the
destination management console failed with the following error: HSCLA228 The
requested operation cannot be performed because the managed system is not in the Standby or Operating state.". To recover, the target
system must be re-IPLed.
* A problem was fixed for an initialization failure of an SR-IOV adapter port
during its boot, causing a B400FF02 SRC to be logged. This is a rare problem
and it recovers automatically by the reboot of the adapter on the error.
* A problem was fixed for SR-IOV adapter Virtual Functions (VFs) that can
fail to restore to their configuration after a low-level EEH error, causing
loss of function for the adapter. This problem can occur if a configuration
other than the default NIC VF configuration was selected when the VF was
created. The problem occurs every time for VFs configured as RDMA over
Converged Ethernet (RoCE), but much less frequently and intermittently for
other non-default VF configurations.
* A problem was fixed which caused network traffic failures for Virtual
Functions (VFs) operating in non-promiscuous multicast mode. In
non-promiscuous mode, when a VF receives a frame, it will drop it unless the
frame is addressed to the VF's MAC address, or is a broadcast or multicast
addressed frame. With the problem, the VF drops the frame even though it is
multicast, thereby blocking the network traffic, which can result in ping
failures and impact other network operations. To recover from the issue, turn
multicast promiscuous on. This may cause some unwanted multicast traffic to
flow to the partition.
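The receive-filter rule described above can be sketched as a small predicate (an illustrative model only, not the adapter's actual firmware logic): in non-promiscuous mode, a frame is accepted only if its destination MAC matches the VF's own address, is the broadcast address, or has the multicast group bit set.

```python
BROADCAST = b"\xff" * 6

def accept_frame(dst_mac: bytes, vf_mac: bytes) -> bool:
    """Model of a VF's non-promiscuous receive filter decision."""
    if dst_mac == vf_mac:       # unicast addressed to this VF
        return True
    if dst_mac == BROADCAST:    # broadcast frame
        return True
    return bool(dst_mac[0] & 0x01)  # multicast: I/G bit of first octet set
```

The defect was in the last branch: multicast-addressed frames were dropped as if they were foreign unicast, which is why turning on multicast promiscuous mode worked around the problem.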
* A problem was fixed for a boot failure using a N_PORT ID Virtualization
(NPIV) LUN for an operating system that is installed on a disk of 2 TB or
greater, and having a device driver for the disk that adheres to a non-zero
allocation length requirement for the "READ CAPACITY 16". The IBM partition
firmware had always used an invalid zero allocation length for the return of
data, and that had been accepted by previous device drivers. Some newer device
drivers now adhere to the specification and require a non-zero allocation
length to allow the boot to proceed.
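For context, READ CAPACITY (16) is issued as a SERVICE ACTION IN (16) command whose CDB carries a 4-byte allocation length in bytes 10-13; a zero allocation length asks the device to return no parameter data. A hedged sketch of building such a CDB follows (the 32-byte default is the size of the standard parameter data and is used here purely for illustration):

```python
import struct

def read_capacity_16_cdb(allocation_length: int = 32) -> bytes:
    """Build a 16-byte READ CAPACITY (16) CDB (SERVICE ACTION IN (16))."""
    cdb = bytearray(16)
    cdb[0] = 0x9E                # operation code: SERVICE ACTION IN (16)
    cdb[1] = 0x10                # service action: READ CAPACITY (16)
    cdb[10:14] = struct.pack("!I", allocation_length)  # bytes 10-13
    return bytes(cdb)
```

The 16-byte form matters for the disks described above because READ CAPACITY (10) returns only a 32-bit LBA, which cannot describe a disk of 2 TB or greater with 512-byte blocks.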
* A problem was fixed for a possible boot failure from a ISO/IEC 13346
formatted image, also known as Universal Disk Format (UDF).
UDF is a profile of the specification known as ISO/IEC 13346 and is an open
vendor-neutral file system for computer data storage for a broad range of media
such as DVDs and newer optical disc formats. The failure is infrequent and
depends on the image. In rare cases, the boot code erroneously fails to find a
file in the current directory. If the boot fails on a specific image, the boot
of that image will always fail without the fix.
* A problem was fixed for broadcast bootp installs or boots that fail with a
UDP checksum error.
* A problem was fixed for failing to boot from an AIX mksysb backup on a USB
RDX drive with SRCs logged of BA210012, AA06000D, and BA090010. The boot error
does not occur if a serial console is used to navigate the SMS menus.
* A problem was fixed for possible loss of mainstore memory dump data for
system termination errors.
* A problem was fixed for an intermittent IPL failure with B181345A,
B150BA22, BC131705, BC8A1705, or BC81703 logged with a processor core called
out. This is a rare error and does not have a real hardware fault, so the
processor core can be unguarded and used again on the next IPL.
* A problem was fixed for two false UE SRCs of B1815285 and B1702A03 possibly
being logged on the first IPL of a 2-node system. A VPD timing error can cause
a 2-node system to be misread as a 4-node, causing the false SRCs. This can
only occur on the first IPL of the system.
* A problem was fixed for a processor core fault in the early stages of the
IPL that causes the service processor to terminate. With the fix, the system
is reconfigured to remove the bad core and the system is IPLed with the
remaining processor cores.
* A problem was fixed for a drift in the system time (time lags and the clock
runs slower than the true value of time) that occurs when the system is powered
off to the service processor standby state. To recover from this problem, the
system time must be manually corrected using the Advanced System Management
Interface (ASMI) before powering on the system. The time lag increases in
proportion to the duration of time that the system is powered off.
* A problem was fixed for a local clock card (LCC) failure that results in a
failed service processor failover and a system that does not IPL or takes
several hours to IPL. With the fix, missing local clock card data is made
available to the backup service processor so that the failover can succeed,
allowing the system to IPL.
* A problem was fixed for hypervisor tasks getting deadlocked that cause the
hypervisor to be unresponsive to the HMC (this shows as an incomplete state on
the HMC) with SRC B200F011 logged. This is a rare timing error. With this
problem, OS workloads will continue to run but it will not be possible for the
HMC to interact with the partitions. This error can be recovered by doing a
re-IPL of the system with a scheduled outage.
* A problem was fixed for eight or more simultaneous Live Partition Mobility
(LPM) migrations to the same system possibly failing in validation with the HMC
error message of "HSCL0273 A command that was targeted to the managed system
has timed out". The problem can be circumvented by doing the LPM migrations to
the same system in smaller batches.
* A problem was fixed for a system IPLing with an invalid time set on the
service processor that causes partitions to be reset to the Epoch date of
01/01/1970. With the fix, on the IPL, the hypervisor logs a B700120x when the
service processor real time clock is found to be invalid and halts the IPL to
allow the time and date to be corrected by the user. The Advanced System
Management Interface (ASMI) can be used to correct the time and date on the
service processor. On the next IPL, if the time and date have not been
corrected, the hypervisor will log SRC B7001224 (indicating the user was
warned on the last IPL) and allow the partitions to start, but the time and
date will be set to the Epoch value.
* A problem was fixed for the Advanced System Management Interface (ASMI)
menu for "PCIe Hardware Topology/Reset link" showing the wrong value. This
value is always wrong without the fix.
* A problem was fixed for SR-IOV adapters to provide a consistent
Informational message level for cable plugging issues. For transceivers not
plugged on certain SR-IOV adapters, an unrecoverable error (UE) SRC B400FF03
was changed to an Informational message logged. This affects the SR-IOV
adapters with the following feature codes: EC2R, EC2S, EC2T, EC2U, and EC3L.
For copper cables unplugged on certain SR-IOV adapters, a missing message was
replaced with an Informational message logged. This affects the SR-IOV
adapters with the following feature codes: EN17, EN0K, and EN0L.
* A problem was fixed for incorrect Centaur DIMM callouts for DIMM over
temperature errors. The error log for the DIMM over temperature will have
incorrect FRU callouts, either calling out the wrong DIMM or the wrong Centaur
memory buffer.
System firmware changes that affect certain systems
* On systems with PCIe3 expansion drawers (feature code #EMX0), a problem was
fixed for a concurrent exchange of a PCIe expansion drawer cable card that,
although successful, leaves the fault LED turned on.
* On systems with IBM i partitions, a problem was fixed for Live Partition
Mobility (LPM) migrations that could have incorrect hardware resource
information (related to VPD) in the target partition if a failover had occurred
for the source partition during the migration. This failover would have to
occur during the Suspended state of the migration, which only lasts about a
second, so this should be rare. With the fix, at a minimum the migration error
will be detected to abort the migration so it can be restarted. And at a later
IBM i OS level, the fix will allow the migration to complete even though the
failover has occurred during the Suspended state of the migration.
* On systems running IBM i partitions, a problem was fixed for IBM i
collection services that may produce incorrect instruction count results.
* On systems using Utility COD, a problem was fixed for "Shared Processor
Utilization Data" showing a too-large number of Non-Utility processors, many
more than are even installed. This incorrect information can prevent billing
for the use of the Utility Processors.
VH920_101_101 / FW920.30
03/08/19 Impact: Data Severity: HIPER
New features and functions
* The Operations Panel was enhanced to display "Disruptive" warning for
control panel operations that would disturb a running system. For example,
control panel function "03" is used to re-IPL the system and would get the
warning message to alert the operator that the system could be impacted.
* A new SRC of B7006A74 was added for PHB LEM 62 errors that had surpassed a
threshold in the path of the #EMX0 expansion drawer. This replaces the SRC
B7006A72 to have a correct callout list. Without the feature, when B7006A72 is
logged against a PCIe slot in the CEC containing a cable card, the FRUs in the
full #EMX0 expansion drawer path should be considered (use the B7006A8B FRU
callout list as a reference).
System firmware changes that affect all systems
* HIPER/Pervasive: DISRUPTIVE: A problem was fixed where, under certain
conditions, a Power Management Reset (PM Reset) event may result in undetected
data corruption. PM Resets occur under various scenarios such as SMP cable
failures, power management mode changes between Dynamic Performance and Maximum
Performance, Concurrent FW updates, power management controller recovery
procedures, or system boot.
* DEFERRED: A problem was fixed to reduce the frequency of PCIe errors from
the processors to the NVME drives during the IPL and run-time operations. This
fix requires an IPL to activate it after it is applied. The errors being
reduced occur very infrequently, so it is not urgent to re-IPL the system to
activate the fix if the system is running without errors.
* A problem was fixed for a loss of service processor redundancy if an
attempt is made to boot from a corrupted flash side on the primary service
processor. Although the primary service processor recovers, the backup service
processor ends up stuck in the IPLing state. The backup service processor must
be reset to recover from the IPL hang and restore service processor redundancy.
* A problem was fixed for not being able to concurrently add the PCIe to USB
conversion card with CCIN 6B6C. The Vital Product Data (VPD) for the new FRU
is not updated in the system, so the added part is not functional until the
system is re-IPLed.
* A problem was fixed for a system mis-configured with a mix of DDR3 and DDR4
DIMMs in the same node failing without callouts for the problem DIMMs. The
system fails with SRC B181BAD4. With the fix, for a system with multiple
nodes, the node with the DIMM mix will fail and be guarded but the other nodes
are able to IPL. And an SRC is logged that provides a list of the problem
DIMMs in the failed node so they can be guarded or physically removed.
* A problem was fixed for a missing or failed service processor T3 cable
causing the IPL to fail instead of recovering with a failover to the backup
service processor.
* A problem was fixed for failed hardware such as a clock card causing the
service processor to have slow performance. This might be seen if a hardware
problem occurs and the service processor appears to be hanging while error logs
are collected.
* A problem was fixed for an IPL failing with B7000103 if there is an error
in a PCIe Host Bridge (PHB). With the fix, the IPL is allowed to complete but
there may be failed I/O adapters if the errant PHB is populated with PCIe
adapters.
* A problem was fixed for a hypervisor task getting deadlocked if partitions
are powered on at the same time that SR-IOV is being configured for an
adapter. With this problem, workloads will continue to run but it will not be
possible to change the virtualization configuration or power partitions on and
off. This error can be recovered by doing a re-IPL of the system.
* A problem was fixed for I/O adapters not recovering from low-level EEH
errors, resulting in a Permanent EEH error with SRC B7006971 logged. These
errors can occur during memory relocation in parallel with heavy I/O traffic.
The affected adapters can be recovered by a re-IPL of the system.
* A problem was fixed for an unexpected Core Watchdog error during a
reset of the service processor with SRC B150B901 logged. With enough
service processor resets in a row, it is possible for the service processor to
go to a failed state with SRC B1817212 on systems with a single service
processor. On systems with redundant service processors, the failed service
processor would get guarded with a B151E6D0 or B152E6D0 SRC depending on which
service processor fails. The hypervisor and the partition workloads would
continue to run in these cases of failed service processors.
* A problem was fixed for an intermittent IPL failure with BC131705 and
BC8A1703 logged with a processor core called out. This is a rare error and
does not have a real hardware fault, so the processor core can be unguarded and
used again on the next IPL.
* A problem was fixed for DDR4 2933 MHz and 3200 MHz DIMMs not defaulting to
the 2666 MHz speed on a new DIMM plug, thus preventing the system from IPLing.
* A problem was fixed for extra error logs that can occur during Host
Initiated Failover for SRC B1504803 and a variety of power-related SRCs such as
11007611, 11007621, and 11007631. These extra error logs can be ignored as
they are side effects of the service processor failover and not new errors.
* A problem was fixed for the Call Home menu option being displayed in the
Advanced System Management Interface (ASMI). This function is not valid for
this system and should not be shown.
* A problem was fixed for the Advanced System Management Interface SMP cable
operation repair using the Chrome browser. If the "Continue" button in the
menu is pressed multiple times before the SMP operation completes, the
operation will fail with an SRC of B111BA24 logged. To circumvent this
problem, the user should only click the "Continue" button once and wait
until the result of the operation is displayed.
* A problem was fixed for a PCIe Hub checkstop with SRC B138E504 logged that
fails to guard the errant processor chip. With the fix, the problem hardware
FRU is guarded so there is not a recurrence of the error on the next IPL.
* A problem was fixed for a VRM error for a Self Boot Engine (SBE) that
caused the system to go to terminate state after the error rather than
re-IPLing to run-time. A re-IPL will recover the system.
* A problem was fixed for a boot device hang, leading to a long time-out
condition before the service processor gives up. This problem has a very low
frequency and a re-IPL is normally successful to recover the system.
* A problem was fixed for deconfigured FRUs that showed as Unit Type of
"Unknown" in the Advanced System Management Interface (ASMI). The following
FRU type names will be displayed if deconfigured (shown here is a description
of the FRU type as well):
DMI: Processor to Memory Buffer Interface
MC: Memory Controller
MFREFCLK: Multi Function Reference Clock
MFREFCLKENDPT: Multi function reference clock end point
MI: Processor to Memory Buffer Interface
NPU: Nvidia Processing Unit
OBUS_BRICK: OBUS
SYSREFCLKENDPT: System reference clock end point
TPM: Trusted Platform Module
* A problem was fixed for shared processor partitions going unresponsive
after changing the processor sharing mode of a dedicated processor partition
from "allow when partition is active" to either "allow when partition is
inactive" or "never". This problem can be circumvented by avoiding disabling
processor sharing when active on a dedicated processor partition. To recover
the partition if the issue has been encountered, enable "processor sharing when
active" for the partition.
* A problem was fixed for hypervisor error logs issued during the IPL missing
the firmware version. This happens on every IPL for logs generated during the
early part of the IPL.
* A problem was fixed for continuous logging of B7006A28 SRCs after the
threshold limit of PCIe Advanced Error Reporting (AER) correctable errors is
reached. The error log flooding can cause error buffer wrapping and other
performance issues.
* A problem was fixed for an error in deleting a partition with the
virtualized Trusted Platform Module (vTPM) enabled and SRC B7000602 logged.
When this error occurs, the encryption process in the hypervisor may become
unusable. The problem can be recovered from with a re-IPL of the system.
* A problem was fixed in Live Partition Mobility (LPM) of a partition to a
shared processor pool, which results in the partition being unable to consume
uncapped cycles on the target system. To prevent the issue from occurring,
partitions can be migrated to the default shared processor pool and then
dynamically moved to the desired shared processor pool. To recover from the
issue, use DLPAR to add or remove a virtual processor to/from the affected
partition, dynamically move the partition between shared processor pools,
reboot the partition, or re-IPL the system.
* A problem was fixed for informational (INF) errors for the PCIe Host Bridge
at a threshold limit causing the I/O slots to go non-operational. The system
I/O can be recovered with a re-IPL.
* A problem was fixed for the HMC in some instances reporting a VIOS
partition as an AIX partition. The VIOS partition can be used correctly even
when it is misidentified.
* A problem was fixed for errors in the PHB performance counters collected by
the 24x7 performance monitor.
* A problem was fixed for certain SR-IOV adapters where SRC B400FF01 errors
are seen during configuration of the adapter into SR-IOV mode or updating
adapter firmware.
This fix updates the adapter firmware to 11.2.211.37 for the following
Feature Codes: EN15, EN17, EN0H, EN0J, EN0M, EN0N, EN0K, and EN0L.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* A problem was fixed for an intermittent failure of a PCIe adapter during an
IPL. This problem only happens rarely and a re-IPL of the system will recover
the PCIe adapter.
* A problem was fixed for a system terminating if there was even one
predictive or recoverable SRC. For this problem, all hardware SRCs logged are
treated as terminating SRCs. For this behavior to occur, the initial service
processor boot from the AC power off state failed to complete cleanly, instead
triggering an internal reset (a rare error), leaving some parts of the service
processor not initialized. This problem can be recovered by doing an AC power
cycle, or concurrently on an active system with the assistance of IBM support.
* A security problem was fixed in the service processor OpenSSL support that
could cause secured sockets to hang, disrupting HMC communications for system
management and partition operations. The Common Vulnerabilities and Exposures
issue number is CVE-2018-0732.
* A security problem was fixed in the service processor Network Security
Services (NSS) services which, with a man-in-the-middle attack, could provide
false completion or errant network transactions or exposure of sensitive data
from intercepted SSL connections to ASMI, Redfish, or the service processor
message server. The Common Vulnerabilities and Exposures issue number is
CVE-2018-12384.
* A security problem was fixed in the service processor TCP stack that would
allow a Denial of Service (DOS) attack with TCP packets modified to trigger
time and calculation expensive calls. By sending specially modified packets
within ongoing TCP sessions with the Management Consoles, this could lead to a
CPU saturation and possible reset and termination of the service processor.
The Common Vulnerabilities and Exposures issue number is CVE-2018-5390.
* A security problem was fixed in the service processor TCP stack that would
allow a Denial of Service (DOS) attack by allowing very large IP fragments to
trigger time and calculation expensive calls in packet reassembly. This could
lead to a CPU saturation and possible reset and termination of the service
processor. The Common Vulnerabilities and Exposures issue number is
CVE-2018-5391. With the fix, changes were made to lower the IP fragment
thresholds to invalidate the attack.
System firmware changes that affect certain systems
* DEFERRED: On IBM Power System E980 (9080-M9S) systems with three or four
nodes, a problem with more frequent than necessary L2 cache memory flushes was
fixed to improve system performance for some workloads. This problem could be
noticed for workloads that are cache bound (where speed of cache access is an
important factor in determining the speed at which the program gets executed).
For example, if the most visited part of a program is a small section of code
inside a loop small enough to be contained within the cache, then the program
may be cache bound.
* DEFERRED: On IBM Power System E980 (9080-M9S) systems with one or two
nodes, a problem with slower than expected L2 cache memory update response was
fixed to improve system performance for some workloads. The slowdown was
triggered by many concurrent processor threads trying to update the L2 cache
memory atomically with a Power LARX/STCX instruction sequence. Without the
fix, the rate that the system could do these atomic updates was slower than the
normal L2 cache response which could cause the system overall performance to
decrease. This problem could be noticed for workloads that are cache bound
(where speed of cache access is an important factor in determining the speed at
which the program gets executed). For example, if the most visited part of a
program is a small section of code inside a loop small enough to be contained
within the cache, then the program may be cache bound.
* On systems with an IBM i partition with greater than 9999 GB installed, a
problem was fixed for On/Off COD memory-related amounts not being displayed
correctly. This only happens when retrieving the On/Off COD numbers via a
particular IBM i MATMATR MI command option value.
VH920_089_075 / FW920.24
02/12/19 Impact: Performance Severity: SPE
New features and functions
* Support for up to 8 production SAP HANA LPARs and 64 TB of memory.
System firmware changes that affect all systems
* A problem was fixed for a concurrent firmware update that could hang during
the firmware activation, resulting in the system entering into Power safe
mode. The system can be recovered by doing a re-IPL of the system with a power
down and power up. A concurrent remove of this fix to the firmware level
FW920.22 will fail with the hang, so moving back to this level should only be
done with a disruptive firmware update.
* A problem was fixed where installing a partition with a NIM server may fail
when using an SR-IOV adapter with a Port VLAN ID (PVID) configured. This error
is a regression problem introduced in the 11.2.211.32 adapter firmware. This
fix reverts the adapter firmware back to 11.2.211.29 for the following Feature
Codes: EN15, EN16, EN17, EN18, EN0H, EN0J, EN0K, and EN0L. Because the
adapter firmware is reverted to the prior version, all changes included in the
11.2.211.32 are reverted as well. Circumvention options for this problem can
be found at the following link:
http://www.ibm.com/support/docview.wss?uid=ibm10794153.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
System firmware changes that affect certain systems
* On IBM Power System E980 (9080-M9S) systems with three or four nodes, a
problem with slower than expected L2 cache memory update response was fixed to
improve system performance for some workloads. The slowdown was triggered by
many concurrent processor threads trying to update the L2 cache memory
atomically with a Power LARX/STCX instruction sequence. Without the fix, the
rate at which the system could perform these atomic updates was slower than the
normal L2 cache response, which could cause overall system performance to
decrease. This problem could be noticed for workloads that are cache bound
(where speed of cache access is an important factor in determining the speed at
which the program gets executed). For example, if the most visited part of a
program is a small section of code inside a loop small enough to be contained
within the cache, then the program may be cache bound.
VH920_080_075 / FW920.22
12/13/18 Impact: Availability Severity: SPE
System firmware changes that affect all systems
* A problem was fixed for an intermittent IPL failure with SRCs B150BA40 and
B181BA24 logged. The system can be recovered by IPLing again. The failure is
caused by a memory buffer misalignment, so it represents a transient fault that
should occur only rarely.
* A problem was fixed for intermittent PCIe correctable errors which would
eventually threshold and cause SRC B7006A72 to be logged. PCIe performance
degradation or temporary loss of one or more PCIe IO slots could also occur,
resulting in SRCs B7006970 or B7006971.
System firmware changes that affect certain systems
* On a system with two or more nodes, a problem was fixed for a SMP cable
pull or failure that could cause a system checkstop with SRC BC14E540 logged.
This problem is limited to the SMP cables that are in the TOD topology
propagation path.
VH920_078_075 / FW920.21
11/28/18 Impact: Availability Severity: SPE
System firmware changes that affect all systems
* DEFERRED: A problem was fixed to further increase the Vio voltage level
to the processors to protect against lower voltage level and noise margins that
could result in degraded performance or loss of processor cores in rare
instances. The Vio voltage level was previously adjusted in FW920.20 but an
additional increase of the voltage was needed to improve processor reliability.
VH920_075_075 / FW920.20
11/16/18 Impact: Data Severity: HIPER
New Features and Functions
* Support was added for three and four node configurations of the IBM Power
System E980 (9080-M9S).
* Support was added for concurrent maintenance of SMP cables.
* Support was enabled for eRepair spare lane deployment for fabric and memory
buses.
* Support was added for run-time recovery of clock cards that have failed
because of a loss of lock condition. The Phase-Locked Loop (PLL) 842S02 loses
lock occasionally, but this should be a fully recoverable condition. Without
this support, the disabled clock card would have to be replaced with the system
powered down.
* Support was added for processor clock failover.
* Support was added for Multi-Function clock card failover.
System firmware changes that affect all systems
* HIPER/Non-Pervasive: DISRUPTIVE: Fixes included to address potential
scenarios that could result in undetected data corruption, system hangs, or
system terminations.
* DISRUPTIVE: A problem was fixed for PCIe and SAS adapters in slots
attached to a PLX (PCIe switch) failing to initialize and not being found by
the Operating System. The problem should not occur on the first IPL after an
AC power cycle, but subsequent IPLs may experience the problem.
* DEFERRED: A problem was fixed for a possible system slow-down with many
BC10E504 SRCs logged for Power Bus hang recovery. This could occur during
periods of very high activity for memory write operations which inundate a
specific memory controller. This slow-down is localized to a specific region
of real memory that is mapped to memory DIMMs associated with the congested
memory controller.
* DEFERRED: A problem was fixed for a logical partition (LPAR) running
slower than expected because of an overloaded ABUS socket for its SMP
connection. This fix requires a re-IPL of the system to balance the
distribution of the LPARs to the ABUS sockets.
* DEFERRED: A problem was fixed for a PCIe clock failure in the PCIe3 I/O
expansion drawer (feature #EMX0), causing loss of PCIe slots. The system must
be re-IPLed for the fix to activate.
* DEFERRED: A problem was fixed to increase the Vio voltage level to the
processors to protect against lower voltage level and noise margins that could
result in degraded performance or loss of processor cores in rare instances.
* DEFERRED: A problem was fixed for a possible system hang in the early boot
stage. This could occur during periods of very high activity for memory read
operations which deplete all read buffers, hanging an internal process that
requires a read buffer. With the fix, a congested memory controller can stall
the read pipeline to make a read buffer available for the internal processes.
* DEFERRED: A problem was fixed for concurrent maintenance operations for
PCIe expansion drawer cable cards and PCI adapters that could cause loss of
system hardware information in the hypervisor with these side effects: 1)
partition secure boots could fail with SRC BA540100 logged; 2) Live Partition
Mobility (LPM) migrations could be blocked; 3) SR-IOV adapters could be blocked
from going into shared mode; 4) Power Management services could be lost; and 5)
warm re-IPLs of the system can fail. The system can be recovered by powering
off and then IPLing again.
* A problem was fixed for memory DIMM temperatures reported with an incorrect
FRU callout. The error happens for only certain memory configurations.
* A problem was fixed in the Dynamic Platform Optimizer (DPO) for
calculating the amount of memory required for partitions with Virtual Page
Tables that are greater than 1 LMB in size. Without the fix, affinity scores
calculated for the partition are not correct.
* A problem was fixed for an unhelpful error message of "HSCL1473 Cannot
execute atomic operation. Atomic operations are not enabled." that is displayed
on the HMC if there are no licensed processors available for the boot of a
partition.
* A problem was fixed for power supply non-power faults being logged
indicating a need for power supply replacement when the service processor is at
a high activity level, but the power supply is working correctly. This service
processor performance problem may also prevent legitimate power supply faults
from being logged, with other communication and non-power faults being logged
instead.
* A problem was fixed for FSP cable failures with B150492F logged that have
the wrong FSP cable called out.
* A problem was fixed for a false extra Predictive Error of B1xxE550 that can
occur for a node with a recoverable event if Hostboot is terminating on a
different node at the same time. The Predictive Error log will not have a
call-out and can be ignored.
* A problem was fixed for a memory channel failure due to a RCD parity error
calling out the affected DIMMs correctly, but also falsely calling out either
the memory controller or a processor, or both.
* A problem was fixed for adapters in slots attached to a PLX (PCIe switch)
failing with SRCs B7006970 and BA188002 when a second and subsequent errors on
the PLX failed to initiate PLX recovery. For this infrequent problem to occur,
it requires a second error on the PLX after recovery from the first error.
* A problem was fixed for a processor core checkstop that would deconfigure
two cores: the failed core and a working core. The bad core is determined by
matching to the error log, and the falsely deconfigured core will have no error
log. To recover the loss of the good core, the guard can be cleared on the
core that does not have an associated error log.
* A problem was fixed for the system going into Safe Mode after a run-time
deconfiguration of a processor core, resulting in slower performance. For
this problem to occur, there must be a second fault in the Power Management
complex after the processor core has been deconfigured.
* A problem was fixed for service processor resets confusing the wakeup state
of processor cores, resulting in degraded cores that cannot be managed for
power usage. This will result in the system consuming more power, but also
running slower due to the inability to make use of WOF optimizations around the
cores. The degraded processor cores can be recovered by a re-IPL of the system.
* A problem was fixed for incorrect Resource Identification (RID) numbers for
the Analog Power Subsystem Sweep (APSS) chip, used by OCC to tune the processor
frequency. Any error call-out on the APSS may call out the wrong APSS.
* A problem was fixed for the On-Chip Controller (OCC) MAX memory bandwidth
sensor sometimes having values that are too high.
* A problem was fixed for DDR4 memory training in the IPL to improve the DDR4
write margin. Lesser write margins can potentially cause memory errors.
* A problem was fixed for a system failure with SRC B700F103 that can occur
if a shared-mode SR-IOV adapter is moved from a high-performance slot to a
lower performance slot. This problem can be avoided by disabling shared mode
on the SR-IOV adapter; moving the adapter; and then re-enabling shared mode.
* A problem was fixed for the system going to Safe Mode if all the cores of a
processor are lost at run-time.
* A problem was fixed for a Core Management Engine (CME) fault causing a
system failure with SRC B700F105 if processor cores had been guarded during the
IPL.
* A problem was fixed for a Core Management Engine (CME) fault that could
result in a system checkstop.
* A problem was fixed for a missing error log for the case of the TPM card
not being detected when it is required for a trusted boot.
* A problem was fixed for a flood of BC130311 SRCs that could occur when
changing Energy Scale Power settings, if the Power Management is in a reset
loop because of errors.
* A problem was fixed for coherent accelerator processor proxy (CAPP) unit
errors being called out as CEC hardware Subsystem instead of PROCESSOR_UNIT.
* A problem was fixed for a repeating failover for the service processors if
a TPM card had failed and been replaced. The TPM update was not synchronized
to the backup service processor, creating a situation where a service processor
failover could fail and retry because of mismatched TPM capabilities.
* A problem was fixed for an incorrect processor callout on a memory channel
error that causes a CHIFIR[61] checkstop on the processor.
* A problem was fixed for a Logical LAN (l-lan) device failing to boot when
there is a UDP packet checksum error. With the fix, there is a new option when
configuring a l-lan port in SMS to enable or disable the UDP checksum
validation. If the adapter is already providing the checksum validation, then
the l-lan port needs to have its validation disabled.
* A problem was fixed for a power fault causing a power guard of a node, but
leaving the node configured. This can cause redundant logging of errors as
operations are continued to be attempted on failed hardware.
* A problem was fixed for missing error logs for hardware faults if the
hypervisor terminates before the faults can be processed. With the fix, the
hardware attentions for the bad FRUs will get handled, prior to processing the
termination of the hypervisor.
* A problem was fixed for the diagnostics for a system boot checkstop failing
to isolate to the bad FRU if it occurred on a non-master processor or a memory
chip connected to a non-master processor. With the fix, the fault attentions
from a non-master processor are properly isolated to the failing chip so it can
be guarded or recovered as needed to allow the IPL to continue.
* A problem was fixed for Hostboot error log IDs (EID) getting reused from
one IPL to the next, resulting in error logs getting suppressed (missing) for
new problems on the subsequent IPLs if they have a re-used EID that was already
present in the service processor error logs.
* A problem was fixed so the green USB active LED is lit for the service
processor that is in the primary role. Without the fix, the green LED is
always lit for the service processor in the C3 position which is FSP-A,
regardless of the role of the service processor.
* A problem was fixed for Live Partition Mobility (LPM) partition migration
to preserve the Secure Boot setting on the target partition. Secure Boot is
supported in FW920 and later partitions. Without the fix, if the Secure Boot
setting is non-zero for the partition, it will be zero after the migration.
* A problem was fixed for an SR-IOV adapter using the wrong Port VLAN ID
(PVID) for a logical port (VF) when its non-zero PVID could be changed
following a network install using the logical port.
This fix updates adapter firmware to 11.2.211.32 for the following Feature
Codes: EN15, EN17, EN0H, EN0J, EN0M, EN0N, EN0K, and EN0L.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* A problem was fixed for a SMS ping failure for a SR-IOV adapter VF with a
non-zero Port VLAN ID (PVID). This failure may occur after the partition with
the adapter has been booted to AIX, and then rebooted back to SMS. Without the
fix, residue information from the AIX boot is retained for the VF that should
have been cleared.
* A problem was fixed for a SR-IOV adapter vNIC configuration error that did
not provide a proper SRC to help resolve the issue of the boot device not
pinging in SMS due to maximum transmission unit (MTU) size mismatch in the
configuration. The use of a vNIC backing device does not allow configuring VFs
for jumbo frames when the Partition Firmware configuration for the adapter (as
specified on the HMC) does not support jumbo frames. When this happens, the
vNIC adapter will fail to ping in SMS and thus cannot be used as a boot
device. With the fix, the vNIC driver configuration code is now checking the
vNIC login (open) return code so it can issue an SRC when the open fails for a
MTU issue (such as jumbo frame mismatch) or for some other reason. A jumbo
frame is an Ethernet frame with a payload greater than the standard MTU of
1,500 bytes and can be as large as 9,000 bytes.
* A problem was fixed for three bad lanes causing a memory channel fail on
the DMI interface. With the fix, the errors on the third lane on the DMI
interface will be recovered and it will continue to be used as long as it
functions.
* A problem was fixed for Power Management errors occurring at higher ambient
temperatures that should instead be handled by dynamic adjustments to the
processor voltages and frequencies. The Power Management errors can cause the
system to drop into Safe Mode, which will provide lower performance until the
system is re-IPLed.
* A problem was fixed for preventing loss of function on an SR-IOV adapter
with an 8MB adapter firmware image if it is placed into SR-IOV shared mode.
The 8MB image is not supported at the FW920.20 firmware level. With the fix,
the adapter with the 8MB image is rejected with an error without an attempt to
load the older 4MB image on the adapter which could damage it. This problem
affects the following SR-IOV adapters: #EC2R/#EC2S with CCIN 58FA; and
#EC2T/#EC2U with CCIN 58FB.
* A problem was fixed for incorrect recovery from a service processor mailbox
error that was causing the system IPL to fail with the loss of all the PCIe
links. If this occurs, the system will normally re-IPL successfully.
* A problem was fixed for SR-IOV adapter failures when running in shared mode
in a Huge Dynamic DMA Window (HDDW) slot. I/O slots are enabled with HDDW by
using the I/O Adapter Enlarged Capacity setting in the Advanced System
Management Interface (ASMI). This problem can be circumvented by moving the
SR-IOV adapter to a non-HDDW slot, or alternatively, disabling HDDW on the
system.
* A problem was fixed for system termination for a re-IPL with power on with
SRC B181E540 logged. The system can be recovered by powering off and then
IPLing. This problem occurs infrequently and can be avoided by powering off
the system between IPLs.
* A problem was fixed for bad DDR4 DIMMs that fail initialization but are not
guarded or called out, causing a system node to fail during an IPL.
System firmware changes that affect certain systems
* For a shared memory partition, a problem was fixed for Live Partition
Mobility (LPM) migration hang after a Mover Service Partition (MSP) failover in
the early part of the migration. To recover from the hang, a migration stop
command must be given on the HMC. Then the migration can be retried.
* For a shared memory partition, a problem was fixed for Live Partition
Mobility (LPM) migration failure to an indeterminate state. This can occur if
the Mover Service Partition (MSP) has a failover that occurs when the migrating
partition is in the state of "Suspended." To recover from this problem, the
partition must be shut down and restarted.
* A problem was fixed for an AIX partition not showing the location codes for
the USB controller and ports T1 and T2. When displaying location codes with
OS commands or at the SMS menus, the location of the USB controller (C13) is
missing and ports T1 and T2 are swapped.
* On a system with a Cloud Management Console and a HMC Cloud Connector, a
problem was fixed for memory leaks in the Redfish server causing Out of Memory
(OOM) resets of the service processor.
* On a system with a partition with dedicated processors that are set to
allow processor sharing with "Allow when partition is active" or "Allow
always", a problem was fixed for a potential system hang if the partition is
booting or shutting down while Dynamic Platform Optimizer (DPO) is running. As
a work-around to the problem, the processor sharing can be turned off before
running DPO, or avoid starting or shutting down dedicated partitions with
processor sharing while DPO is active.
* On a system with an AMS partition, a problem was fixed for a Live Partition
Mobility (LPM) migration failure when migrating from P9 to a pre-FW860 P8 or P7
system. This failure can occur if the P9 partition is in dedicated memory
mode, and the Physical Page Table (PPT) ratio is explicitly set on the HMC
(rather than keeping the default value) and the partition is then transitioned
to Active Memory Sharing (AMS) mode prior to the migration to the older
system. This problem can be avoided by using dedicated memory in the partition
being migrated back to the older system.
VH920_057_057 / FW920.10
09/21/18 Impact: New Severity: New
New Features and Functions
* GA Level
----------------------------------------------------------------------------------
4.0 How to Determine Currently Installed Firmware Level
You can view the server's current firmware level on the Advanced System
Management Interface (ASMI) Welcome pane. It appears in the top right corner.
Example: VH920_123.
----------------------------------------------------------------------------------
5.0 Downloading the Firmware Package
Follow the instructions on Fix Central. You must read and agree to the
license agreement to obtain the firmware packages.
Note: If your HMC is not connected to the internet, you will need to download
the new firmware level to a USB flash memory device or an FTP server.
----------------------------------------------------------------------------------
6.0 Installing the Firmware
The method used to install new firmware will depend on the release level of
firmware which is currently installed on your server. The release level can be
determined by the prefix of the new firmware's filename. Example: VHxxx_yyy_zzz
Where xxx = release level
* If the release level will stay the same (Example: Level VH920_040_040 is
currently installed and you are attempting to install level VH920_041_040) this
is considered an update.
* If the release level will change (Example: Level VH900_040_040 is currently
installed and you are attempting to install level VH920_050_050) this is
considered an upgrade. Instructions for installing firmware updates and
upgrades can be found at
https://www.ibm.com/support/knowledgecenter/9080-M9S/p9eh6/p9eh6_updates_sys.htm
IBM i Systems:
For information concerning IBM i Systems, go to the following URL to access
Fix Central:
http://www-933.ibm.com/support/fixcentral/
Choose "Select product", under Product Group specify "System i", under
Product specify "IBM i", then Continue and specify the desired firmware PTF
accordingly.
----------------------------------------------------------------------------------
7.0 Firmware History
The complete Firmware Fix History (including HIPER descriptions) for this
Release level can be reviewed at the following url:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VH-Firmware-Hist.html