Power9 System Firmware

Applies to:   9080-M9S

This document provides information about the installation of Licensed Machine or Licensed Internal Code, which is sometimes referred to generically as microcode or firmware.


Contents


1.0 Systems Affected

This package provides firmware for Power Systems E980 (9080-M9S) servers only.

The firmware level in this package is: VH920_089_075 / FW920.24

1.1 Minimum HMC Code Level

This section describes the "Minimum HMC Code Level" required by the System Firmware to complete the firmware installation process. When installing the System Firmware, the HMC level must be equal to or higher than the "Minimum HMC Code Level" before starting the system firmware update. If the HMC managing the server targeted for the System Firmware update is running a code level lower than the "Minimum HMC Code Level", the firmware update will not proceed.

The Minimum HMC Code levels for this firmware for HMC x86, ppc64 or ppc64le are listed below.

x86 - This term refers to the legacy HMC that runs on x86/Intel/AMD hardware, for both the 7042 Machine Type appliances and the Virtual HMC that can run on the Intel hypervisors (KVM, VMware, Xen).

ppc64 or ppc64le - This term describes the Linux code that is compiled to run on Power-based servers or LPARs (Logical Partitions).

For information concerning HMC releases and the latest PTFs,  go to the following URL to access Fix Central:
http://www-933.ibm.com/support/fixcentral/

For specific fix level information on key components of IBM Power Systems running the AIX, IBM i and Linux operating systems, we suggest using the Fix Level Recommendation Tool (FLRT):
http://www14.software.ibm.com/webapp/set2/flrt/home


NOTES:

                - You must be logged in as hscroot in order for the firmware installation to complete correctly.
                - Systems Director Management Console (SDMC) does not support this System Firmware level.

2.0 Important Information

Downgrading firmware from any given release level to an earlier release level is not recommended.

If you feel that it is necessary to downgrade the firmware on your system to an earlier release level, please contact your next level of support.

2.1 IPv6 Support and Limitations

IPv6 (Internet Protocol version 6) is supported in the System Management Services (SMS) in this level of system firmware. There are several limitations that should be considered.

When configuring a network interface card (NIC) for remote IPL, only the most recently configured protocol (IPv4 or IPv6) is retained. For example, if the network interface card was previously configured with IPv4 information and is now being configured with IPv6 information, the IPv4 configuration information is discarded.

A single network interface card may only be chosen once for the boot device list. In other words, the interface cannot be configured for the IPv6 protocol and for the IPv4 protocol at the same time.

2.2 Concurrent Firmware Updates

Concurrent system firmware update is supported on HMC Managed Systems only.

2.3 Memory Considerations for Firmware Upgrades

Firmware Release Level upgrades and Service Pack updates may consume additional system memory.
Server firmware requires memory to support the logical partitions on the server. The amount of memory required by the server firmware varies according to several factors.
Generally, you can estimate the amount of memory required by server firmware to be approximately 8% of the system installed memory. The actual amount required will generally be less than 8%. However, some server models require an absolute minimum amount of memory for server firmware, regardless of the factors mentioned above.
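The estimate above can be sketched as a small calculation (the 8% upper-bound figure comes from this section; the helper name is hypothetical and this is not an IBM tool):

```python
def estimate_firmware_memory_gb(installed_memory_gb: float) -> float:
    """Return the approximate upper bound of memory (in GB) that server
    firmware may consume: roughly 8% of the system installed memory.
    The actual amount is generally less than this estimate."""
    return installed_memory_gb * 0.08

# Example: a system with 1024 GB installed may reserve up to ~82 GB
# of that memory for server firmware.
print(estimate_firmware_memory_gb(1024))
```

Remember that this is only a planning estimate; some models enforce an absolute minimum regardless of installed memory.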

Additional information can be found at:
https://www.ibm.com/support/knowledgecenter/9080-M9S/p9hat/p9hat_lparmemory.htm

2.4 NIM install issue using SR-IOV shared mode at FW 920.20, 920.21, and 920.22

A defect in the adapter firmware for the following Feature Codes: EN15,  EN16,  EN17, EN18, EN0H, EN0J, EN0K, and EN0L was included in IBM Power Server Firmware levels 920.20, 920.21, and 920.22. This defect causes attempts to perform NIM installs using a Virtual Function (VF) to hang or fail.  Circumvention options for this problem can be found at the following link:
http://www.ibm.com/support/docview.wss?uid=ibm10794153


3.0 Firmware Information

Use the following examples as a reference to determine whether your installation will be concurrent or disruptive.

For systems that are not managed by an HMC, the installation of system firmware is always disruptive.

Note: The concurrent levels of system firmware may, on occasion, contain fixes that are known as Deferred and/or Partition-Deferred. Deferred fixes can be installed concurrently, but will not be activated until the next IPL. Partition-Deferred fixes can be installed concurrently, but will not be activated until a partition reactivate is performed. Deferred and/or Partition-Deferred fixes, if any, will be identified in the "Firmware Update Descriptions" table of this document. For these types of fixes (Deferred and/or Partition-Deferred) within a service pack, only the fixes in the service pack which cannot be concurrently activated are deferred.

Note: The file names and service pack levels used in the following examples are for clarification only, and are not necessarily levels that have been, or will be released.

System firmware file naming convention:

01VHxxx_yyy_zzz

NOTE: Values of service pack and last disruptive service pack level (yyy and zzz) are only unique within a release level (xxx). For example, 01VH900_040_040 and 01VH910_040_045 are different service packs.

An installation is disruptive if:

The release levels (xxx) are different.

            Example: Currently installed release is 01VH900_040_040, new release is 01VH910_050_050.

The service pack level (yyy) and the last disruptive service pack level (zzz) are equal.

            Example: VH910_040_040 is disruptive, no matter what level of VH910 is currently installed on the system.

The service pack level (yyy) currently installed on the system is lower than the last disruptive service pack level (zzz) of the service pack to be installed.

            Example: Currently installed service pack is VH910_040_040 and new service pack is VH910_050_045.

An installation is concurrent if:

The release level (xxx) is the same, and
The service pack level (yyy) currently installed on the system is the same or higher than the last disruptive service pack level (zzz) of the service pack to be installed.

Example: Currently installed service pack is VH910_040_040, new service pack is VH910_041_040.
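The naming rules above can be expressed as a short check. This is a sketch, not an IBM tool; the function names are hypothetical, and it assumes the 01VHxxx_yyy_zzz convention described in this section:

```python
import re

def parse_level(name: str):
    """Parse a level such as '01VH910_040_040' or 'VH910_041_040' into
    (release, service_pack, last_disruptive) integers. The optional
    '01' prefix on the filename is ignored."""
    m = re.match(r"(?:01)?VH(\d+)_(\d+)_(\d+)$", name)
    if not m:
        raise ValueError(f"unrecognized level: {name}")
    return tuple(int(g) for g in m.groups())

def is_concurrent(installed: str, new: str) -> bool:
    """Concurrent if the release level (xxx) is unchanged AND the
    installed service pack (yyy) is the same or higher than the new
    level's last disruptive service pack (zzz); otherwise disruptive."""
    i_rel, i_sp, _ = parse_level(installed)
    n_rel, _, n_zzz = parse_level(new)
    return i_rel == n_rel and i_sp >= n_zzz

print(is_concurrent("VH910_040_040", "VH910_041_040"))      # True  (concurrent)
print(is_concurrent("01VH900_040_040", "01VH910_050_050"))  # False (disruptive)
print(is_concurrent("VH910_040_040", "VH910_050_045"))      # False (disruptive)
```

The three printed cases correspond to the examples given above.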

3.1 Firmware Information and Description

 
Filename:  01VH920_089_075.rpm
Size:      131548999
Checksum:  12280
md5sum:    c07828f6a5fc7dea1596e7ba20aa31d8

Note: The Checksum can be found by running the AIX sum command against the rpm file (only the first 5 digits are listed).
For example: sum 01VH920_089_075.rpm

VH920
For Impact, Severity and other firmware definitions, refer to the 'Glossary of firmware terms' at the following URL:
http://www14.software.ibm.com/webapp/set2/sas/f/power5cm/home.html#termdefs

The complete Firmware Fix History for this Release Level can be reviewed at the following url:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VH-Firmware-Hist.html
VH920_089_075 / FW920.24

02/12/19
Impact:  Performance      Severity:  SPE

New Features and Functions
  • Support for up to 8 production SAP HANA LPARs and 64 TB of memory.
System firmware changes that affect all systems
  • A problem was fixed for a concurrent firmware update that could hang during the firmware activation, resulting in the system entering Power safe mode.  The system can be recovered by doing a re-IPL of the system with a power down and power up.  A concurrent removal of this fix back to firmware level FW920.22 will fail with the hang, so moving back to that level should only be done with a disruptive firmware update.
  • A problem was fixed where installing a partition with a NIM server may fail when using an SR-IOV adapter with a Port VLAN ID (PVID) configured. This error is a regression problem introduced in the 11.2.211.32 adapter firmware.  This fix reverts the adapter firmware to 11.2.211.29 for the following Feature Codes:  EN15, EN16, EN17, EN18, EN0H, EN0J, EN0K, and EN0L.  Because the adapter firmware is reverted to the prior version, all changes included in 11.2.211.32 are reverted as well.  Circumvention options for this problem can be found at the following link:  http://www.ibm.com/support/docview.wss?uid=ibm10794153.
    The SR-IOV adapter firmware level update for the shared-mode adapters happens under user control to prevent unexpected temporary outages on the adapters.  A system reboot will update all SR-IOV shared-mode adapters with the new firmware level.  In addition, when an adapter is first set to SR-IOV shared mode, the adapter firmware is updated to the latest level available with the system firmware (and it is also updated automatically during maintenance operations, such as when the adapter is stopped or replaced).  And lastly, selective manual updates of the SR-IOV adapters can be performed using the Hardware Management Console (HMC).  To selectively update the adapter firmware, follow the steps given at the IBM Knowledge Center for using HMC to make the updates:   https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
    Note: Adapters that are capable of running in SR-IOV mode, but are currently running in dedicated mode and assigned to a partition, can be updated concurrently either by the OS that owns the adapter or the managing HMC (if OS is AIX or VIOS and RMC is running).

System firmware changes that affect certain systems

  • On IBM Power System E980 (9080-M9S) systems with three or four nodes, a problem with slower than expected L2 cache memory update response was fixed to improve system performance for some workloads.  The slowdown was triggered by many concurrent processor threads trying to update the L2 cache memory atomically with a Power LARX/STCX instruction sequence.  Without the fix, the rate at which the system could do these atomic updates was slower than the normal L2 cache response, which could cause overall system performance to decrease.  This problem could be noticed for workloads that are cache bound (where speed of cache access is an important factor in determining the speed at which the program gets executed). For example, if the most visited part of a program is a small section of code inside a loop small enough to be contained within the cache, then the program may be cache bound.
VH920_080_075 / FW920.22

12/13/18
Impact:  Availability      Severity:  SPE

System firmware changes that affect all systems

  • A problem was fixed for an intermittent IPL failure with SRCs  B150BA40 and B181BA24 logged.  The system can be recovered by IPLing again.  The failure is caused by a memory buffer misalignment, so it represents a transient fault that should occur only rarely.
  • A problem was fixed for intermittent PCIe correctable errors which would eventually threshold and cause SRC B7006A72 to be logged. PCIe performance degradation or temporary loss of one or more PCIe IO slots could also occur resulting in SRCs B7006970 or B7006971.

System firmware changes that affect certain systems

  • On a system with two or more nodes, a problem was fixed for a SMP cable pull or failure that could cause a system checkstop with SRC BC14E540 logged.  This problem is limited to the SMP cables that are in the TOD topology propagation path.
VH920_078_075 / FW920.21

11/28/18
Impact:  Availability      Severity:  SPE

System firmware changes that affect all systems

  • DEFERRED:   A problem was fixed to further increase the Vio voltage level to the processors to protect against lower voltage level and noise margins that could result in degraded performance or loss of processor cores in rare instances.  The Vio voltage level was previously adjusted in FW920.20 but an additional increase of the voltage was needed to improve processor reliability.
VH920_075_075 / FW920.20

11/16/18
Impact:  Data                  Severity:  HIPER

New features and functions

  • Support was added for three and four node configurations of the  IBM Power System E980 (9080-M9S).
  • Support was added for concurrent maintenance of SMP cables.
  • Support was enabled for eRepair spare lane deployment for fabric and memory buses.
  • Support was added for run-time recovery of clock cards that have failed because of a loss of lock condition.  The Phased Lock Loop (PLL) 842S02 loses lock occasionally, but this should be a fully recoverable condition.  Without this support, the disabled clock card would have to be replaced with the system powered down.
  • Support was added for processor clock failover.
  • Support was added for Multi-Function clock card failover.

System firmware changes that affect all systems

  • HIPER/Non-Pervasive: DISRUPTIVE:  Fixes included to address potential scenarios that could result in undetected data corruption, system hangs, or system terminations.
  • DISRUPTIVE:  A problem was fixed for PCIe and SAS adapters in slots attached to a PLX (PCIe switch) failing to initialize and not being found by the Operating System.  The problem should not occur on the first IPL after an AC power cycle, but subsequent IPLs may experience the problem.
  • DEFERRED:  A problem was fixed for a possible system slow-down with many BC10E504 SRCs logged for Power Bus hang recovery.  This  could occur during periods of very high activity for memory write operations which inundate a specific memory controller.  This slow-down is localized to a specific region of real memory that is mapped to memory DIMMs associated with the congested memory controller.
  • DEFERRED:  A problem was fixed for a logical partition (LPAR) running slower than expected because of an overloaded ABUS socket for its SMP connection.  This fix requires a re-IPL of the system to balance the distribution of the LPARs to the ABUS sockets.
  • DEFERRED:  A problem was fixed for a PCIe clock failure in the  PCIe3 I/O expansion drawer (feature #EMX0), causing loss of PCIe slots.   The system must be re-IPLed for the fix to activate.
  • DEFERRED:   A problem was fixed to increase the Vio voltage level to the processors to protect against lower voltage level and noise margins that could result in degraded performance or loss of processor cores in rare instances.
  • DEFERRED:  A problem was fixed for a possible system hang in the early boot stage.  This could occur during periods of very high activity for memory read operations which deplete all read buffers, hanging an internal process that requires a read buffer.  With the fix, a congested memory controller can stall the read pipeline to make a read buffer available for the internal processes.
  • DEFERRED:  A problem was fixed for concurrent maintenance operations for PCIe expansion drawer cable cards and PCI adapters that could cause loss of system hardware information in the hypervisor with these side effects:  1) partition secure boots could fail with SRC BA540100 logged; 2) Live Partition Mobility (LPM) migrations could be blocked; 3) SR-IOV adapters could be blocked from going into shared mode; 4) Power Management services could be lost; and 5) warm re-IPLs of the system can fail.  The system can be recovered by powering off and then IPLing again.
  • A problem was fixed for memory DIMM temperatures reported with an incorrect FRU callout.  The error happens for only certain memory configurations.
  • A problem was fixed in the Dynamic Platform Optimizer (DPO) for calculating the amount of memory required for partitions with Virtual Page Tables that are greater than 1 LMB in size.  This caused incorrect affinity scores for the partition.
  • A problem was fixed for an unhelpful error message of "HSCL1473 Cannot execute atomic operation. Atomic operations are not enabled." that is displayed on the HMC if there are no licensed processors available for the boot of a partition.
  • A problem was fixed for power supply non-power faults being logged indicating a need for power supply replacement when the service processor is at a high activity level, but the power supply is working correctly.  This  service processor performance problem may also prevent legitimate power supply faults from being logged, with other communication and non-power faults being logged instead.
  • A problem was fixed for FSP cable failures with B150492F logged that have the wrong FSP cable called out.
  • A problem was fixed for a false extra Predictive Error of B1xxE550 that can occur for a node with a recoverable event  if Hostboot is terminating on a different node at the same time.  The Predictive Error log will not have a call-out and can be ignored.
  • A problem was fixed for a memory channel failure due to a RCD parity error calling out the affected DIMMs correctly, but also falsely calling out either the memory controller or a processor, or both. 
  • A problem was fixed for adapters in slots attached to a PLX (PCIe switch) failing with SRCs B7006970 and BA188002  when a second and subsequent errors on the PLX failed to initiate PLX recovery.  For this infrequent problem to occur, it requires a second error on the PLX after recovery from the first error.
  • A problem was fixed for a processor core checkstop that would deconfigure two cores:  the failed core and a working core.  The bad core is determined by matching to the error log; the falsely deconfigured core will have no error log.  To recover the loss of the good core, clear the guard on the core that does not have an associated error log.
  • A problem was fixed for the system going into Safe Mode after a run-time deconfiguration of a processor core,  resulting in slower performance.  For this problem to occur, there must be a second fault  in the Power Management complex after the processor core has been deconfigured.
  • A problem was  fixed for service processor resets confusing the wakeup state of  processor cores, resulting in degraded cores that cannot be managed for power usage.  This will result in the system consuming more power, but also running slower due to the inability to make use of WOF optimizations around the cores.  The degraded  processor cores can be recovered by a re-IPL of the system.
  • A problem was  fixed for incorrect Resource Identification (RID) numbers for the  Analog Power Subsystem Sweep (APSS) chip, used by OCC to tune the processor frequency.  Any error call-out on the APSS may call out the wrong APSS.
  • A problem was fixed for the On-Chip Controller (OCC) MAX memory bandwidth sensor sometimes having values that are too high.
  • A problem was fixed for DDR4 memory training in the IPL to improve the DDR4 write margin.  Lesser write margins can potentially cause memory errors.
  • A problem was fixed for a system failure with SRC B700F103 that can occur if a shared-mode SR-IOV adapter is moved from a high-performance slot to a lower performance slot.   This problem can be avoided by disabling shared mode on the SR-IOV adapter; moving the adapter;  and then re-enabling shared mode.
  • A problem was fixed for the system going to Safe Mode if all the cores of a processor are lost at run-time.
  • A problem was fixed for a Core Management Engine (CME) fault causing a system failure with SRC B700F105 if processor cores had been guarded during the IPL.
  • A problem was fixed for a Core Management Engine (CME) fault that could result in a system checkstop.
  • A problem was fixed for a missing error log for the case of the TPM card not being detected when it is required for a trusted boot.
  • A problem was fixed for a flood of BC130311 SRCs that could occur when changing Energy Scale Power settings, if the Power Management is in a reset loop because of errors.
  • A problem was fixed for coherent accelerator processor proxy (CAPP) unit  errors being called out as CEC hardware Subsystem instead of  PROCESSOR_UNIT.
  • A problem was fixed for a repeating failover for the service processors if a TPM card had failed and been replaced.  The TPM update was not synchronized to the backup service processor, creating a situation where a  service processor failover could fail and retry because of mismatched TPM capabilities.
  • A problem was fixed for an incorrect processor callout on a memory channel error that causes a CHIFIR[61] checkstop on the processor.
  • A problem was fixed for a Logical LAN (l-lan) device failing to boot when there is a UDP packet checksum error.  With the fix, there is a new option when configuring a l-lan port in SMS to enable or disable the UDP checksum validation.  If the adapter is already providing the checksum validation, then the l-lan port needs to have its validation disabled.
  • A problem was fixed for a power fault causing a power guard of a node, but leaving the node configured.  This can cause redundant logging of errors as operations are continued to be attempted on failed hardware.
  • A problem was fixed for missing error logs for hardware faults if the hypervisor terminates before the faults can be processed.  With the fix, the hardware attentions for the bad FRUs will get handled, prior to processing the termination of the hypervisor.
  • A problem was fixed for the diagnostics for a system boot checkstop failing to isolate to the bad FRU if it  occurred on a non-master processor or a memory chip connected to a non-master processor.  With the fix, the  fault attentions from a non-master processor are properly isolated to the failing chip so it can be guarded or recovered as needed to allow the IPL to continue.
  • A problem was fixed for Hostboot error log IDs (EID) getting reused from one IPL to the next, resulting in error logs getting suppressed (missing)  for new problems on the subsequent IPLs if they have a re-used EID that was already present in the service processor error logs.
  • A problem was fixed so the green USB active LED is lit for the service processor that is in the primary role.  Without the fix, the green LED is always lit for the service processor in the C3 position which is FSP-A,  regardless of the role of the service processor.
  • A problem was fixed for Live Partition Mobility (LPM) partition migration to preserve the Secure Boot setting on the target partition.  Secure Boot is supported in FW920 and later partitions.  Without the fix, a non-zero Secure Boot setting for the partition is reset to zero after the migration.
  • A problem was fixed for an SR-IOV adapter using the wrong Port VLAN ID (PVID) for a logical port  (VF) when its non-zero PVID could be changed following a network install using the logical port.
    This fix updates adapter firmware to 11.2.211.32  for the following Feature Codes: EN15,  EN17, EN0H, EN0J, EN0M, EN0N, EN0K, and  EN0L.
    The SR-IOV adapter firmware level update for the shared-mode adapters happens under user control to prevent unexpected temporary outages on the adapters.  A system reboot will update all SR-IOV shared-mode adapters with the new firmware level.  In addition, when an adapter is first set to SR-IOV shared mode, the adapter firmware is updated to the latest level available with the system firmware (and it is also updated automatically during maintenance operations, such as when the adapter is stopped or replaced).  And lastly, selective manual updates of the SR-IOV adapters can be performed using the Hardware Management Console (HMC).  To selectively update the adapter firmware, follow the steps given at the IBM Knowledge Center for using HMC to make the updates:   https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
    Note: Adapters that are capable of running in SR-IOV mode, but are currently running in dedicated mode and assigned to a partition, can be updated concurrently either by the OS that owns the adapter or the managing HMC (if OS is AIX or VIOS and RMC is running).
  • A problem was fixed for a SMS ping failure for a SR-IOV adapter VF with a non-zero Port VLAN ID (PVID).  This failure may occur after the partition with the adapter has been booted to AIX, and then rebooted back to SMS.  Without the fix, residue information from the AIX boot is retained for the VF that should have been cleared.
  • A problem was fixed for a SR-IOV adapter vNIC configuration error that did not provide a proper SRC to help resolve the issue of the boot device not pinging in SMS due to maximum transmission unit (MTU) size mismatch in the configuration.  The use of a vNIC backing device does not allow configuring VFs for jumbo frames when the Partition Firmware configuration for the adapter (as specified on the HMC) does not support jumbo frames.  When this happens, the vNIC adapter will fail to ping in SMS and thus cannot be used as a boot device.  With the fix,  the vNIC driver configuration code is now checking the vNIC login (open) return code so it can issue an SRC when the open fails for a MTU  issue (such as  jumbo frame mismatch) or for some other reason.  A jumbo frame is an Ethernet frame with a payload greater than the standard MTU of 1,500 bytes and can be as large as 9,000 bytes.
  • A problem was fixed for three bad lanes causing a memory channel  fail on the DMI interface.  With the fix, the errors on the third lane on the DMI interface will  be recovered and it will continue to be used as long as it functions.
  • A problem was fixed for Power Management errors occurring at higher ambient temperatures that should instead be handled by dynamic adjustments to the processor voltages and frequencies.  The Power Management errors can cause the system to drop into Safe Mode, which will provide lower performance until the system is re-IPLed.
  • A problem was fixed for preventing loss of function on an SR-IOV adapter with an 8MB adapter firmware image if it is placed into SR-IOV shared mode.  The 8MB image is not supported at the FW920.20 firmware level.  With the fix, the adapter with the 8MB image is rejected with an error without an attempt to load the older 4MB image on the adapter which could damage it.  This problem affects the following  SR-IOV adapters:  #EC2R/#EC2S with CCIN 58FA; and  #EC2T/#EC2U with CCIN 58FB.
  • A problem was fixed for incorrect recovery from a service processor mailbox error that was causing the system IPL to fail with the loss of all the PCIe links.  If this occurs, the system will normally re-IPL successfully.
  • A problem was fixed for SR-IOV adapter failures when running in shared mode in a Huge Dynamic DMA Window (HDDW) slot.  I/O slots are enabled with HDDW by using the I/O Adapter Enlarged Capacity setting in the Advanced System Management Interface (ASMI).   This problem can be circumvented by moving the SR-IOV adapter to a non-HDDW slot, or alternatively, disabling HDDW on the system.
  • A problem was fixed for system termination for a re-IPL with power on with SRC B181E540 logged.  The system can be recovered by powering off and then IPLing.  This problem occurs infrequently and can be avoided by powering off the system between IPLs.
  • A problem was fixed for bad DDR4 DIMMs that fail initialization but are not guarded or called out, causing a system node to fail during an IPL.

System firmware changes that affect certain systems

  • For a shared memory partition,  a problem was fixed for Live Partition Mobility (LPM) migration hang after a Mover Service Partition (MSP) failover in the early part of the migration.  To recover from the hang, a migration stop command must be given on the HMC.  Then the migration can be retried.
  • For a shared memory partition,  a problem was fixed for Live Partition Mobility (LPM) migration failure to an indeterminate state.  This can occur if the Mover Service Partition (MSP) has a failover that occurs when the migrating partition is in the state of "Suspended."  To recover from this problem, the partition must be shutdown and restarted.
  • A problem was fixed for an AIX partition not showing the location codes for the USB controller and ports T1  and T2.  When displaying location codes with OS commands or at the SMS menus, the location of the USB controller (C13) is missing and ports T1 and T2 are swapped.
  • On a system with a Cloud Management Console and a HMC Cloud Connector, a problem was fixed for memory leaks in the Redfish server causing Out of Memory (OOM) resets of the service processor.
  • On a system with a partition with dedicated processors that are set to allow processor sharing with "Allow when partition is active" or "Allow always", a problem was fixed for a potential system hang if the partition is booting or shutting down while Dynamic Platform Optimizer (DPO) is running.  As a work-around to the problem, the processor sharing can be turned off before running DPO, or avoid starting or shutting down dedicated partitions with processor sharing while DPO is active.
  • On a system with an AMS partition, a problem was fixed for a Live Partition Mobility (LPM) migration failure when migrating from P9 to a pre-FW860 P8 or P7 system.  This failure can occur if the P9 partition is in dedicated memory mode, and the Physical Page Table (PPT) ratio is explicitly set on the HMC (rather than keeping the default value) and the partition is then transitioned to Active Memory Sharing (AMS) mode prior to the migration to the older system.  This problem can be avoided by using dedicated memory in the partition being migrated back to the older system.
VH920_057_057 / FW920.10

09/21/18
Impact:  New      Severity:  New

New Features and Functions
  • GA Level

4.0 How to Determine The Currently Installed Firmware Level

You can view the server's current firmware level on the Advanced System Management Interface (ASMI) Welcome pane. It appears in the top right corner. Example: VH920_123.


5.0 Downloading the Firmware Package

Follow the instructions on Fix Central. You must read and agree to the license agreement to obtain the firmware packages.

Note: If your HMC is not internet-connected you will need to download the new firmware level to a USB flash memory device or ftp server.


6.0 Installing the Firmware

The method used to install new firmware will depend on the release level of firmware which is currently installed on your server. The release level can be determined by the prefix of the new firmware's filename.

Example: VHxxx_yyy_zzz

Where xxx = release level

Instructions for installing firmware updates and upgrades can be found at https://www.ibm.com/support/knowledgecenter/9080-M9S/p9eh6/p9eh6_updates_sys.htm

IBM i Systems:

For information concerning IBM i Systems, go to the following URL to access Fix Central: 
http://www-933.ibm.com/support/fixcentral/

Choose "Select product", under Product Group specify "System i", under Product specify "IBM i", then Continue and specify the desired firmware PTF accordingly.

7.0 Firmware History

The complete Firmware Fix History (including HIPER descriptions)  for this Release level can be reviewed at the following url:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VH-Firmware-Hist.html

8.0 Change History

Date                Description
March 01, 2019      Fix description update for firmware level VH920_089_075 / FW920.24