SC840_079_056 / FW840.10
03/04/16 |
Impact: Availability
Severity: SPE
New features and functions
    - Support for a 256GB DDR4 memory DIMM.  Memory feature
code #EM8Y provides a total of 1024GB of memory with four 256GB
CDIMMs (1600 MHz, 4Gbit DDR4).  Note that DDR4 and DDR3
DIMMs cannot be mixed in a drawer.
- Support was added to block a full Hardware Management
Console (HMC) connection to the service processor when the HMC is at a
lower firmware major and minor release level than the service
processor. In the past, this check was done only for the major
version of the firmware release but it now has been extended to the
minor release version level as well. The HMC at the lower
firmware level can still make a limited connection to the higher
firmware level service processor. This will put the CEC in a
"Version Mismatch" state. Firmware updates are allowed with the
CEC in the "Version Mismatch" state so that the condition can be
corrected with either an HMC update or a firmware update of the CEC.
- Support for PowerVM vNIC with more vNIC client adapters for
each partition, up to 10 from a limit of 6 at the FW840.00 level.
PowerVM vNIC combines many of the best features of SR-IOV and PowerVM
SEA to provide a network solution with options for advanced functions
such as Live Partition Mobility along with better performance and I/O
efficiency when compared to PowerVM SEA. In addition PowerVM vNIC
provides users with bandwidth control (QoS) capability by leveraging
SR-IOV logical ports as the physical interface to the network.
    - Support for a 10-core 4.19 GHz POWER8 processor with
feature code #EPBS on the IBM Power System E880 (9119-MHE). This
feature provides a 40-core processor planar containing four ten-core
processor SCMs. Each processor core has 512KB of L2 cache and 8MB
of L3 cache.
    - The default setting for the "Enlarged I/O Memory Capacity"
feature was changed to disabled on newly manufactured E850, E870 & E880 models
to reduce hypervisor memory usage.  Customers using PCI adapters
that leverage "Enlarged I/O Memory Capacity" will need to explicitly
enable this feature for the supported PCI slots, using ASMI Menus while
the system is powered off.
System firmware changes that affect all systems
    - On multi-node systems with a power fault, a problem was fixed
for On-Chip Controller errors caused by the power fault being reported
as predictive errors for SRC B1602ACB. These have been corrected
to be informational error logs. If running without the fix, the
predictive and unrecoverable errors logged for the OCC on loss of power
to the node can be ignored.
- A problem was fixed for a system IPL hang at C100C1B0 with
SRC 1100D001 when the power supplies have failed to supply the
necessary 12-volt output for the system. The 1100D001 SRC
was calling out the planar when it should have called out the power
supplies. With the fix, the system will terminate as needed and
call out the power supply for replacement. One mode of power
supply failure that could trigger the hang is sync-FET failures that
disrupt the 12-volt output.
- A problem was fixed for a PCIe3 I/O expansion drawer
(#EMX0) not getting all error logs reported when its error log queue is
full. In the case where the error log queue is full with 16
entries, only one entry is returned to the hypervisor for
reporting. This error log truncation only occurs during periods
of high error activity in the expansion drawer.
- A problem was fixed for the callout of a VPD collection
fault and system termination with SRC 11008402 to include the 1.2vcs
VRM FRU.  A power good fault for the 1.2 volts would be a
primary cause of this error. Without the fix, the VRM is missing
in the callout list and only has the VPDPART isolation procedure.
- A problem was fixed for excessive logging of the SRC
11002610 on a power good (pgood) fault when detected by the Digital
Power Subsystem Sweep (DPSS). Multiple pgood interrupts are
signaled by the DPSS in the interval between the first pgood failure
and the node power down. A threshold was added to limit the
number of error logs for the condition.
- A problem was fixed for redundant logging of the SRC
B1504804 for a fan failure, once every five seconds. With the
fix, the failure is logged only at the initial time of failure in the
IPL.
- A problem was fixed to speed recovery for VPD collection
time-out errors for PCIe resources in an I/O drawer logged with SRC
10009133 during concurrent firmware updates. With the fix, the
hypervisor is notified as soon as the VPD collection has finished so
the PCIe resources can report as available.  Without the fix,
there is a delay as long as two hours for the recovery to complete.
- A problem was fixed to allow IPMI entity IDs to be used in
ipmitool raw commands on the service processor to get the temperature
reading. Without the fix, the DCMI entity IDs have to be used in
the raw command for the "Get temperature" function.
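For reference, IPMI and DCMI number the same thermal entities differently, which is why this fix matters. The lookup helper below is a hypothetical illustration (not an IBM tool); the ID values shown are representative entries from the IPMI 2.0 and DCMI V1.5 specifications:

```python
# Illustrative subset of entity IDs; values per the DCMI V1.5 and
# IPMI 2.0 specifications.  With the fix, either numbering can be used
# in ipmitool raw temperature requests to the service processor.

DCMI_ENTITY_IDS = {
    0x40: "Inlet temperature",
    0x41: "CPU temperature",
    0x42: "Baseboard temperature",
}

IPMI_ENTITY_IDS = {
    0x03: "Processor",
    0x07: "System board",
    0x37: "Air inlet",
}

def describe_entity(entity_id: int) -> str:
    """Resolve an entity ID against both numbering schemes."""
    if entity_id in DCMI_ENTITY_IDS:
        return f"DCMI: {DCMI_ENTITY_IDS[entity_id]}"
    if entity_id in IPMI_ENTITY_IDS:
        return f"IPMI: {IPMI_ENTITY_IDS[entity_id]}"
    return "unknown"
```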
- A problem was fixed for a false unrecoverable error (UE)
logged for B1822713 when an invalid cooling zone is found during the
adjustment of the system fan speeds. This error can be ignored as
it does not represent a problem with the fans.
- A problem was fixed for a processor clock failover error
with SRC B158CC62 calling out all processors instead of isolating to
the suspect processor. The callout priority correctly has a clock
and a procedure callout as the highest priority, and these should be
performed first to resolve the problem before moving on to the
processors.
- A problem was fixed for loss of back-level protection
during firmware updates if an anchor card has been replaced. The
Power system manufacturing process sets the minimum code level a system
is allowed to have for proper operation.  If an anchor card is
replaced, it is possible that the replacement anchor card is one that
has the Minimum MIF Level (MinMifLevel) given as "blank", and
this removes the system back-level protection. With the fix, blanks or
nulls on the anchor card for this field are handled correctly to
preserve the back-level protection. Systems that have already
lost the back-level protection due to anchor card replacement remain
vulnerable to an accidental downgrade of code level by operator error,
so code updates to a lower level for these systems should only be
performed under guidance from IBM Support.  The following command
can be run in the Advanced System Management Interface (ASMI) to
determine if the system has lost the back-level protection with the
presence of "blanks" or ASCII 20 values for MinMifLevel:
"registry -l cupd/MinMifLevel" with output:
"cupd/MinMifLevel:
2020202020202020 2020202020202020 [ ]
2020202020202020 2020202020202020 [ ]"
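As an illustration, the all-blanks check on that registry output can be automated. The helper below is hypothetical (not an IBM utility); it only tests whether every data byte in the dumped value is ASCII 0x20, which per the text above indicates lost back-level protection:

```python
# Hypothetical helper: returns True when the MinMifLevel value dumped by
# "registry -l cupd/MinMifLevel" consists entirely of ASCII 0x20 blanks.

def minmiflevel_is_blank(registry_output: str) -> bool:
    """Check whether every data byte in the ASMI hex dump is a space."""
    hex_words = []
    for line in registry_output.splitlines():
        # Data lines look like: "2020202020202020 2020202020202020 [        ]"
        for token in line.split():
            if len(token) == 16 and all(c in "0123456789abcdefABCDEF" for c in token):
                hex_words.append(token)
    if not hex_words:
        return False
    data = bytes.fromhex("".join(hex_words))
    return all(b == 0x20 for b in data)
```

Fed the sample output shown above, the helper reports the protection as lost; a MinMifLevel holding a real level string would report False.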
    - A problem was fixed for a code update error from FW830 to a
FW840 level that causes temperature sensors to be lost so that the ipmitool
command to list the temperature sensors fails with an IPMI program core
dump. If the temperature sensors are already corrupted due to a
preceding code update, this fix adds back in the temperature sensors to
allow the ipmitool to work for listing the temperature sensors.
    - A problem was fixed for a system checkstop caused by an L2
cache least-recently used (LRU) error that should have been a
recoverable error for the processor and the cache.  The cache
error should not have caused an L2 HW CTL error checkstop.
- A problem was fixed for a re-IPL with power on failure with
B181A40F SRC logged for VPD not found for a DIMM FRU. The DIMM
had been moved to another slot or just removed. In this
situation, an IPL of the system from power off will work without errors,
but a re-IPL with power on, such as that done after processing a
hardware dump, will fail with the B181A40F. Power off the system
and IPL to recover. Until the fix is applied, the problem can be
circumvented after a DIMM memory move by putting the PNOR flash memory
in genesis mode by running the following commands in ASMI with the CEC
powered off:
1) hwsvPnorCmd -c
2) hwsvPnorCmd -g
- A problem was fixed for the service processor becoming
inaccessible when having a dynamic IP address and being in DCMI
"non-random" mode for DHCP discovery by customer configuration.
The problem can occur intermittently during an AC power on of the
system. If the service processor does not respond on the network,
AC power cycle to recover. Without the fix, the problem can be
circumvented by using the DHCP client in the DCMI "random" mode for
DHCP discovery, which is the default on the service processor.
- A problem was fixed for priority callouts for system clock
card errors with SRC B158CC62. These errors had high priority
callouts for the system clock card and medium callouts for FRUs in the
clock path. With the fix, all callouts are set to medium priority
as the clock card is not the most probable FRU to have failed but is
just a candidate among the many FRUs along the clock path.
- A problem was fixed for a memory initialization error
reported with SRC BC8A0506 that terminates the IPL. This problem
is unlikely to occur because it depends on a specific memory location
being used by the code load. The system can be recovered from the error
by doing another IPL.
System firmware changes that affect certain systems
- On PowerVM systems
a problem was fixed to address a performance
degradation. The problem surfaces under the following conditions:
1) There is at least one VIOS or Linux partition that
is running with dedicated processors AND
2) There is at least one VIOS or Linux partition
running with shared processors AND
            3) There is at least one AIX or IBMi partition
configured with shared processors.
If ALL the above conditions are met AND one of the following actions
occur,
1) VIOS/Linux dedicated processor partition is
configured to share processors while active OR
2) A dynamic platform optimization operation (HMC
'optmem' command) is performed OR
3) Processors are unlicensed via a capacity on demand
operation
there is an exposure for a loss in performance.
- On systems using
PowerVM firmware, a problem was fixed for PCIe switch recovery to
prevent a partition switch failure during the IPL with error logs for
SRC B7006A22 and B7006971 reported. This problem can occur
when doing recovery for an informational error on the switch. If
this problem occurs, the partition must be restarted to recover the
affected I/O adapters.
- On systems using PowerVM firmware, a problem was fixed for
a concurrent FRU exchange of a CAPI (Coherent Accelerator
Processor Interface) adapter for a standard I/O adapter that results in
a vary off failure. If this failure occurs, the system needs to
be re-IPLed to fix the adapter. The trigger for this failure is a
dual exchange where the CAPI adapter is exchanged first for a standard
(non-like-typed) adapter. Then an attempt is made to exchange the
standard adapter for a CAPI adapter which fails.
- On systems using PowerVM firmware, a problem was fixed for
a CAPI (Coherent Accelerator Processor Interface) device going to
a "Defined" state instead of "Available" after a partition boot.
If the CAPI device is doing recovery and logging error data at the time
of the partition boot, the error may occur. To recover from the
error, reboot the partition. With the fix, the hypervisor will
wait for the logging of error data from the CAPI device to finish
before proceeding with the partition boot.
    - On systems using PowerVM firmware, a problem was fixed for
a hypervisor adjunct partition failure with "SRC B2009008 LP=32770" for
an unexpected SR-IOV adapter configuration. Without the fix, the
system must be re-IPLed to correct the adjunct error. This error
is infrequent and can only occur if an adapter port configuration is
being changed at the same time that error recovery is occurring for the
adapter.
- On systems using PowerVM firmware and PCIe adapters in
SR-IOV mode, the following problem was addressed with a Broadcom
Limited (formerly known as Avago Technologies and Emulex) adapter
firmware update to 10.2.252.1913: Transmit time-outs on a Virtual
Function (VF) during stressful network traffic.
- On systems using PowerVM firmware with an invalid P-side or
T-side in the firmware, a problem was fixed in the partition firmware
Real-Time Abstraction System (RTAS) so that system Vital Product Data
(VPD) is returned at least from the valid side instead of returning no
VPD data. This allows AIX host commands such as lsmcode,
lsvpd, and lsattr that rely on the VPD data to work to some extent even
if there is one bad code side. Without the fix, all the VPD
data is blocked from the OS until the invalid code side is recovered by
either rejecting the firmware update or attempting to update the system
firmware again.
    - On systems using PowerVM firmware without an HMC (and in
Manufacturing Default Configuration (MDC) mode with a single host
partition), a problem was fixed for missing dumps of type SYSDUMP,
FSPDUMP, LOGDUMP, and RSCDUMP that were not off-loaded to the host
OS. This is an infrequent error caused by a timing error that
causes the dump notification signal to the host OS to be lost.
The missing/pending dumps can be retrieved by rebooting the host OS
partition. The rebooted host OS will receive new notifications of
the dumps that have to be off-loaded.
- On systems using PowerVM firmware, a problem was fixed for
truncation of the memory fields displayed in the Advanced System
Management Interface on the COD panels. ASMI shows three fields
of memory called "Installed memory", "Permanent memory", and "Inactive
memory". The largest value that can be displayed in the fields
was "9999" GB. This has been expanded to a maximum of "999999" GB
for each of the ASMI fields. The truncation was only in the
displayed memory value, not in the actual memory size being used by the
system which was correct.
- On systems using PowerVM firmware and a partition using
Active Memory Sharing (AMS), a problem was fixed for a Live Partition
Mobility (LPM) migration of the AMS partition that can hang the
hypervisor on the target CEC. When an AMS partition migrates to
the target CEC, a hang condition can occur after processors are resumed
on the target CEC, but before the migration operation completes.
The hang will prevent the migration from completing, and will likely
require a CEC reboot to recover the hung processors. For this
problem to occur, there needs to be memory page-based activity (e.g.
AMS dedup or Pool paging) that occurs exactly at the same time that the
Dirty Page Manager's PSR data for that page is being sent to the target
CEC.
    - On systems using PowerVM firmware and having an IBM i
partition with more than 64 cores, a performance problem was fixed with
the choice of processor cores assigned to the partition.
This problem only applies to the E870 (9119-MME) and E880 (9119-MHE)
models.
- On systems using PowerVM firmware, a problem was fixed for
PCIe adapter hangs and network traffic error recovery during Live
Partition Mobility (LPM) and SR-IOV vNIC (virtual ethernet
adapter) operations. An error in the PCI Host Bridge (PHB)
hardware can persist in the L3 cache and fail all subsequent network
traffic through the PHB. The PHB error recovery was
enhanced to flush the PHB L3 cache to allow network traffic to resume.
- On systems using PowerVM firmware with AIX or Linux
partitions with greater than 8TB of memory, a problem was fixed for
Dynamic DMA Window (DDW) enabled adapters IPLing into a "Defined"
state, instead of "Available", and unusable with a "0" size DMA
window. If a DDW enabled adapter is plugged into an HDDW (Huge
Dynamic DMA Window) slot in a partition with the large memory size, the
OS changes the default DMA window to "0" in size. To prevent this
problem, the Advanced System Management Interface (ASMI) in the service
processor can be used to set "I/O Enlarged Capacity" to "0" (which is
off), and all the DDW enabled adapters will work on the next IPL.
- On a multi-node system, a problem was fixed for a
power fault with SRC 11002610 having incorrect FRU callouts. The
wrong second FRU callout is made on nodes 2, 3, and 4 of a multi-node
system. Instead of calling out the processor FRU, the enclosure
FRU is called out. The first FRU callout is correct.
- On PowerVM systems with partitions running Linux, a problem
was fixed for intermittent hangs following a Live Partition Mobility
(LPM) migration of a Linux partition. A partition migrating from
a source system running FW840.00 to a system running any other
supported firmware level may become unresponsive and unusable once it
arrives on the target system. The problem only affects Linux
partitions and is intermittent. Only partitions that have
previously been migrated to a FW840.00 system are susceptible to a hang
on subsequent migration to another system. If a partition is hung
following a LPM migration, it must be rebooted on the target system to
resume operations.
- On systems using OPAL firmware, a problem was fixed that
prevented multiple NVIDIA Tesla K80 GPUs from being attached to one
PCIe adapter. This prevented using a PCIe attached GPU
drawer. This fix increases the PCIe MMIO (memory-mapped I/O)
space to 1 TB from a previous maximum of 64 GB per PHB/PCIe slot.
    - On PowerVM systems with dedicated processor partitions with
low I/O utilization, a problem was fixed where the dedicated processor
partition could become intermittently unresponsive.  The problem can be circumvented by
changing the partition to use shared processors.
- On systems using OPAL firmware, a problem was fixed in OPAL
to identify the PCI Host Bridge (PHB) on CAPI adapter errors and not
always assume PHB0.
- On systems using OPAL firmware, a problem was fixed in the
OPAL gard utility to remove gard records after guarded components have
been replaced.  Without the fix, Hostboot and the gard utility
could be in disagreement on the replaced components, causing some
components to still display as guarded after a repair.
    - On systems using PowerVM firmware with partitions with a very
large number of PCIe adapters, a problem was fixed for partitions that
would hang because the partition firmware ran out of memory for the
OpenFirmware FCode device drivers for PCIe adapters. With the
fix, the hypervisor is able to dynamically increase the memory to
accommodate the larger partition configurations of I/O slots and
adapters.
- On PowerVM systems with vNIC adapters, a problem was fixed
for doing a network boot or install from the adapter using a VLAN
tag.  Without the fix, the support is missing for doing a network
boot with a VLAN tag from the SMS RIPL menu.
- On systems using PowerVM firmware, a problem was fixed for
a Live Partition Mobility (LPM) migration of a partition with large
memory that had a migration abort when the partition took longer than
five minutes to suspend. This is a rare problem and is triggered
by an abnormally slow response time from the migrating partition.
With the fix, the five minute time limit on the suspend operation has
been removed.
|
SC840_056_056 / FW840.00
12/04/15 |
Impact: New
Severity: New
New Features and Functions
NOTE:
- POWER8 (and later) servers include an “update access key”
that is checked when system firmware updates are applied to the
system. The initial update access keys include an expiration date
which is tied to the product warranty. System firmware updates will not
be processed if the GA date of the desired firmware level occurred
after the update access key’s expiration date. As these update
access keys expire, they need to be replaced using either the Hardware
Management Console (HMC) or the Advanced System Management Interface (ASMI) on
the service processor. Update access keys can be obtained via the
key management website: http://www.ibm.com/servers/eserver/ess/index.wss.
- Support for allowing the PowerVM hypervisor to continue to
run when communication between the service processor and platform
firmware has been lost and cannot be re-established.  An SRC
B1817212 may be logged and any active partitions will continue to run
but they will not be able to be managed by the management
console. The partitions can be allowed to run until the next
scheduled service window at which time the service processor can be
recovered with an AC power cycle or a pin-hole reset from the operator
panel. This error condition would only be seen on a system that
had been running with a single service processor (no redundancy for the
service processor).
    - Support in the Advanced System Management Interface (ASMI)
for managing certificates on the service processor with option "System
Configuration/Security/Certificate Management". Certificate
management includes 1) Generation of Certificate Signing Request (CSR)
2) Download of CSR and 3) Upload of signed certificates. For more
information on managing certificates, go to the IBM KnowledgeCenter
link for "Certificate Management"
(https://www-01.ibm.com/support/knowledgecenter/P8ESS/p8hby/p8hby_securitycertificate.htm)
- Support for concurrent add of the PCIe expansion drawer
(F/C #EMX0) and concurrent add of PCIe optical cable adapters (F/C EJ07
and CCIN 6B52). For concurrent add guidance, go to the IBM
KnowledgeCenter links for "Connecting a PCIe Gen3 I/O expansion drawer
to your system" (https://www-01.ibm.com/support/knowledgecenter/9119-MHE/p8egp/p8egp_connect_kickoff.htm?lang=en-us)
and for "PCIe adapters for the 9119-MHE and 9119-MME" (https://www-01.ibm.com/support/knowledgecenter/9119-MHE/p8hak/p8hak_87x_88x_kickoff.htm?lang=en-us).
- Support for concurrent repair/exchange of the PCIe3 6-slot
Fanout module for the PCIe3 Expansion Drawer, PCIe Optical Cable
adapters and PCIe3 Optical Cable. For concurrent repair/exchange
guidance for these parts, go to the IBM KnowledgeCenter link for
"Removing and replacing parts in the PCIe Gen3 I/O expansion drawer"(https://www-01.ibm.com/support/knowledgecenter/9119-MHE/p8egr/p8egr_emx0_kickoff.htm?lang=en-us).
Below are the feature codes for the affected parts:
#EMX0 - PCIe3 Expansion Drawer
#EMXF - PCIe3 6-Slot Fanout Module for PCIe3 Expansion Drawer (all
server models)
#EJ07 (CCIN 6B52) - PCIe3 Optical Cable Adapter for PCIe3 Expansion
Drawer
#ECC6 - 2M Optical Cable Pair for PCIe3 Expansion Drawer
#ECC8 - 10M Optical Cable Pair for PCIe3 Expansion Drawer
#ECC9 - 20M Optical Cable Pair for PCIe3 Expansion Drawer
    - PowerVM support for Coherent Accelerator
Processor Interface (CAPI) adapters.  The PCIe3 LP CAPI
Accelerator Adapter with F/C #EJ16 is used on the S812L (8247-21L) and
S822L (8247-22L) models.  The PCIe3 CAPI FlashSystem
Accelerator Adapter with F/C #EJ17 is used on the S814 (8286-41A)
and S824 (8286-42A) models.  The PCIe3 CAPI FlashSystem Accelerator
Adapter with F/C #EJ18 is used on the S822(8284-22A), E870(9119-MME),
and E880(9119-MHE) models. This feature does not apply to the
S824L (8247-42L) model.
- Management console enhancements for support of concurrent
maintenance of CAPI-enabled adapters.
- Support for
PCIe3 Expansion Drawer (#EMX0) lower cable failover, using lane
reversal mode to bring up the expansion drawer from the top
cable. This eliminates a single point of failure by supporting
lane reversal in case of problems with the lower cable.
- Expanded support of Virtual Ethernet Large send from IPv4
to the IPv6 protocol in PowerVM.
- Support for IBM i network install on a IEEE 802.1Q
VLAN.  The OS supported levels are IBM i 7.2 TR3 or later.
This feature applies only to S814 (8286-41A), S824(8286-42A), E870
(9119-MME), and E880 (9119-MHE) models.
- Support for PowerVM vNIC with up to six vNIC client
adapters for each partition. PowerVM vNIC combines many of the
best features of SR-IOV and PowerVM SEA to provide a network solution
with options for advanced functions such as Live Partition Mobility
along with better performance and I/O efficiency when compared to
PowerVM SEA. In addition PowerVM vNIC provides users with
bandwidth control (QoS) capability by leveraging SR-IOV logical ports
as the physical interface to the network.
Note: If more than six vNIC client adapters are used in a
partition, the partition will run, as there is no check to prevent the
extra adapters, but certain operations such as Live Partition Mobility
may fail.
- Enhanced handling of errors to allow partial data in a
Shared Storage Pool (SSP) cluster. Under partial data error
conditions, the management console "Manage PowerVM" GUI will correctly
show the working VIOS clusters along with information about the broken
VIOS clusters, instead of showing no data.
- Live Partition Mobility (LPM) was enhanced to allow the
user to specify VIOS concurrency level overrides.
- Support was added for PowerVM hard compliance enforcement
of the Power Integrated Facility for Linux (IFL). IFL is an
optional lower cost per processor core activation for Linux-only
workloads on IBM Power Systems. Power IFL processor cores can be
activated that are restricted to running Linux workloads. In
contrast, processor cores that are activated for general-purpose
workloads can run any supported operating system. PowerVM will
block partition activation, LPM and DLPAR requests on a system with IFL
processors configured if the total entitlement of AIX and IBMi
partitions exceeds the amount of licensed general-purpose
processors.  For AIX and IBMi partitions configured with uncapped
processors, the PowerVM hypervisor will limit the entitlement and
uncapped resources consumed to the amount of general-purpose processors
that are currently licensed.
- Support was added to allow Power Enterprise Pools to
convert permanently-licensed (static) processors to Pool Processors
using a CPOD COD activation code provided by the management
console. Previously, only unlicensed processors were able to
become Pool Processors.
- The management console was enhanced to allow a Live
Partition Mobility (LPM) if there is a failed VIOS in a redundant
pair. During LPM, if the VIOS is inactive, the management console
will use stored configuration information to perform the LPM.
- The firmware update process from the management console and
from in-band OS (except for IBM i PTFs) has been enhanced to download
new "Update access keys" as needed to prevent the access key from
expiring. This provides an automatic renewal process for the
entitled customer.
- Live Partition Mobility support was added to allow the user
to specify a different virtual Ethernet switch on the target server.
    - PowerVM was enhanced to support an AIX Live Update where
the AIX kernel is updated without rebooting the system.  The AIX
OS level must be 7.2 or later. Starting with AIX Version 7.2, the AIX
operating system provides the AIX Live Update function which eliminates
downtime associated with patching the AIX operating system. Previous
releases of AIX required systems to be rebooted after an interim fix
was applied to a running system. This new feature allows workloads to
remain active during a Live Update operation and the operating system
can use the interim fix immediately without needing to restart the
entire system. In the first release of this feature, AIX Live Update
will allow customers to install interim fixes (ifixes) only. For more
information on AIX Live Update, go to the IBM KnowledgeCenter
link for "Live Update"
(https://www-01.ibm.com/support/knowledgecenter//ssw_aix_72/com.ibm.aix.install/live_update_install.htm).
- The management console has been enhanced to use standard
FTP in its firmware update process instead of a custom
implementation. This will provide a more consistent interface for
the users.
- Support for setting Power Management Tuning Parameters from
the management console (Fixed Maximum Frequency (FMF), Idle Power Save,
and DPS Tunables) without needing to use the Advanced System Management
Interface (ASMI) on the service processor. This allows FMF mode
to be set by default without having to modify any tunable parameters
using ASMI.
- Support for a Corsa PCIe adapter with accelerator FPGA for
low latency connection using CAPI (Coherent Accelerator Processor
Interface) attached to a FlashSystem 900 using two 8Gb optical SR Fibre
Channel (FC) connections.
Supported IBM Power Systems for this feature are the following:
1) E880 (9119-MHE) with CAPI Activation feature #EC19 and Corsa
adapter #EJ18 Low profile on AIX.
2) E870 (9119-MME) with CAPI Activation feature #EC18 and Corsa adapter
#EJ18 Low profile on AIX.
3) S822 (8284-22A) with CAPI Activation feature #EC2A and Corsa
adapter #EJ18 Low profile on AIX.
4) S814 (8286-41A) with CAPI Activation feature #EC2A and Corsa adapter
#EJ17 Full height on AIX.
5) S824 (8286-42A) with CAPI Activation feature #EC2A and Corsa adapter
#EJ17 Full height on AIX.
6) S812L (8247-21L) with CAPI Activation feature #EC2A and Corsa
adapter #EJ16 Low profile on Linux.
7) S822L (8247-22L) with CAPI Activation feature #EC2A and Corsa
adapter #EJ16 Low profile on Linux.
OS levels that support this feature are PowerVM AIX 7.2 or later and
OPAL bare-metal Linux Ubuntu 15.10.
The IBM FlashSystem 900 storage system is model 9840-AE2 (one year
warranty) or 9843-AE2 (three year warranty) at the 1.4.0.0 or later
firmware level with feature codes #AF23, #AF24, and #AF25 supported
for 1.2 TB, 2.9 TB, 5.7 TB modules, respectively.
- The Digital Power Subsystem Sweep (DPSS) FPGA, used to
control P8 fan speeds and memory voltages, was enhanced to support the
840 GA level. This DPSS update is delayed to the next IPL of the CEC
and adds 18 to 20 minutes to the IPL. See the "Concurrent
Firmware Updates" section above for details.
- Support for Data Center Manageability Interface (DCMI) V1.5
and Energy Star compliance. DCMI features were added to the
Intelligent Platform Management Interface (IPMI) 2.0 implementation on
the service processor. DCMI adds platform management capability
for monitoring elements such as system temperatures, power supplies,
and bus errors.  It also includes automatic and manually driven
recovery capabilities such as local or remote system resets, power
on/off operations, and logging of abnormal or "out-of-range"
conditions for later examination.  In addition, it allows querying for
inventory information that can help identify a failed hardware unit
along with power management options for getting and setting power
limits.
Note: A deviation from the DCMI V1.5 specification exists for
840.00 for the DCMI Configuration Parameters for DHCP Discovery.
Random back-off mode is enabled by default instead of being
disabled. The random back-off puts a random variation delay in
the DHCP retry interval so that the DHCP clients are not responding at
the same time. Disabling the back-off time is not required for normal
operations, but if desired, the system administrator can override the
default and disable the random back-off mode by sending the “SET DCMI
Configuration Parameters” for the random back-off property of the
Discovery Configuration parameter. A value of "0" for the bit
means "Disabled".
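A minimal sketch of the bit manipulation involved is shown below. The byte stands in for the Discovery Configuration parameter sent with "Set DCMI Configuration Parameters"; the bit position chosen for the random back-off flag is an assumption made for illustration, so consult the DCMI V1.5 specification for the authoritative parameter layout.

```python
# Illustrative only: RANDOM_BACKOFF_BIT is an assumed position for the
# random back-off flag within the Discovery Configuration parameter
# byte; verify the real layout against the DCMI V1.5 specification.

RANDOM_BACKOFF_BIT = 7  # assumption, not taken from the specification

def set_random_backoff(param_byte: int, enabled: bool) -> int:
    """Return the parameter byte with the random back-off bit set or
    cleared; a value of 0 for the bit means "Disabled"."""
    if enabled:
        return param_byte | (1 << RANDOM_BACKOFF_BIT)
    return param_byte & ~(1 << RANDOM_BACKOFF_BIT) & 0xFF
```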
|