1. Power Systems Sales Certifications
C4040-252: IBM Certified Technical Sales Specialist - Power Systems with POWER8 Enterprise V1
C4040-251: IBM Certified Technical Sales Specialist - Power Systems with POWER8 Scale-out V1
C4040-250: IBM Certified Sales Specialist - Power Systems with POWER8 V1
5. S814 model
4-core processor: RAM 16GB to 64GB
Max 10x 2.5" disk bays & 1x DVD bay
6- or 8-core processor: RAM 16GB to 1TB
Supports rack-mount or tower configuration
4- or 6-core processor
PCIe slots: 7 Gen3 full-high (hot-pluggable)
PCIe Gen3 (x8 speed): 5 slots
PCIe Gen3 (x16 speed): 2 slots
Supports AIX, IBM i (7.1 or later), and Linux
Eligible for Capacity BackUp (CBU) for IBM i
6. S812L model
Supports Linux only
Bare-metal (Ubuntu only), PowerVM, or PowerKVM
10 or 12 cores
RAM 16GB to 512GB
PCIe slots: 6 Gen3 low-profile (hot-pluggable)
PCIe Gen3 (x8 speed): 4 slots
PCIe Gen3 (x16 speed): 2 slots
7. S822/S822L model
Network Equipment-Building System (NEBS) Level 3 compliant
Both sockets must be populated
16 or 20 cores
RAM 32GB up to 1TB
PCIe slots: 9 Gen3 low-profile (hot-pluggable)
PCIe Gen3 (x8 speed): 5 slots
PCIe Gen3 (x16 speed): 4 slots
PowerVM, PowerKVM
8. S824 model
Single/both sockets populated
Single socket : 6 or 8 cores
Both sockets : 12 or 16 or 24 cores
RAM 32GB up to 2TB
7 to 11 PCIe Gen3 full-high slots
Supports PowerVM only
Supports AIX, IBM i (7.1 or later), and Linux
Eligible for Capacity BackUp (CBU) for IBM i
9. S824L model
Two different configurations (Apr 2015 update)
With NVIDIA GPU (2 sockets required)
Without NVIDIA GPU (1 or 2 sockets)
Supported Linux OS: Ubuntu (with GPU), RHEL, SUSE
One or two sockets, 4U
Maximum of 16 DDR3 CDIMM slots: 2TB of system memory
Virtualization: PowerVM (non-GPU configuration only); PowerKVM not supported
Bare-metal mode: Ubuntu Linux operating system
11. Expansion drawers
Need more PCIe slots?
PCIe Gen3 I/O expansion drawer
Maximum slots
Need more disks?
EXP24S Gen2 SAS HDD/SSD expansion drawer
14. Enterprise Model – E870/E880
4x POWER8 SCM processors per system node
32 CDIMM slots (up to 4TB per system node)
8 PCIe Gen3 slots (x16 speed) per system node
Up to 192 processor cores (see the sketch after this list)
Optical interface to I/O drawer
No integrated SAS bays or SAS controllers in node
No integrated DVD bay or DVD controllers in node
No integrated Ethernet ports in node
No tape bay in node
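A quick sanity check on the 192-core maximum (a minimal Python sketch using only the figures above; the 12-core SCM and the 4-node scaling apply to the E880):

# Maximum core count for a fully populated E880, per the slide's figures
scm_per_node = 4        # POWER8 SCM processors per system node
cores_per_scm = 12      # largest SCM option on the E880
max_nodes = 4           # E880 scales to 4 system nodes
print(scm_per_node * cores_per_scm * max_nodes)  # -> 192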
24. E870/880 Racking - “Only Enterprise 42U Rack”
Only the IBM Enterprise rack (7014-T42 or #0553) has been tested/certified by IBM Development/Test with the E870/E880 as of Oct 2014. Therefore the 7014-T42 or #0553 is the only rack IBM Manufacturing uses with E870/E880 system nodes.
If a different rack (IBM or non-IBM) is desired, work with
IBM Service organization to confirm the rack has the
needed strength, rigidity, hole spacing, clearances, etc.
IBM Service does not require certification by IBM
Development/Test to be able to provide
service/warranty in other racks.
39. EXP24S SFF-2 drawer
24 HDD/SSD bays
Supports only 2.5" SFF-2 or Gen2 SAS drives
Redundant AC power supplies
Two power cords
Connect dual IOAs either to the SAS ports at the rear of the POWER8 server or through PCIe SAS adapters
41. CAPI : Coherent Accelerator
Processor Interface
Allows an accelerator plugged into a PCIe slot to access the processor bus using a low-latency, high-speed protocol interface
Seen as an additional processor in the system
Ability to access shared memory blocks directly
Customized functions in an FPGA/ASIC attach through CAPI
Can only be inserted into PCIe Gen3 x16 slots
Applications: aerodynamic analysis, big data analytics, radiotherapy treatments, scientific and weather-forecast apps, etc.
49. E870 / E880 List of supported
PCIe adapters
As of October 2014
IBM Knowledge Center for the latest update
IBMers:
http://w3.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105846
Partners:
http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD105846
50. POWER8 Memory Card
Memory features : 16GB / 32GB / 64GB / 128GB
DDR3 1600 MHz
RAS - Memory Sparing
8 CDIMMs/cards per socket
Memory is not hot-pluggable
Up to 8TB (E870) or 16TB (E880)
Up to 85.3GB per core (worked out in the sketch below)
ChipKill Correction
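The memory maximums and the GB-per-core figure follow directly from the slide's numbers (a minimal Python sketch; 128GB is the largest CDIMM feature listed above):

# Maximum memory: 8 CDIMMs per socket x 128GB CDIMMs
cdimms_per_socket = 8
gb_per_cdimm = 128
sockets_per_node = 4
e880_nodes = 4
total_gb = cdimms_per_socket * gb_per_cdimm * sockets_per_node * e880_nodes
print(total_gb // 1024)           # -> 16 TB on a 4-node E880
print(round(total_gb / 192, 1))   # -> 85.3 GB per core at 192 cores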
51. Cache
L1 cache
32KB per core for instruction cache
64KB per core for data cache
512KB of L2 cache per core
8MB of L3 cache per core (shareable among cores)
16MB of L4 cache per CDIMM
1 socket can populate 8 CDIMMs ≈ 128MB of L4 cache
2 sockets will have 256MB of L4 cache (see the sketch below)
L4 cache resides within the CDIMM
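L4 cache simply scales with CDIMM count (a minimal Python sketch of the arithmetic above; the 32-CDIMM case comes from the E870 speaker notes):

# L4 cache totals: 16MB per CDIMM
mb_per_cdimm = 16
print(8 * mb_per_cdimm)    # -> 128MB with 1 socket (8 CDIMMs)
print(16 * mb_per_cdimm)   # -> 256MB with 2 sockets
print(32 * mb_per_cdimm)   # -> 512MB per E870 node (speaker notes)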
57. Graphics Processing Units (GPU) /
Compute Intensive Accelerators
Developed by NVIDIA
Offload processor-intensive operations to the GPU accelerator
Based on NVIDIA Tesla K40 adapters
Occupies two PCIe slots (x16 slots only)
Only Linux supported (Ubuntu with S824L)
58. Single Root I/O Virtualization
(SR-IOV)
Allows multiple OSes to simultaneously share a PCIe adapter
Requires PowerVM Standard or Enterprise Edition
Self-virtualizing
Adapter consolidation
Virtualization done in hardware rather than software
AIX, Linux, and IBM i support
QoS
59. PowerSC – Security
Designed for enterprise security and compliance in virtualized infrastructures and cloud environments
Provides pre-built system profiles with security and
compliance automation
Security and Compliance functionality
Security and Compliance Automation
Trusted Boot
Trusted Firewall
Trusted Logging
Real-Time Compliance
Trusted Network Connect and Patch Management
Trusted Surveyor
60. PowerVC : Virtualization
Center
Simplify management of virtual resources in POWER
environment
VM image capture, deployment, resizing and management
Policy-based VM placement
VM Mobility
Management system
Integrated management (storage, network, compute)
No-menus interface with three simple configuration steps
Built on OpenStack
Manage AIX, IBM i and Linux VMs
Support both PowerVM and PowerKVM virtualization
61. PowerVP : Virtualization
Performance
Integrated with the Power Hypervisor
Easy-to-read display using a GUI dashboard
Supports AIX, IBM i and Linux
Assists in the following ways:
Show workloads in real time
Find and display bottlenecks
Can replay saved historical data
Resolve performance-related issues
Address future issues that can affect performance
63. Active Memory Expansion
(AME)
Expand memory beyond physical limits
Gains will vary based on the workload and the level of memory expansion being used
Costs additional CPU utilization (see the sketch below)
Only available for the AIX operating system
Significantly lower price than on the Power 795
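A minimal sketch of the expansion idea; the 64GB size and the 1.5 expansion factor are illustrative values, not from the slides, since the real factor is workload-dependent:

# Active Memory Expansion: effective memory = physical memory x expansion factor
physical_gb = 64          # illustrative LPAR memory
expansion_factor = 1.5    # illustrative; actual factor depends on compressibility
print(physical_gb * expansion_factor)  # -> 96.0 GB visible to the AIX LPAR
# The gain is paid for with extra CPU cycles spent compressing/decompressing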
65. Virtual I/O Server
Part of PowerVM
Virtualize network (LAN) and storage (SAN)
Shares a single physical resource among client partitions
Present disks as vSCSI to client partitions
Present network as virtual Ethernet to client partitions
Configure SEA to bridge virtual Ethernet to physical Ethernet
A single VIOS can provide network and storage redundancy to client partitions
Dual VIOS for VIOS redundancy
66. Live Partition Mobility (LPM)
Require PowerVM Enterprise Edition
Ability to move LPAR from server to server – live!
Avoids downtime when a server requires maintenance patching and a reboot
Provides applications with 24x7 uptime
The moving LPAR can contain only virtual resources:
Virtual SCSI
Virtual Fibre Channel (VFC)
Virtual Ethernet
Default virtual serial adapters
67. ChipKill Memory Correction
Advanced error checking and correcting (ECC) memory technology
Reduces the chances of system downtime caused by memory chip failures
Similar to RAID for the disk subsystem (see the sketch below)
Each CDIMM has 10 DRAM modules
8 for data
1 for parity data
1 for spare
Fault tolerance of up to 4 failed DRAM modules
Standard ECC measured at 91%, Chipkill at 99.94%
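The RAID analogy can be shown with simple XOR parity. This is a toy Python sketch, not IBM's actual Chipkill algorithm (which uses stronger symbol-based ECC): lose one data module and rebuild it from the parity module plus the survivors.

# Toy parity reconstruction: 8 "data modules" and 1 XOR "parity module"
from functools import reduce
data = [0x3A, 0x7F, 0x01, 0x9C, 0x55, 0xE2, 0x10, 0x88]   # 8 data bytes
parity = reduce(lambda a, b: a ^ b, data)                  # parity module

failed = 3                                   # suppose DRAM module 3 fails
survivors = data[:failed] + data[failed + 1:]
recovered = reduce(lambda a, b: a ^ b, survivors, parity)
assert recovered == data[failed]             # lost byte rebuilt from parity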
68. First-Failure Data Capture
Root cause isolation
Fault information will be collected
Re-creating diagnostic tests for failures or requiring user intervention is unnecessary
69. Scale Out RAS
Processor Instruction Retry
Alternate Processor Recovery
Selective Dynamic Firmware Updates
ChipKill Memory
Error Correcting Code (ECC) L2 and L3 cache
Service processor with fault monitoring
Hot-swappable disk bays
Hot-pluggable and redundant power supplies and cooling
fans
Dynamic processor deallocation
Extended error handling on PCI slots
70. Enterprise systems RAS
details
Active Memory Mirroring
Dynamic Processor Sparing
Dynamic Memory Sparing
Redundant Service Processor
Redundant System Clock Cards
71. New RAS Features in POWER8
Enterprise memory recovery methodology
Previously only on Enterprise power servers
Integrated power and cooling monitoring on the POWER8 processor
PCIe controller integrated in the processor, eliminating the external I/O hub controller
Previously a proprietary GX++ interface was used
PCIe HotPlug added for serviceability
72. OpenPOWER Foundation
Founded 2013
Enables member companies to customize POWER CPU processors and system platforms for optimization and innovation for their business needs
Supports System p, 64-bit versions of Linux, and the KVM hypervisor
113 members
73. IBM Cloud Manager with
OpenStack
Entry level to private cloud
Formerly SCE (SmartCloud Entry)
Provides a flexible and easy way to manage cloud infrastructure
Self-service provisioning of infrastructure services
Capture & manage standard VM images with support for common business processes
Starting and shutting down servers
Resizing existing servers
Track/correlate the cost of infrastructure to department usage via basic usage metering
74. IBM Cloud Orchestrator
Based on open standards (OpenStack)
Advanced level; builds on top of IBM Cloud Manager
Reduces the number of steps to manage public and hybrid clouds by using an easy-to-use interface
Access to ready-to-use patterns
Quickly deploy and scale on-premise and off-premise cloud
services.
Provision and scale cloud resources.
Reduce administrator workloads and error-prone manual IT
administrator tasks.
Integrate with existing environments using application program interfaces and tooling extensions.
Deliver services with IBM SoftLayer, existing OpenStack
platforms, PowerVM, IBM System z, VMware or Amazon EC2.
77. Capacity on Demand (CoD)
(1 of 2)
Static CoD :
Activations are permanent
Not eligible to be relocated
Mobile CoD
Activations are permanent
Eligible to be relocated (within defined enterprise pool)
Elastic (On/Off) CoD
Activations are temporary
Not eligible to be relocated.
Covers 24 hours from activation time
Utility
Similar to Elastic
Charged by the processor minute (see the billing sketch below)
Power IFLs
For use in Linux partitions only
Activations are permanent
Not eligible to be relocated
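For Utility CoD the unit of charge is the processor minute; a minimal sketch of the idea (the usage and rate numbers are made up for illustration, not IBM pricing):

# Utility CoD: temporary capacity above the shared pool, billed per processor-minute
used_processor_minutes = 4500    # hypothetical spike usage in a quarter
price_per_minute = 0.02          # hypothetical rate, not an IBM price
print(used_processor_minutes * price_per_minute)  # -> 90.0 billed for the quarter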
79. POWER Enterprise Pool
Dynamically move processor and memory activations
between systems within a defined pool
Requires the Mobile feature of Capacity on Demand (CoD)
Supported platform : 770, 780, 795, E870 and E880
Mixed processor generations within a pool type
Applications:
LPM: partition relocation; mobile activations shifted between LPM servers
PowerHA: when the primary server fails, move mobile activations to the backup server
Rebalance server capacity: mobile activations moved between servers to make the best use of CPU core and memory resources (see the sketch below)
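A minimal sketch of what moving mobile activations means, mirroring the worked example in the speaker notes (three 16-core servers with 4 mobile activations each; the server names are hypothetical):

# Three 16-core servers, 4 mobile core activations each (12 in the pool)
pool = {"E870-A": 4, "E870-B": 4, "E870-C": 4}   # hypothetical pool members

def move_activations(pool, src, dst, n):
    # Shift n mobile core activations from src to dst within the pool
    assert pool[src] >= n, "cannot move more activations than a server holds"
    pool[src] -= n
    pool[dst] += n

# Consolidate all 12 activations onto one server for a heavy workload
move_activations(pool, "E870-B", "E870-A", 4)
move_activations(pool, "E870-C", "E870-A", 4)
print(pool)   # -> {'E870-A': 12, 'E870-B': 0, 'E870-C': 0}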
80. POWER Enterprise Pool (PEP)
- Configuration compliance
A single server can be a member of only one Power Enterprise Pool at a time
Customers must complete both PEP agreements
A server must join a Power Enterprise Pool type based on its processor class:
Mid-range: 770 and E870
High-end: 780, 795 and E880
Each pool will tie to only one master HMC that manages
all its resources
Master HMC holds the key
All HW & SW maintenance coverage must be consistent
81. RoCE: RDMA over Converged Ethernet
First introduced on z Systems as a memory-to-memory copy network connection
Introduced to AIX 7 with the PCIe2 10GbE RoCE adapter
As a remote direct memory access (RDMA)-capable device only
Provides better performance than a NIC adapter for network-intensive applications
Supported modes and types
82. PowerCare - Enterprise models only
Provides technical consultation at no additional charge
Service options include:
Enterprise Systems Optimization
Power Server Virtualization including LPM
Power Systems Availability
SAP HANA
PowerHA SystemMirror
And many more (http://www-03.ibm.com/systems/power/support/powercare/)
83. More Reading
ATS blogs and others
DeveloperWorks
PartnerWorld
Services & Nigel Griffiths
Redbooks
Sales manual
Summary of Features & Functions
Videos – IBM System Lab
Editor's notes
Released March 2015
E870 SCM
E880 SCM
E850 DCM ^_^ - May 2015
Only S814 and S824 support IBM i 7.1 or later
CBU available only on S814 and S824
Bare metal Ubuntu only on S812L and S824L
NEBS Level-3 and ETSI capability
- applicable to S822/S822L only
- designed for clients that require hardened infrastructure
- designed for extreme shock, vibration, and thermal conditions that exceed normal datacenter standards
- usually telco (ibm.com/power/solutions/industry/telco)
CBU - It's like DR/HA for IBM i
- Really helps customers save cost
Before Apr 2015, the S824L always included one or two GPUs
With GPU maximizes performance and efficiency for scientific, engineering, Java, Big Data analytics, and other technical computing workloads
If the S824L was initially ordered without a GPU, adding a GPU to the server later is not supported (Redbooks)
The S824L supports Linux from several distributors
Memory features supported : 16GB, 32GB, 64GB and 128GB
Additional PCIe Gen3 I/O expansion drawer: if more PCIe slots are required
Add the same model of PCIe drawer that we can attach to enterprise models
Min of 1 and max of 2; 2 drawers require 2 sockets
HMC or IVM can manage the S824L
If neither is used, the S824L can run in bare-metal mode, meaning a single partition owns all server resources and only the Ubuntu OS can be installed
The superscript a on bare-metal means some PowerVM features are not supported; please refer to the Redbook to find out more
More PCIe slots
Means more Ethernet ports, FC ports, IF ports and even more CXP adapters to EXP24S, which in turn means more disks
The biggest-capacity disk offerings for the scale-out models are the following:
- ESD8: 1.1TB 10K RPM SAS SFF-3 Disk Drive (IBM i)
- ESD9: 1.2TB 10K RPM SAS SFF-3 Disk Drive (AIX/Linux)
Split backplane provides better performance because 2 SAS controllers are utilized
EJ0P has SSDs, thus again better performance over EJ0N+EJ0S
This option is also available for the Linux models, the 2U models
They use different FCs: #EL3T, #EL3T+#EL3V and #EL3U
Mdl 41A is the S814 model
Mdl 42A is the S824 model
Available for AIX / Linux / VIOS
Only S824 and S822 (2 socket based)
Create RAID with ET (using SAS RAID Array Manager)
4 sockets x 12 cores = 48 cores per node; x 4 system nodes = 192 cores total configurable on the E880
Minimum one, two for redundancy
FSP (flexible service processor) connection to the respective system node
Clock to the respective system node to provide time synchronization (optical cable)
Sample configuration by IBM
Only 7014-T42 or #0553 rack tested and certified to work with E870/E880 as of Oct 2014
The only racks used with E870/E880 system nodes
42U rack 19" MES same-serial-number uses #
If a different rack (IBM or non-IBM) is desired, work with the IBM Service organization to confirm the rack has the needed strength, rigidity, hole spacing, clearances, etc. Use the chargeable de-racking feature #ER21 if the client doesn't want the 7014-T42 or #0553
PDU : power distribution unit
Use the “De-racking” feature #ER21 if the client doesn't want the 7014-T42 or #0553 it was built/tested in (priced at zero)
IBM i 7.1 onward
Ubuntu only on S824L
This applies to POWER6 as well:
Upgrade POWER6 to POWER7+, then POWER7+ to POWER8
Additional 12 PCIe Gen3 slots can be attached to each system node
Through an optical cable pair (intra-rack 3.0m or inter-rack 10.0m)
The server must have PCIe optical cable adapters
Generally, the PCIe optical cable adapter can be in any PCIe adapter slot in the Power server. However, IBM advises following the slot priorities; check the respective Redbooks of the POWER8 model. For example, on the S824L use slot priorities 2, 8, 4, 6, 1, 7, 3, 5
4U in size
Full-high PCIe slots
Slots are hot-pluggable (using a cassette type)
But the modules are not hot-pluggable
The S812L model can hook up to half (one fan-out module) of a PCIe Gen3 I/O expansion drawer
Most models:
- In 2014, 1 fan-out module is an invalid configuration
- In fact, each system node must connect to two I/O drawers, no more, no less
Either completely without any I/O drawer, or 2 for each system node
No single drawer nor 3 drawers
With two system nodes, you can't connect to 3 drawers or 4 drawers
So you wonder: after you attach those drawers, how many slots will be available? (See the sketch below.)
By default, each system node comes with 8 (x16 speed) PCIe Gen3 slots
Then we can add drawers to it
On a single node, with the expansion drawers added, we will have 28 PCIe slots: 24 from the drawers, 4 from the node
Note why only 4 from the node: 4 slots will hold the CXP adapters with optical ports that connect the drawers to the system node
On two nodes, with the expansion drawers added, we create redundancy by connecting the same drawer to both nodes
Thus 36 slots: 24 from the drawers, 6 from each node, because now we use only 2 slots from each node to connect to the drawers
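A sketch of the slot arithmetic above, using only the numbers from these notes:

# PCIe slot count after attaching the mandatory pair of I/O drawers
slots_per_node = 8       # each node ships with 8 x16 Gen3 slots
slots_per_drawer = 12    # 2 fan-out modules x 6 slots

# Single node driving both drawers: 4 node slots hold CXP optical adapters
print(2 * slots_per_drawer + (slots_per_node - 4))       # -> 28

# Two nodes sharing both drawers: each node spends only 2 slots on CXP adapters
print(2 * slots_per_drawer + 2 * (slots_per_node - 2))   # -> 36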
SOD: Statement of Direction (like a roadmap)
AOC: CXP 16X Active Optical Cables
Example here:
Left: single node to 2 drawers
Note: each single blue/green/etc. line depicts two physical AOC cables to each drawer for redundancy (availability)
Right: two nodes to 2 drawers
To eliminate a single point of failure, two connections must go to each drawer from each system node
Drawers can be in the same or a different rack from the system nodes
If large numbers of I/O cables are attached to the PCIe adapters, it's good to have the I/O drawers in a different rack for ease of cable management
SCU and HMC not shown for visual clarity
With 2 sockets, there will be more PCIe slots
Enterprise models
First from left: must have redundant AOC to each drawer
Second from left: an invalid configuration per IBM
Third from left: 3 drawers is not valid
Fourth from left: each system node can connect to a maximum of two drawers
In other words, min/max will be two drawers per system node; no less, no more
The 12x PCIe Gen3 drawer bandwidth is far better than the POWER7+ 12x-attached I/O drawers
- On POWER7+, all 10 or 20 PCIe Gen1 x8 slots share one GX++ bus; each GX++ has a theoretical max of 20GB/s bandwidth
On POWER8, each fan-out module has a theoretical max of 32GB/s to share across just 6 PCIe x16 or x8 slots
So a drawer with 2 fan-out modules has a max of 64GB/s (220% larger than POWER7 single dedicated GX++ slots)
The Redbooks already show this as supported, but not sure since when
The SFF bays only support Gen2, thus Gen3 disks from the server do not fit
Don't try to pull disks out of an S812 or S824 and insert them into this drawer, or vice versa; they don't fit
These units were supported all along on POWER7 and POWER7+ servers
IOA: I/O adapter
C4040-252 tests this.
Download redbooks - https://www-03.ibm.com/support/techdocs/atsmastr.nsf/5cb5ed706d254a8186256c71006d2e0a/74be24b51525ec07862578cd0059ca5a/$FILE/ATS_EXP24S_Deepdive_R06.pdf
With IBM i, EXP24S must be ordered as one set of 24 bays (mode 1)
Applications that use this:
GPU accelerator cards
Heavy-duty algorithmic applications (aerodynamic analysis, big data analytics, radiotherapy treatments, scientific and weather-forecast apps, etc.)
Lots of flash drives: see next slide
Accelerators plugged into a PCIe slot access the processor bus using a low-latency, high-speed protocol interface
Applications can have customized functions in field-programmable gate arrays (FPGAs) and enqueue their work on the CAPI processor
From a practical perspective, CAPI is seen as an additional processor in the system. CAPI allows specialized hardware to access main system memory and perform coherent communication with other processors in the system
ASIC: application-specific integrated circuit
Connects to an x16 speed PCIe slot, and the CAPP on the POWER8 communicates directly with it
CAPI hardware: the CAPP protocol and the PSL (Power Service Layer)
The PSL is on the daughter-card chip; the CAPP is on the POWER8 board
S822L with CAPI plus FlashSystem:
IBM solution for Hadoop
IBM solution for NoSQL
For those who have questions on the different versions of PCIe in terms of performance, here is a snippet:
Logically, each next generation doubles the bandwidth and speed
Physically they all look the same
Careful though: in POWER8 scale-out and enterprise there are two options, full-high and low-profile
But when connected to POWER8, the design is slightly different
x16 means more lanes compared to x8
POWER7+ only uses up to x8 slots
The enterprise models have only x16 speed PCIe Gen3 slots, but of the low-profile type
Whereas the scale-out models have two speeds, x8 and x16, of the full-high type
See the next slide for those LP PCIe adapters
Use this slide in case students ask about compatibility issues
Thus the maximums:
1S: 8 cards x 128GB = 1TB
2S: 16 cards x 128GB = 2TB
812: 1 socket x 8 cards x 64GB = 512GB
814: 1 socket x 8 cards x 128GB = 1TB
822: 2 sockets x 8 cards x 64GB = 1TB (this model supports only the 16GB/32GB/64GB memory features)
822L: 2 sockets x 8 cards x 64GB = 1TB (this model supports only the 16GB/32GB/64GB memory features)
824: 2 sockets x 8 cards x 128GB = 2TB
E870: 4 sockets x 8 cards x 128GB x 2 nodes = 8TB
E880: 4 sockets x 8 cards x 128GB x 4 nodes = 16TB
On almost all POWER8 models, memory must be ordered in pairs, except the 1S models, where a minimum single 16GB CDIMM is orderable
POWER7/7+ was 1066MHz DDR3
No CoD or CUoD is implemented on memory
CDIMMs: custom dual inline memory modules
Chipkill is like RAID for memory
Each CDIMM has 10 DRAM modules/chips:
8 DRAM modules hold data
1 DRAM module holds parity
1 DRAM module works as a spare
At the time of writing, POWER8 supports only DDR3; there is an SOD on DDR4
L4 cache sits within the memory buffer chip and reduces memory latency for local access to memory behind the buffer chip; the operation of L4 cache is not apparent to applications running on the POWER8 processor
L4 cache is implemented together with the memory buffer in the CDIMM. Each memory buffer contains 16MB of L4 cache. On the Power S824L you can have up to 256MB of L4 cache
On the E870, one node will have up to 32 CDIMMs, which means 512MB of L4 cache (32 CDIMMs x 16MB)
On L4 cache:
Each core can carry up to 16MB
1 socket has 2 memory controllers
Each memory controller connects to 4 memory channels
A relative measure of performance of systems running the IBM i operating system. Performance in client environments may vary; the value is based on maximum configurations
A computer benchmark that evaluates the relative OLTP performance of servers based on IBM POWER microprocessors. It is published by IBM and derived from a model based on specific workloads, plus benchmarks from the Transaction Processing Performance Council and SPEC. These are benchmark values
AIX commands, maybe Linux has this too. Not sure
Application benchmark running SAP
Source www.spec.org Standard Performance Evaluation Corporation
The whole idea is to offload operations from the CPU and thus increase performance
Maximize performance and efficiency for all types of scientific, engineering, Java, big data analytics, and other technical computing workloads
Allows multiple OSes to simultaneously share a PCIe adapter with little or no runtime involvement from the hypervisor or other virtualization intermediary
- no need for VIOS
Assigns a slice of a single physical adapter to multiple LPARs through logical ports
Almost all OSes with the correct service pack support this, even IBM i 7.1 TR10 onward
This feature was already available on the POWER7+ 770/780 with the proper software/firmware levels, with the copper twinax and SR optical adapters
As long as firmware level 780 (POWER7) or SC820_067 for POWER8
- Some redbooks say S814 and S824 support this.
- E870/E880 supports this
- 770 and 780
Introduced 2011
Policy-based VM placement
- to help improve usage
VM mobility with placement policies
To help reduce the burden on IT staff via a simplified GUI
Management system that manages existing virtualization deployments
Definitely not for cloud; thus different from IBM Cloud Manager and IBM Cloud Orchestrator
Collects accurate performance information about VMs running on your POWER system, uses that information to identify bottlenecks, and then, where possible, resolves them. All in this feature
Best part: it's part of the hypervisor now
Background data collection can be used when the GUI is not active
Can even drill down and view specific adapter, bus, or CPU usage
Find and display bottlenecks
Through a simple dashboard that shows the performance health of the system
Show workloads in real time
Which highlights possible problems or bottlenecks (overcommitted resources)
- Firmware level 770 or 780 or later is a prerequisite of PowerVP
Installs and runs on a Windows 7 workstation to communicate with PowerVM
Additional CPU consumption when AME is in use
AME is enabled on the LPAR, and a multiplier factor value is configured
The OS (AIX) will compress LPAR memory when it is not active and return idle physical memory to the pool, so this portion of memory can later be transferred to another LPAR when it needs more memory
Complete listing of supported OSes under PowerVM/PowerKVM per model:
https://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaam/liaamvirtoptions.htm
PowerLinux machines such as the 812L and 822L can run either PowerVM or PowerKVM
They can also run in bare-metal mode if installed with Ubuntu, meaning all hardware resources go under Ubuntu's control as a single-partition system
The S824L is purely Ubuntu bare-metal mode
But somewhere I read that if the system is configured without GPU cards, PowerVM is supported (only for RHEL and SUSE)
Else, configured with GPU, it must go with Ubuntu
Other models support only PowerVM
Kimchi
Newest item on KVM menu
Simplified management tool
Please read the IBM PowerVM Virtualization Introduction and Configuration - Jun 2013.pdf Redbook
Live Partition Mobility.pdf
http://www.redbooks.ibm.com/abstracts/sg247460.html?Open
For POWER8 systems, IBM i requires IBM i 7.1 TR9 or IBM i 7.2 TR1 (Redbook)
Each CDIMM comes with:
2 ports
10 DRAM modules (8 for data, 1 for parity checking, 1 as an additional spare)
By default, depending on how DRAM modules fail, it may be possible to tolerate up to 4 DRAM modules failing on a single CDIMM without needing to replace it
Standard ECC measured at 91%, Chipkill at 99.94%
- These measurements were collected at IBM development labs over simulated 3-year workloads (on a Power 720)
A good FFDC design means error checking and coordination of faults, so that faults are detected and the root cause of a fault is isolated. All this is done without intervention by an IBM SSR
Necessary fault information will be collected at the time of failure without needing to re-create the problem or run an extended tracing or diagnostic program
AMM
It's really meant to hold the hypervisor's operational data
In a failure of the CDIMM where the hypervisor data is hosted, the hypervisor can become inoperative
AMM writes the memory blocks used by the hypervisor to two distinct CDIMMs
The hypervisor generally uses from 256MB to 300GB, depending on the number of LPARs and the features turned on on the servers
Was available on the Power 795, then the 770; never on midrange or entry-level models. Same goes here
RSP
- Main component of the system, responsible for IPL, setup, monitoring, control, and management of the system
Even a single system node has dual SPs on the SCU
RSC
Responsible for performing time synchronization/clock-signal sync for the whole system; this is crucial to system operations
Located in the SCU as two redundant system clock cards
DPS
Is CUoD-capable; requires available inactive CUoD processor cores
Activates an inactive processor in the event of a processor failure in an LPAR
Happens automatically when used with DLPAR
DMS
Is CUoD-capable
Activates inactive memory in the event of failed memory
Self-service: end users can re-deploy virtual servers with an easy-to-use interface
Complements IBM Systems Director VMControl
CoD generally applies to Enterprise models
Static CoD
Activations are permanent and tied to a specific Power server
Not eligible to be relocated; they must remain on the specific server they were originally activated on
Mobile
- Mobile activations must use an enterprise pool to move processor or memory between servers in the defined pool
- Licenses are priced higher than static because mobile activations provide greater flexibility
Processor activations are ordered in increments of 1; memory is activated in increments of 100GB
Static activations can be converted into mobile activations through a standard upgrade order at any time
Elastic activations are temporary and also tied to a specific server
Purchased by usage and billed quarterly
IFL.
When ordered, 4 processors, 32GB of memory, and 4 PowerVM for Linux licenses are activated at very low cost
Introduced on z Systems and brought over to Power Systems
The whole idea is to provide lower-cost processor core activation for Linux workloads
Requires the HMC enhanced functions and a specific tech-preview interface, HMC v8.20
Trial CoD: evaluate the use of inactive processor cores, memory, or both at no charge. After it is started, the trial period is available for 30 power-on days
Utility CoD
Use it when you have unpredictable short workload spikes
Provides additional processors on a temporary basis within the shared processor pool
Use is measured in processor-minute increments and is reported at the Utility CoD website
Requires PowerVM Enterprise Edition
Capacity BackUp (CBU)
Used with IBM i only
Provides an off-site DR server using On/Off CoD capabilities
Once all servers are in the same pool, we can move processors or memory (mixed clock speeds)
And then we can perform LPM to migrate live running workloads between systems
Example: each of the three servers comes with 16 cores
Only 4 are activated on each, so in total I would have 12 cores of mixed processor clocks/speeds
With Mobile CoD and PEP, I can simply run a workload on the Power E870 with all 12 cores activated there, which makes the other 2 servers just off-site DR machines
HMC v7.8
All HW & SW maintenance coverage must be consistent across pool
One pool ties to one HMC
One HMC can manage multiple pools
HMC master:
Stores the data included in the customer XML CoD configuration file, which defines the servers' mobile activations
Controls where the activations are assigned
Performs resource movement and is also aware of each server's static resources
A second, redundant backup HMC can be configured for availability; this backup can only view operations
If the master HMC fails, running servers will not be impacted; you just can't move resources or perform pool operations on servers within the pool
HMC minimum version 7.8, 2GB of RAM
# lsdev -Cc adapter | grep -i roce
roce0 Available 00-00-00 PCIe2 10GbE RoCE Converged Network Adapter
hba0 Available 00-00-00 PCIe2 10GbE RoCE Converged Host Bus Adapter
There is no Redbook on RoCE for Power Systems
It is mostly covered in the Redbooks for z Systems or Flex Systems
Main thing is FOC (free of charge)
Google for IBM ATS
Go to youtube.com and subscribe to:
IBM Systems Lab Services
Nigel Griffiths