Power Blades Implementation


            Mike Schambureck with help from Janus Hertz
            schambur@us.ibm.com,
            IBM Systems Lab Services and Training




   STG Technical Conferences 2009                    © 2009 IBM Corporation

Agenda
    Where to start an IBM i on blade implementation
    Hardware overview:
     – Power blade servers technical overview
     – New expansion adapters
     – BladeCenter S components and I/O connections
     – BladeCenter H components and I/O connections
     – Switch module portfolio
     – Expansion adapter portfolio for IBM i
    Virtualization overview
     – VIOS-based virtualization
     – IVM overview
     – Storage options for BladeCenter H and BladeCenter S
     – Multiple Virtual SCSI adapters
     – Virtual tape
     – Active Memory Sharing on blade
    4Q 2009 enhancements


Where Do I Start with Installing IBM i on Blade?




     • Latest versions at: http://www.ibm.com/systems/power/hardware/blades/ibmi.html





IBM BladeCenter JS23 Express

• 2 sockets, 4 POWER6 cores @ 4.2 GHz
• Enhanced 65-nm lithography
• 32 MB L3 cache per socket
• 4 MB L2 cache per core
• 8 VLP DIMM slots, up to 64 GB memory
• FSP-1 service processor
• 2 x 1Gb embedded Ethernet ports (HEA)
• 2 PCIe connectors (CIOv and CFFh)
• 1 x onboard SAS controller
• Up to 1 SSD or SAS onboard disk
• EnergyScale™ power management
• PowerVM Hypervisor virtualization






IBM BladeCenter JS43 Express

• 4 sockets, 8 POWER6 cores @ 4.2 GHz
• Enhanced 65-nm lithography
• 32 MB L3 cache per socket
• 4 MB L2 cache per core
• 16 VLP DIMM slots, up to 128 GB memory
• FSP-1 service processor
• 4 x 1Gb embedded Ethernet ports (HEA)
• 4 PCIe connectors (CIOv and CFFh)
• 1 x onboard SAS controller
• Up to 2 SSD or SAS onboard disks
• EnergyScale™ power management
• PowerVM Hypervisor virtualization






IBM BladeCenter JS43 Express SMP Unit Only






IBM BladeCenter JS12
• 1 socket, 2 POWER6 cores @ 3.8 GHz
• 8 DDR2 DIMMs, up to 64 GB memory
• Onboard SAS disk drives
• SAS expansion adapter
• P5IOC2 I/O chip (2 HEA ports)
• Service processor
• PCI-X (CFFv) and PCIe (CFFh) expansion connections

IBM BladeCenter JS22

• 2 sockets x 2 cores @ 4 GHz
• 4 DDR2 DIMMs, up to 32 GB memory
• SAS disk drive with SAS controller
• P5IOC2 I/O chip (2 IVE ports)
• Service processor
• PCI-X (CFFv) and PCIe (CFFh) expansion connections


CFFv and CFFh I/O Expansion Adapters
• Combination Form Factor (CFF) allows two different expansion adapters on the same blade

• CFFv (Combo Form Factor – Vertical)
  – Connects to the PCI-X bus to provide access to switch modules in bays 3 & 4
  – Vertical switch form factor
  – Supported for IBM i: SAS (#8250)

• CFFh (Combo Form Factor – Horizontal)
  – Connects to the PCIe bus to provide access to the switch modules in bays 7 – 10
  – Horizontal switch form factor, unless MSIM used
  – Supported for IBM i: Fibre Channel and Ethernet (#8252)

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i:
http://www.ibm.com/systems/power/hardware/blades/ibmi.html


CIOv and CFFh I/O Expansion Adapters
• Combination I/O Form Factor – Vertical (CIOv) is available only on JS23 and JS43
  – CFFv adapters not supported on JS23 and JS43

• CIOv
  – Connects to the new PCIe bus to provide access to switch modules in bays 3 & 4
  – Vertical switch form factor
  – Supported for IBM i: SAS passthrough (#8246), Fibre Channel (#8240, #8241, #8242)
  – Can provide redundant FC connections

• CFFh
  – Connects to the PCIe bus to provide access to the switch modules in bays 7 – 10
  – Horizontal switch form factor, unless MSIM used
  – Supported for IBM i: Fibre Channel and Ethernet (#8252)

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i:
http://www.ibm.com/systems/power/hardware/blades/ibmi.html



Meet the BladeCenter S – Front View
• Service label card slot enables quick and easy reference to BladeCenter S
• SAS and SATA disks can be mixed
• SAS disks recommended for IBM i production
• RAID 0, 1, 5, 0+1 supported with RAID SAS Switch Module (RSSM)
• Separate RAID arrays for IBM i recommended
• 7U chassis; supports up to 6 blade servers
• Shared USB ports and CD-RW / DVD-ROM combo drive
• Battery Backup Units for use only with RAID SAS Switch Module





Meet the BladeCenter S – Rear View
                                     Hot-swap Power Supplies 3 & 4 are optional,                     Hot-swap Power Supplies 1 & 2 are
                                     Auto-sensing b/w 950W / 1450W                                   standard, Auto-sensing b/w 950W /
                                                                                                     1450W

                                                                                                     Power supplies 3 and 4 required if using
                                                                                                     > 1 blade




     7U




                                                                                     Top: AMM standard
                                                                                     Bottom: Serial Pass-thru Module optional
     Four Blower modules
                 standard
                                                                         Top(SW1) & Bottom(SW2) left: Ethernet
                                                                         Top(SW3) & Bottom(SW4) right: SAS
                                                                         Both CIOv (#8246) and CFFv (#8250) adapters supported





BladeCenter S Midplane - Blade to I/O Bay Mapping
Connections through the BC-S midplane:
• Each blade's two onboard Ethernet ports ("A" and "B") route to I/O Bay 1 (Ethernet switch bay)
• Each blade's PCI-X (CFFv) or PCIe (CIOv) daughter card (Ethernet, Fibre, SAS, or SAS RAID) routes its "A" port to I/O Bay 3 and its "B" port to I/O Bay 4 (Ethernet switch, Fibre, or SAS; each SAS switch bay includes a RAID battery bay)
• Each blade's PCIe (CFFh) daughter card routes to I/O Bay 2 (option bay)
• The Advanced Management Module occupies its own bay



BladeCenter H - front view
• 9U chassis
• Power module bays 1 – 4 (modules 1 and 4 shown with fan packs; fillers in bays 2 and 3)
• Blade bays (HS20 blade shown in bay 1; blade filler elsewhere)
• Front system panel
• CD/DVD drive
• Front USB port

IBM BladeCenter H - Rear View
• Ethernet switches in I/O module bays 1 and 2 (left side); I/O module bays 5 and 6 below them
• I/O module bays 3 and 4 (right side): SAS or Fibre Channel modules
• Advanced Management Module 1 standard; Advanced Management Module 2 slot below
• High-speed I/O module bays 7 & 8 and 9 & 10: each pair can hold a Multi-Switch Interconnect Module, with an Ethernet switch in the left bay (7 or 9) and a Fibre Channel switch in the right bay (8 or 10)
• Blower modules 1 and 2
• Power connectors 1 and 2
• Rear LED panel and serial connector
• Left and right shuttle release levers

 BCH: CFFv and CFFh I/O Connections

Routing through the BladeCenter H midplane (per blade):
• On-board dual Gbit Ethernet ports connect to Ethernet switches 1 & 2
• SAS CFFv expansion card connects to switches 3 & 4
• QLogic CFFh expansion card connects to switches 7 – 10

QLogic CFFh Expansion Card:
• Provides 2 x 4Gb Fibre Channel connections to SAN
• 2 Fibre Channel ports externalized via switches 8 & 10
• Provides 2 x 1Gb Ethernet ports for additional networking
• 2 Ethernet ports externalized via switches 7 & 9

SAS CFFv Expansion Card:
• Provides 2 SAS ports for connection to SAS tape drive
• 2 SAS ports externalized via switches 3 & 4

 BCH: CIOv and CFFh I/O Connections

Routing through the BladeCenter H midplane (per blade):
• On-board dual Gbit Ethernet ports connect to Ethernet switches 1 & 2
• CIOv expansion card connects to switches 3 & 4
• QLogic CFFh expansion card connects to switches 7 – 10

CIOv Expansion Card:
• 2 x 8Gb or 2 x 4Gb Fibre Channel, OR 2 x 3Gb SAS passthrough
• Uses 4Gb or 8Gb FC vertical switches in bays 3 & 4, OR 3Gb SAS vertical switches in bays 3 & 4
• Redundant FC storage connection option for IBM i

CFFh Expansion Card:
• 2 x 4Gb Fibre Channel and 2 x 1Gb Ethernet



     BladeCenter Ethernet I/O Modules
• Nortel Layer 2/3 Gb Ethernet Switch Modules
• Cisco Systems Intelligent Gb Ethernet Switch Module
• Nortel L2-7 GbE Switch Module
• Nortel L2/3 10GbE Uplink Switch Module
• Copper Pass-Through Module
• Nortel 10Gb Ethernet Switch Module
• Intelligent Copper Pass-Through Module

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i:
http://www.ibm.com/systems/power/hardware/blades/ibmi.html




     BladeCenter Fibre Channel I/O Modules
• Cisco 4Gb 10 and 20 port Fibre Channel Switch Modules
• Brocade 4Gb 10 and 20 port Fibre Channel Switch Modules
• QLogic 8Gb 20 port Fibre Channel Switch Module
• QLogic 4Gb 10 and 20 port Fibre Channel Switch Module
• Brocade Intelligent 8Gb Pass-Thru Fibre Channel Switch Module
• Brocade Intelligent 4Gb Pass-Thru Fibre Channel Switch Module

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i:
http://www.ibm.com/systems/power/hardware/blades/ibmi.html




     BladeCenter SAS I/O Modules
BladeCenter S SAS RAID Controller Module:
• Supported only in BladeCenter S
• RAID support for SAS drives in chassis
• Supports SAS tape attachment
• No support for attaching DS3200
• 2 are always required

BladeCenter SAS Controller Module:
• Supported in BladeCenter S and BladeCenter H
• No RAID support
• Supports SAS tape attachment
• Supports DS3200 attachment

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i:
http://www.ibm.com/systems/power/hardware/blades/ibmi.html





SAS RAID Controller Switch Module
RAID controller support provides additional protection options for BladeCenter S storage

     SAS RAID Controller Switch Module
        –    High-performance, full-duplex, 3Gbps speeds
       –    Support for RAID 0, 1, 5, & 10
       –    Supports 2 disk storage modules with up to 12 SAS drives
       –    Supports external SAS tape drive
       –    Supports existing #8250 CFFv SAS adapter on blade
       –    1GB of battery-backed write cache between the 2 modules
       –    Two SAS RAID Controller Switch Modules (#3734) required


     Supports Power and x86 Blades
       –    Recommend separate RAID sets
                 –     For each IBM i partition
                 –     For IBM i and Windows storage
       –    Requirements
                 –     Firmware update for SAS RAID Controller Switch Modules
                 –     VIOS 2.1.1, eFW 3.4.2




                                               Note: Does not support connection to DS3200
                                               IBM i is not pre-installed with RSSM configurations



 Multi-switch Interconnect Module for BCH
• Installed in high-speed bays 7 & 8 and/or 9 & 10
• Allows a "vertical" switch to be installed and use the "horizontal" high-speed fabric (bays 7 – 10)
• High-speed fabric is used by CFFh expansion adapters
• Fibre Channel switch module must be installed in the right I/O module bay (switch bay 8 or 10)
• If additional Ethernet networking is required, an additional Ethernet switch module can be installed in the left I/O module bay (switch bay 7 or 9)



 I/O Expansion Adapters




• #8252 QLogic Ethernet and 4Gb Fibre Channel Expansion Card (CFFh)
• #8250 LSI 3Gb SAS Dual Port Expansion Card (CFFv)
• #8246 3Gb SAS Passthrough Expansion Card (CIOv)
• #8240 Emulex 8Gb Fibre Channel Expansion Card (CIOv)
• #8242 QLogic 8Gb Fibre Channel Expansion Card (CIOv)
• #8241 QLogic 4Gb Fibre Channel Expansion Card (CIOv)

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i:
http://www.ibm.com/systems/power/hardware/blades/ibmi.html





               Virtualization Overview





 VIOS, IVM and i on Power Blade
• VIOS = Virtual I/O Server = virtualization software in a partition
  – Does not run other applications
  – First LPAR installed on blade
  – VIOS owns physical hardware (Fibre Channel, Ethernet, DVD, SAS)
  – VIOS virtualizes disk, DVD, networking, tape to i partitions
• IVM = Integrated Virtualization Manager = browser interface to manage partitions, virtualization
  – IVM installed with VIOS
• i uses LAN console through Virtual Ethernet bridge in VIOS

(Diagram: Linux and AIX clients alongside VIOS/IVM on the blade; a CFFv or CIOv SAS expansion card connects through a SAS switch to DS3200* and a SAS-attached LTO4 tape drive for virtual tape; a CFFh or CIOv FC expansion card connects through a FC switch to DS3400, DS4700, DS4800, DS8100, DS8300, and SVC; USB DVD; AMM / LAN console and IVM / virtual op panel over the LAN. * Not supported with RSSM)



Integrated Virtualization Manager (IVM) Introduction




     Browser-based interface, supports Mozilla Firefox and Internet Explorer
     Part of VIOS, no extra charge or installation
     Performs LPAR and virtualization management on POWER6 blade




IVM Example: Create i Partition




• Fewer steps than HMC
• IVM uses several defaults
• Virtual I/O resources only for IBM i partitions


Storage, Tape and DVD for i on JS12/JS22 in BCH
(Diagram: Fibre Channel storage connects through an MSIM holding a Fibre Channel I/O module to the blade's CFFh adapter; DS3200 SAS storage and a TS2240 tape drive connect through a SAS I/O module to the CFFv adapter; the media tray DVD attaches over USB. VIOS hdiskX LUNs and /dev/cd0 map to IBM i DDxx and OPTxx devices over virtual SCSI connections.)

With BCH and JS12/JS22, IBM i can use:
• Fibre Channel storage (MSIM, FC module and CFFh adapter required)
• SAS storage (SAS module and CFFv adapter required)
• SAS tape (SAS module and CFFv adapter required)
• USB DVD in BladeCenter

Physical I/O resources are attached to VIOS, assigned to IBM i in IVM
• Storage LUNs (physical volumes) assigned directly to IBM i; storage pools in VIOS not used

Storage, Tape and DVD for i on JS23/JS43 in BCH
(Diagram: as for JS12/JS22, except the SAS connection uses a CIOv adapter, and Fibre Channel can attach through either the CFFh adapter or a CIOv FC adapter.)

With BCH and JS23/JS43, IBM i can use:
• Fibre Channel storage (MSIM, FC module and CFFh adapter; or FC module and CIOv adapter)
  – Redundant FC adapters can be configured (CFFh and CIOv)
• SAS storage (SAS module and CIOv adapter required)
• SAS tape (SAS module and CIOv adapter required)
• USB DVD in BladeCenter

Physical I/O resources are attached to VIOS, assigned to IBM i in IVM
• Storage LUNs (physical volumes) assigned directly to IBM i; storage pools in VIOS not used


Storage, Tape and DVD for i on JS12/JS22 in BCS
(Diagram: SAS drives in the BCS chassis connect through the Non-RAID SAS module in I/O bays 3/4, or the RAID SAS module in I/O bays 3 & 4, to the blade's CFFv SAS adapter; the TS2240 tape drive and DS3200 also attach to the SAS module; the media tray DVD attaches over USB. VIOS hdiskX LUNs and /dev/cd0 map to IBM i DDxx and OPTxx devices over virtual SCSI connections.)

With BCS and JS12/JS22, IBM i can use:
• SAS storage (SAS module and CFFv adapter required)
• SAS tape (SAS module and CFFv adapter required)
• USB DVD

• Drives in BCS, TS2240, DS3200 supported with Non-RAID SAS Switch Module (NSSM)
• Only drives in BCS and TS2240 supported with RAID SAS Switch Module (RSSM)
• Physical I/O resources are attached to VIOS, assigned to IBM i in IVM
  – Storage LUNs (physical volumes) assigned directly to IBM i; storage pools in VIOS not used

Storage, Tape and DVD for i on JS23/JS43 in BCS
(Diagram: as for JS12/JS22 in BCS, except the blade uses a CIOv SAS adapter.)

With BCS and JS23/JS43, IBM i can use:
• SAS storage (SAS module and CIOv adapter required)
• SAS tape (SAS module and CIOv adapter required)
• USB DVD

• Drives in BCS, TS2240, DS3200 supported with Non-RAID SAS Switch Module (NSSM)
• Only drives in BCS and TS2240 supported with RAID SAS Switch Module (RSSM)
• Physical I/O resources are attached to VIOS, assigned to IBM i in IVM
  – Storage LUNs (physical volumes) assigned directly to IBM i; storage pools in VIOS not used

Storage and Tape Support
     Storage support
       – BladeCenter H and JS12/JS22/JS23/JS43:
           – SAS – DS3200
           – Fibre Channel – DS3400, DS4700, DS4800, DS8100, DS5020, DS5100,
             DS5300, XIV, DS8300, DS8700, SVC
                – Multiple storage subsystems supported with SVC

       – BladeCenter S and JS12/JS22/JS23/JS43:
           – SAS – BCS drives; DS3200 (only with NSSM)

     Tape support
       – BladeCenter H and BladeCenter S:
           – TS2240 LTO-4 SAS – supported for virtual tape and for VIOS backups
           – TS2230 LTO-3 SAS – not supported for virtual tape, only for VIOS backups
       – NEW: Fibre Channel tape library support announced October 20, 2009!
           – Enables access to tape libraries 3584 (TS3500) and 3573 (TS3100 and TS3200)
           – Requires selected 8Gb Fibre Channel adapters




Configuring Storage for IBM i on Blade
     Step 1: Perform sizing
      – Use Disk Magic, where applicable
      – Use the PCRM, Ch. 14.5 – http://www.ibm.com/systems/i/advantages/perfmgmt/resource.html
      – Number of physical drives is still most important
      – VIOS itself does not add significant disk I/O overhead
      – For production workloads, keep each i partition on a separate RAID array


     Step 2: Use appropriate storage UI and Redbook for your environment to create LUNs for IBM i
     and attach to VIOS (or use TPC or SSPC where applicable)




Storage configuration interfaces:
• NSSM and RSSM: Storage Configuration Manager
• DS3200, DS3400, DS4700, DS4800: DS Storage Manager
• DS8100 and DS8300: DS8000 Storage Manager
• SVC: SVC Console


Configuring Storage for IBM i on Blade, Cont.
     Step 3: Assign LUNs or physical drives in BCS to IBM i
      – ‘cfgdev’ in VIOS CLI necessary to detect new physical volumes if VIOS is running
      – Virtualize whole LUNs/drives (“physical volumes”) to IBM i
      – Do not use storage pools in VIOS
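
A minimal sketch of Step 3 from the VIOS command line (device and adapter names such as hdisk4 and vhost0 are illustrative; verify yours with 'lsmap'):

    $ cfgdev                                  # rescan so VIOS detects newly assigned LUNs
    $ lspv                                    # new LUNs appear as additional hdiskN physical volumes
    $ lsmap -all                              # find the vhost server adapter IVM created for the i partition
    $ mkvdev -vdev hdisk4 -vadapter vhost0    # virtualize the whole LUN (physical volume) to IBM i

IVM's View/Modify Virtual Storage screen performs the same assignment graphically.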





Multiple Virtual SCSI Adapters for IBM i
     Since VIOS 2.1 in November 2008, IBM i is no longer limited to 1 VSCSI connection to VIOS
     and 16 disk + 16 optical devices

     What IVM will do:
      – Create 1 VSCSI server adapter in VIOS for each IBM i partition created
      – Create 1 VSCSI client adapter in IBM i and correctly map to Server adapter
      – Map any disk and optical devices you assign to IBM i to the first VSCSI server adapter in
        VIOS
      – Create a new VSCSI server-client adapter pair only when you assign a tape device to IBM
        i
      – Create another VSCSI server-client adapter pair when you assign another tape device


     What IVM will not do:
      – Create a new VSCSI server-client adapter pair if you assign more than 16 disk devices to
        IBM i





Multiple Virtual SCSI Adapters for IBM i, Cont.
     Scenario I: you have <=16 disk devices and you want to add virtual tape
      – Action required in VIOS:
         – In IVM, click on tape drive, assign to IBM i partition
             – Separate VSCSI server-client adapter pair created automatically

     Scenario II: you have 16 disk devices and you want to add more disk and virtual tape
      – Actions required in VIOS:
         – In VIOS CLI, create new VSCSI client adapter in IBM i
             – VSCSI server adapter in VIOS created automatically
          – In VIOS CLI, map new disk devices to the new VSCSI server adapter using ‘mkvdev’ (see the sketch below)
         – In IVM, click on tape drive, assign to IBM i partition

     For details and instructions, see IBM i on Blade Read-me First:
     http://www.ibm.com/systems/power/hardware/blades/ibmi.html
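
A hedged sketch of Scenario II at the VIOS command line (adapter and disk names are illustrative; the Read-me First above covers the exact syntax for creating the client adapter with IVM's 'chsyscfg'):

    # Once the new VSCSI client adapter exists in the IBM i partition,
    # IVM creates the matching server adapter (for example, vhost1) automatically.
    $ cfgdev                                   # detect the new vhost adapter and any new LUNs
    $ lsmap -all                               # identify the new server adapter, e.g. vhost1
    $ mkvdev -vdev hdisk17 -vadapter vhost1    # map the 17th and later disks to the new adapter
    $ mkvdev -vdev hdisk18 -vadapter vhost1

The tape drive is then assigned to the IBM i partition in IVM, as in Scenario I.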






IBM i Support for Virtual Tape
     Virtual tape support enables IBM i partitions to back up directly to a PowerVM
     VIOS-attached tape drive, saving hardware costs and management time

     Simplifies backup and restore processing with BladeCenter
     implementations
      – IBM i 6.1 partitions on BladeCenter JS12, JS22, JS23, JS43
      – Supports IBM i save/restore commands & BRMS
      – Supports BladeCenter S and H implementations

     Simplifies migration to blades from tower/rack servers
      – LTO-4 drive can read backup tapes from LTO-2, 3, 4 drives

     Supports IBM System Storage tape:
      – TS2240 SAS LTO-4 drive (for BladeCenter ONLY)
      – Fibre Channel attached tape libraries 3584 (TS3500) and 3573 (TS3100 and TS3200)

     Requirements
      – VIOS 2.1.1, eFW 3.4.2, IBM i 6.1 PTFs





 Virtual Tape Hardware and Virtualization

(Diagram: a SAS-attached LTO4 tape drive (TS2240) connects through a SAS I/O module (NSSM) or a RAID SAS I/O module (RSSM) to the blade's CFFv or CIOv SAS adapter; VIOS /dev/rmt0 maps to IBM i TAP01, type 3580 model 004, over a separate virtual SCSI connection.)

• TS2240 LTO4 SAS tape drive attached to SAS switch in BladeCenter:
  – NSSM or RSSM in BCS (shown in the diagram)
  – NSSM in BCH
• Fibre Channel attached tape libraries 3584 (TS3500) and 3573 (TS3100 and TS3200) in BC-H
• VIOS virtualizes the tape drive to IBM i directly
• Tape drive assigned to IBM i in IVM
• Tape drive available in IBM i as TAPxx, type 3580 model 004
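
Before assigning the drive in IVM, a quick check that VIOS sees it (a sketch; the rmt0 device name is typical but not guaranteed):

    $ cfgdev             # rescan after cabling the drive to the SAS module
    $ lsdev | grep rmt   # the TS2240 should appear as rmt0 in Available state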



Assigning Virtual Tape to IBM i




     No action required in IBM i to make tape drive available
     – If QAUTOCFG is on (default)





Migrating IBM i to Blade

     Virtual tape makes migration to blade similar to migration to tower/rack server:
      – On existing system, perform a GO SAVE option 21 to tape media
      – On blade, use virtual tape to perform D-mode IPL and complete restore
      – Existing system does not have to be at IBM i 6.1
         – Previous-to-current migration also possible

     IBM i partition saved on blade can be restored on tower/rack server
      – IBM i can save to tape media on blade


     For existing servers that do not have access to tape drive, there are two options:
      – Save on different media, convert to supported tape format as a service, restore from
        tape
      – Use Migration Assistant method






Networking on Power Blade

(Diagram: a local PC on the LAN runs the AMM browser, IVM browser, and LAN console; it reaches VIOS through the blade's embedded IVE/HEA Ethernet ports; a Virtual Ethernet bridge in VIOS connects the i partition's CMN01 LAN console and CMN02 production interfaces; the original figure shows sample 10.10.10.x addresses.)

• VIOS is accessed from a local PC via the embedded Ethernet ports on the blade (IVE/HEA)
  – For both the IVM browser and the VIOS command line
  – Same PC can be used to connect to AMM and for LAN console for IBM i
• For i connectivity, an IVE/HEA port is bridged to the Virtual LAN (see the sketch below)
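
The bridge is normally created from IVM's Virtual Ethernet view; the equivalent VIOS command builds a Shared Ethernet Adapter. A sketch assuming ent0 is the logical HEA port (enabled for bridging) and ent2 is the virtual Ethernet adapter on VLAN 1; names vary by system:

    $ lsdev | grep ent    # identify the logical HEA and virtual Ethernet adapters
    $ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1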





LAN Console for i on Power Blade

• Required for i on Power blade
• Uses System i Access software on PC (can use same PC for IVM connection)
• Full console functionality
• Uses existing LAN console capability





PowerVM Active Memory Sharing
     PowerVM Active Memory Sharing is an advanced memory virtualization                                   Around the World
                                                                                                 15
     technology which intelligently flows memory from one partition to
     another for increased utilization and flexibility of memory usage




                                                                             Memory Usage (GB)
                                                                                                 10
     Memory virtualization enhancement for Power Systems
      – Partitions share a pool of memory
      – Memory dynamically allocated based on partition’s workload demands

     Extends Power Systems Virtualization Leadership
      – Capabilities not provided by Sun and HP virtualization offerings

     Designed for partitions with variable memory requirements
      – Workloads that peak at different times across the partitions
      – Active/inactive environments
      – Test and Development environments
      – Low average memory requirements

     Available with PowerVM Enterprise Edition
      – Supports AIX 6.1, i 6.1, and SUSE Linux Enterprise Server 11
      – Partitions must use VIOS and shared processors
      – POWER6 processor-based systems

    [Charts: Memory Usage (GB) over Time for three workload patterns: regions peaking at different times (Asia, Americas, Europe), Day and Night (Day, Night), and Infrequent Use (partitions #1 through #10)]
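    A hypothetical sizing illustration of the “workloads that peak at different times” case (the numbers are assumptions, not from this material):

       Dedicated memory:  3 partitions x 10 GB peak each = 30 GB provisioned
       AMS shared pool:   the peaks do not overlap, so a pool sized near the
                          combined peak (roughly 12-15 GB) can serve all three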
44             Power Blades Implementation                                                                                 © 2009 IBM Corporation
STG Technical Conferences 2009

IVM Example: Working with AMS

[Screenshot: IVM browser interface for working with Active Memory Sharing]

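Alongside the IVM browser interface, the shared memory pool can also be examined from the IVM/VIOS command line. A minimal sketch, assuming the IVM level in use supports the HMC-style memory pool queries (exact flags can vary by VIOS/IVM release):

      lshwres -r mempool              # show the shared memory pool and its size
      lshwres -r mem --level lpar     # per-partition memory, including shared-memory partitions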
45     Power Blades Implementation      © 2009 IBM Corporation
STG Technical Conferences 2009


Enhancements for IBM i and Power Blades
     N_Port ID Virtualization (NPIV) Support for IBM i
      – Provides direct Fibre Channel connections from client partitions to SAN resources
      – Simplifies the management of Fibre Channel SAN environments
      – Enables access to Fibre Channel tape libraries
      – Supported with PowerVM Express, Standard, and Enterprise Edition
      – Requires a Power blade with an 8Gb PCIe Fibre Channel adapter




    [Diagram: the VIOS owns the physical FC adapter; virtual FC adapters connect client partitions through the Power Hypervisor to the SAN]



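For orientation, a minimal sketch of the VIOS-side NPIV setup; the adapter names vfchost0 and fcs0 are examples only, and on IVM-managed blades the browser interface performs the equivalent steps:

      lsnports                              # list physical FC ports and NPIV fabric support
      vfcmap -vadapter vfchost0 -fcp fcs0   # bind a virtual FC server adapter to a physical port
      lsmap -all -npiv                      # verify the virtual-to-physical FC mappings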
46             Power Blades Implementation                                                   © 2009 IBM Corporation
STG Technical Conferences 2009

    [Diagram: with Virtual SCSI, IBM i clients see generic SCSI disks served by the VIOS over its FC HBAs to the SAN (DS5000, SVC); with NPIV, the IBM i client sees the actual SAN device (e.g. EMC) over an FCP connection passed through the VIOS]
The VSCSI model for sharing storage resources is storage virtualization: heterogeneous storage is pooled by the VIOS into a homogeneous pool of block storage and then allocated to client LPARs in the form of generic SCSI LUNs. The VIOS performs SCSI emulation and acts as the SCSI target.

With NPIV, the VIOS' role is fundamentally different. The VIOS facilitates adapter sharing only; there is no device-level abstraction or emulation. Rather than a storage virtualizer, the VIOS serving NPIV is a passthrough, providing an FCP connection from the client to the SAN.
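The difference is also visible at the VIOS command line. A minimal sketch (the device names hdisk4, vhost0 and vfchost0 are examples only):

      # Virtual SCSI: the VIOS emulates a SCSI target backed by a physical volume
      mkvdev -vdev hdisk4 -vadapter vhost0 -dev ibmi_lun0
      lsmap -vadapter vhost0                # show what the client partition sees

      # NPIV: the VIOS only maps a virtual FC adapter to a physical port; no emulation
      vfcmap -vadapter vfchost0 -fcp fcs0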
  47               Power Blades Implementation                                           © 2009 IBM Corporation
STG Technical Conferences 2009



Additional 4Q Enhancements for IBM i on Blade
     Support for IBM i (through VIOS) and AIX for the CFFh 1Gb Eth/8Gb FC combo card
      – Supported on JS12, JS22, JS23, JS43
      – Only adapter with NPIV support for JS12 and JS22
      – FC ports supported only, not Ethernet
    [Image: QLogic 1Gb Ethernet and 8Gb Fibre Channel Expansion Card (CFFh)]




     Converged Network Adapter with support for 10Gb Ethernet and 8Gb FC (FC over Ethernet)
      – FC support for IBM i is with VSCSI only
      – NPIV not supported
    [Image: 10 GbE/8Gb FC Converged Network Adapter (CFFh)]



48              Power Blades Implementation                                    © 2009 IBM Corporation
STG Technical Conferences 2009
                                          IBM i and BladeCenter S

System & Metode, Denmark
 www.system-method.com

 • IBM Business Partner
 • Software solutions and hosting company
       Focuses on very small or older existing installations
 • 1 BladeCenter S chassis
 • 1 JS12 POWER6 blade
 • 2 HS21 x86 blades

 • Provides hosting services to several clients/companies
 • 1 IBM Virtual I/O Server 2.1 (VIOS) host LPAR
 • 3 IBM i 6.1 client LPARs – for different customers

 Pros:
 • Inexpensive hardware compared to traditional Power servers
 • Can win customers that might otherwise have switched to the “dark side…”
 • Flexible
 Cons:
 • Complex; requires three different skill sets (BladeCenter, VIOS, IBM i)
 • Backup was difficult in the early stages (a two-step process); now works well with virtual tape
49          Power Blades Implementation                                 © 2009 IBM Corporation
STG Technical Conferences 2009




IBM Systems Lab Services Virtualization Program
     What is it?
      – Free presales technical assistance from Lab Services
      – Help with virtualization solutions:
         – Open storage
         – Power blades
         – IBM Systems Director VMControl
         – Other PowerVM technologies
      – Design solution, hold Q&A session with client, verify hardware configuration

     Who can use it?
      – IBMers, Business Partners, clients

     How do I use it?
      – Contact Lab Services for the nomination form and submit it
      – Participate in assessment call with Virtualization Program team
      – Work with dedicated Lab Services technical resource to design solution before
        the sale




50        Power Blades Implementation                                         © 2009 IBM Corporation
STG Technical Conferences 2009


Service Voucher for IBM i on Power Blade




     • Let IBM Systems Lab Services and Training help you install i on blade!
     • 1 service voucher for each Power blade AND IBM i license purchased
     • http://www.ibm.com/systems/i/hardware/editions/services.html
51           Power Blades Implementation                                © 2009 IBM Corporation
STG Technical Conferences 2009




Further Reading

      IBM i on Blade Read-me First:
      http://www.ibm.com/systems/power/hardware/blades/ibmi.html
      IBM i on Blade Supported Environments:
      http://www.ibm.com/systems/power/hardware/blades/ibmi.html
      IBM i on Blade Performance Information:
      http://www.ibm.com/systems/i/advantages/perfmgmt/resource.html
      Service vouchers:
      http://www.ibm.com/systems/i/hardware/editions/services.html
      IBM i on Blade Training:
      http://www.ibm.com/systems/i/support/itc/educ.html




52     Power Blades Implementation                            © 2009 IBM Corporation
STG Technical Conferences 2009

Trademarks and Disclaimers
© IBM Corporation 1994-2007. All rights reserved.
References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

Trademarks of International Business Machines Corporation in the United States, other countries, or both can be found on the World Wide Web at
http://www.ibm.com/legal/copytrade.shtml.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered
    trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency which is now part of the Office of Government Commerce.
ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

The customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual
environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does
not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information,
including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or
any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance,
function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here
to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any
user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage
configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements
equivalent to the ratios stated here.

Prices are suggested U.S. list prices and are subject to change without notice. Starting price may not include a hard drive, operating system or other features. Contact
your IBM representative or Business Partner for the most current pricing in your geography.

Photographs shown may be engineering prototypes. Changes may be incorporated in production models.


53                       Power Blades Implementation                                                                                                 © 2009 IBM Corporation
STG Technical Conferences 2009




Special notices
 This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in
 other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM
 offerings available in your area.
 Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions
 on the capabilities of non-IBM products should be addressed to the suppliers of those products.
 IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give
 you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY
 10504-1785 USA.
 All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives
 only.
 The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or
 guarantees either expressed or implied.
 All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the
 results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations
 and conditions.
 IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions
 worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment
 type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal
 without notice.
 IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies.
 All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary.
 IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
 Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are
 dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this
 document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-
 available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document
 should verify the applicable data for their specific environment.



                                                                                                                           Revised September 26, 2006


54                    Power Blades Implementation                                                                                     © 2009 IBM Corporation
STG Technical Conferences 2009

     Special notices (cont.)
IBM, the IBM logo, ibm.com, AIX, AIX (logo), AIX 6 (logo), AS/400, BladeCenter, Blue Gene, ClusterProven, DB2, ESCON, IBM i, IBM i (logo), IBM Business Partner
(logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries, Rational, RISC
System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, AIX 5L, Chiphopper, Chipkill, Cloudscape, DB2
Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Purpose File System, GPFS, HACMP, HACMP/6000, HASM, IBM
Systems Director Active Energy Manager, iSeries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power Architecture, Power
Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo), POWER2,
POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, System i, System p, System p5, System Storage, System z, Tivoli Enterprise, TME 10, Workload
Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or
both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S.
registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in
other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml

The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.
UNIX is a registered trademark of The Open Group in the United States, other countries or both.
Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.
Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both.
Intel, Itanium, Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both.
AMD Opteron is a trademark of Advanced Micro Devices, Inc.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both.
TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC).
SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are
trademarks of the Standard Performance Evaluation Corp (SPEC).
NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both.
AltiVec is a trademark of Freescale Semiconductor, Inc.
Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc.
InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association.
Other company, product and service names may be trademarks or service marks of others.




                                                                                                                                      Revised April 24, 2008


55                       Power Blades Implementation                                                                                          © 2009 IBM Corporation

Contenu connexe

Tendances

Nucleus RM Front IO
Nucleus RM Front IONucleus RM Front IO
Nucleus RM Front IOjledwell
 
2013 02 08 annunci power 7 plus sito cta
2013 02 08 annunci power 7 plus sito cta2013 02 08 annunci power 7 plus sito cta
2013 02 08 annunci power 7 plus sito ctaLorenzo Corbetta
 
Cisco UCS - Servidores
Cisco  UCS  - ServidoresCisco  UCS  - Servidores
Cisco UCS - ServidoresBruno Banha
 
Sun sparc enterprise t5440 server technical presentation
Sun sparc enterprise t5440 server technical presentationSun sparc enterprise t5440 server technical presentation
Sun sparc enterprise t5440 server technical presentationxKinAnx
 
Sun sparc enterprise t5140 and t5240 servers technical presentation
Sun sparc enterprise t5140 and t5240 servers technical presentationSun sparc enterprise t5140 and t5240 servers technical presentation
Sun sparc enterprise t5140 and t5240 servers technical presentationxKinAnx
 
IBM System x3850 X5 Technical Presenation abbrv.
IBM System x3850 X5 Technical Presenation abbrv.IBM System x3850 X5 Technical Presenation abbrv.
IBM System x3850 X5 Technical Presenation abbrv.meye0611
 
Sun sparc enterprise t5120 and t5220 servers technical presentation
Sun sparc enterprise t5120 and t5220 servers technical presentationSun sparc enterprise t5120 and t5220 servers technical presentation
Sun sparc enterprise t5120 and t5220 servers technical presentationxKinAnx
 
First Look Webcast: OneCore Storage SDK 3.6 Roll-out and Walkthrough
First Look Webcast: OneCore Storage SDK 3.6 Roll-out and WalkthroughFirst Look Webcast: OneCore Storage SDK 3.6 Roll-out and Walkthrough
First Look Webcast: OneCore Storage SDK 3.6 Roll-out and WalkthroughEmulex Corporation
 
X3850x5techpresentation09 29-2010-101118124714-phpapp01
X3850x5techpresentation09 29-2010-101118124714-phpapp01X3850x5techpresentation09 29-2010-101118124714-phpapp01
X3850x5techpresentation09 29-2010-101118124714-phpapp01Yalçin KARACA
 
Open Hardware and Future Computing
Open Hardware and Future ComputingOpen Hardware and Future Computing
Open Hardware and Future ComputingGanesan Narayanasamy
 
IBM System x3850 X5 Technical Presentation
IBM System x3850 X5 Technical PresentationIBM System x3850 X5 Technical Presentation
IBM System x3850 X5 Technical PresentationCliff Kinard
 

Tendances (19)

IBM zEnterprise FAQs
IBM zEnterprise  FAQsIBM zEnterprise  FAQs
IBM zEnterprise FAQs
 
Gpu archi
Gpu archiGpu archi
Gpu archi
 
Nucleus RM Front IO
Nucleus RM Front IONucleus RM Front IO
Nucleus RM Front IO
 
2013 02 08 annunci power 7 plus sito cta
2013 02 08 annunci power 7 plus sito cta2013 02 08 annunci power 7 plus sito cta
2013 02 08 annunci power 7 plus sito cta
 
Cisco UCS - Servidores
Cisco  UCS  - ServidoresCisco  UCS  - Servidores
Cisco UCS - Servidores
 
Sun sparc enterprise t5440 server technical presentation
Sun sparc enterprise t5440 server technical presentationSun sparc enterprise t5440 server technical presentation
Sun sparc enterprise t5440 server technical presentation
 
System x M4
System x M4System x M4
System x M4
 
Sun sparc enterprise t5140 and t5240 servers technical presentation
Sun sparc enterprise t5140 and t5240 servers technical presentationSun sparc enterprise t5140 and t5240 servers technical presentation
Sun sparc enterprise t5140 and t5240 servers technical presentation
 
IBM System x3850 X5 Technical Presenation abbrv.
IBM System x3850 X5 Technical Presenation abbrv.IBM System x3850 X5 Technical Presenation abbrv.
IBM System x3850 X5 Technical Presenation abbrv.
 
Sun sparc enterprise t5120 and t5220 servers technical presentation
Sun sparc enterprise t5120 and t5220 servers technical presentationSun sparc enterprise t5120 and t5220 servers technical presentation
Sun sparc enterprise t5120 and t5220 servers technical presentation
 
Q7 SoM presentation at FTF India,2011
Q7 SoM presentation at FTF India,2011Q7 SoM presentation at FTF India,2011
Q7 SoM presentation at FTF India,2011
 
IBM System x3850 X5 / x3950 X5 Product Guide
IBM System x3850 X5 / x3950 X5 Product GuideIBM System x3850 X5 / x3950 X5 Product Guide
IBM System x3850 X5 / x3950 X5 Product Guide
 
Nvidia Cuda Apps Jun27 11
Nvidia Cuda Apps Jun27 11Nvidia Cuda Apps Jun27 11
Nvidia Cuda Apps Jun27 11
 
First Look Webcast: OneCore Storage SDK 3.6 Roll-out and Walkthrough
First Look Webcast: OneCore Storage SDK 3.6 Roll-out and WalkthroughFirst Look Webcast: OneCore Storage SDK 3.6 Roll-out and Walkthrough
First Look Webcast: OneCore Storage SDK 3.6 Roll-out and Walkthrough
 
X3850x5techpresentation09 29-2010-101118124714-phpapp01
X3850x5techpresentation09 29-2010-101118124714-phpapp01X3850x5techpresentation09 29-2010-101118124714-phpapp01
X3850x5techpresentation09 29-2010-101118124714-phpapp01
 
Open Hardware and Future Computing
Open Hardware and Future ComputingOpen Hardware and Future Computing
Open Hardware and Future Computing
 
Windows Server on Cisco UCS – Simplify Your Operations!
Windows Server on Cisco UCS – Simplify Your Operations!Windows Server on Cisco UCS – Simplify Your Operations!
Windows Server on Cisco UCS – Simplify Your Operations!
 
IBM System x3850 X5 Technical Presentation
IBM System x3850 X5 Technical PresentationIBM System x3850 X5 Technical Presentation
IBM System x3850 X5 Technical Presentation
 
V Evohd Intel
V Evohd IntelV Evohd Intel
V Evohd Intel
 

En vedette

Cpsp overview of the subject power point
Cpsp overview of the subject power pointCpsp overview of the subject power point
Cpsp overview of the subject power pointProf Patrick McNamee
 
LinkedIn for Buisness - Half Day Workshop Design - Trigger Strategies
LinkedIn for Buisness - Half Day Workshop Design - Trigger StrategiesLinkedIn for Buisness - Half Day Workshop Design - Trigger Strategies
LinkedIn for Buisness - Half Day Workshop Design - Trigger StrategiesNeil Thornton HBA, MA
 
Cloenda club de lectura d'adults
Cloenda club de lectura d'adultsCloenda club de lectura d'adults
Cloenda club de lectura d'adultsBiblioteca Almenar
 
Literature searching
Literature searchingLiterature searching
Literature searchingFowler Susan
 
Change Management, 'What is Working in the Real World.'- HRPA Niagara - Septe...
Change Management, 'What is Working in the Real World.'- HRPA Niagara - Septe...Change Management, 'What is Working in the Real World.'- HRPA Niagara - Septe...
Change Management, 'What is Working in the Real World.'- HRPA Niagara - Septe...Neil Thornton HBA, MA
 

En vedette (20)

What is the current thinking on global climate change?
What is the current thinking on global climate change?What is the current thinking on global climate change?
What is the current thinking on global climate change?
 
THE 2013 HURRICANE SEASON IS ONLY 5 MONTHS AWAY
THE 2013 HURRICANE SEASON IS ONLY 5 MONTHS AWAYTHE 2013 HURRICANE SEASON IS ONLY 5 MONTHS AWAY
THE 2013 HURRICANE SEASON IS ONLY 5 MONTHS AWAY
 
Cpsp overview of the subject power point
Cpsp overview of the subject power pointCpsp overview of the subject power point
Cpsp overview of the subject power point
 
Part 3 Three Steps Towards Global Disaster Resilience
Part 3  Three Steps Towards Global Disaster Resilience Part 3  Three Steps Towards Global Disaster Resilience
Part 3 Three Steps Towards Global Disaster Resilience
 
Buying A Home
Buying A HomeBuying A Home
Buying A Home
 
Virtual embedding
Virtual embeddingVirtual embedding
Virtual embedding
 
Een vergadering digitale etalages
Een vergadering digitale etalagesEen vergadering digitale etalages
Een vergadering digitale etalages
 
The challenge of disaster resilience in the framework of 21st century reality
The challenge of disaster resilience in the framework of 21st century realityThe challenge of disaster resilience in the framework of 21st century reality
The challenge of disaster resilience in the framework of 21st century reality
 
THE 2014 OUTBREAK OF EBOLA: UNDERSTANDING DISEASE AND DISASTER RISK AND RISK ...
THE 2014 OUTBREAK OF EBOLA: UNDERSTANDING DISEASE AND DISASTER RISK AND RISK ...THE 2014 OUTBREAK OF EBOLA: UNDERSTANDING DISEASE AND DISASTER RISK AND RISK ...
THE 2014 OUTBREAK OF EBOLA: UNDERSTANDING DISEASE AND DISASTER RISK AND RISK ...
 
LinkedIn for Buisness - Half Day Workshop Design - Trigger Strategies
LinkedIn for Buisness - Half Day Workshop Design - Trigger StrategiesLinkedIn for Buisness - Half Day Workshop Design - Trigger Strategies
LinkedIn for Buisness - Half Day Workshop Design - Trigger Strategies
 
Cloenda club de lectura d'adults
Cloenda club de lectura d'adultsCloenda club de lectura d'adults
Cloenda club de lectura d'adults
 
Super Typhoon Usagi Headed Towards The Philippines, Taiwan, And China
Super Typhoon Usagi Headed Towards The Philippines, Taiwan, And ChinaSuper Typhoon Usagi Headed Towards The Philippines, Taiwan, And China
Super Typhoon Usagi Headed Towards The Philippines, Taiwan, And China
 
Part II The Case For A Major Paradigmn Shift Towards Disaster Resiliency Dur...
Part II  The Case For A Major Paradigmn Shift Towards Disaster Resiliency Dur...Part II  The Case For A Major Paradigmn Shift Towards Disaster Resiliency Dur...
Part II The Case For A Major Paradigmn Shift Towards Disaster Resiliency Dur...
 
Jornadaav
JornadaavJornadaav
Jornadaav
 
The 2013 Atlantic basin hurricane season and a look back at 2012
The 2013 Atlantic basin hurricane season and a look back at 2012The 2013 Atlantic basin hurricane season and a look back at 2012
The 2013 Atlantic basin hurricane season and a look back at 2012
 
Literature searching
Literature searchingLiterature searching
Literature searching
 
Change Management, 'What is Working in the Real World.'- HRPA Niagara - Septe...
Change Management, 'What is Working in the Real World.'- HRPA Niagara - Septe...Change Management, 'What is Working in the Real World.'- HRPA Niagara - Septe...
Change Management, 'What is Working in the Real World.'- HRPA Niagara - Septe...
 
積算ソフト虎の巻
積算ソフト虎の巻積算ソフト虎の巻
積算ソフト虎の巻
 
Part 3 Three Steps Towards Global Disaster Resilience
Part 3  Three Steps Towards Global Disaster ResiliencePart 3  Three Steps Towards Global Disaster Resilience
Part 3 Three Steps Towards Global Disaster Resilience
 
Lessons learned from past catastrophic flooding in russia
Lessons learned from past catastrophic flooding in russiaLessons learned from past catastrophic flooding in russia
Lessons learned from past catastrophic flooding in russia
 

Similaire à Power Blades Implementation

Fujitsu Presents Post-K CPU Specifications
Fujitsu Presents Post-K CPU SpecificationsFujitsu Presents Post-K CPU Specifications
Fujitsu Presents Post-K CPU Specificationsinside-BigData.com
 
Heterogeneous Computing : The Future of Systems
Heterogeneous Computing : The Future of SystemsHeterogeneous Computing : The Future of Systems
Heterogeneous Computing : The Future of SystemsAnand Haridass
 
Intro to Cell Broadband Engine for HPC
Intro to Cell Broadband Engine for HPCIntro to Cell Broadband Engine for HPC
Intro to Cell Broadband Engine for HPCSlide_N
 
Power8 sales exam prep
 Power8 sales exam prep Power8 sales exam prep
Power8 sales exam prepJason Wong
 
SUN主机产品介绍.ppt
SUN主机产品介绍.pptSUN主机产品介绍.ppt
SUN主机产品介绍.pptPencilData
 
00 opencapi acceleration framework yonglu_ver2
00 opencapi acceleration framework yonglu_ver200 opencapi acceleration framework yonglu_ver2
00 opencapi acceleration framework yonglu_ver2Yutaka Kawai
 
Power 7 Overview
Power 7 OverviewPower 7 Overview
Power 7 Overviewlambertt
 
NVMe Takes It All, SCSI Has To Fall
NVMe Takes It All, SCSI Has To FallNVMe Takes It All, SCSI Has To Fall
NVMe Takes It All, SCSI Has To Fallinside-BigData.com
 
QPACE QCD Parallel Computing on the Cell Broadband Engine™ (Cell/B.E.)
QPACE QCD Parallel Computing on the Cell Broadband Engine™ (Cell/B.E.)QPACE QCD Parallel Computing on the Cell Broadband Engine™ (Cell/B.E.)
QPACE QCD Parallel Computing on the Cell Broadband Engine™ (Cell/B.E.)Heiko Joerg Schick
 
Morello Technology Demonstrator Hardware Overview - Mark Inskip, Arm
Morello Technology Demonstrator Hardware Overview - Mark Inskip, ArmMorello Technology Demonstrator Hardware Overview - Mark Inskip, Arm
Morello Technology Demonstrator Hardware Overview - Mark Inskip, ArmKTN
 
Enterprise power systems transition to power7 technology
Enterprise power systems transition to power7 technologyEnterprise power systems transition to power7 technology
Enterprise power systems transition to power7 technologysolarisyougood
 
AMC & VPX Form Factor Boards With High Speed SERDES: Embedded World 2010
AMC & VPX Form Factor Boards With High Speed SERDES: Embedded World 2010AMC & VPX Form Factor Boards With High Speed SERDES: Embedded World 2010
AMC & VPX Form Factor Boards With High Speed SERDES: Embedded World 2010Altera Corporation
 
QsNetIII, An HPC Interconnect For Peta Scale Systems
QsNetIII, An HPC Interconnect For Peta Scale SystemsQsNetIII, An HPC Interconnect For Peta Scale Systems
QsNetIII, An HPC Interconnect For Peta Scale SystemsFederica Pisani
 
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex systemIbm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex systemIBM Switzerland
 

Similaire à Power Blades Implementation (20)

IBM I and blade center update 2009
IBM I and blade center update 2009IBM I and blade center update 2009
IBM I and blade center update 2009
 
Fujitsu Presents Post-K CPU Specifications
Fujitsu Presents Post-K CPU SpecificationsFujitsu Presents Post-K CPU Specifications
Fujitsu Presents Post-K CPU Specifications
 
Heterogeneous Computing : The Future of Systems
Heterogeneous Computing : The Future of SystemsHeterogeneous Computing : The Future of Systems
Heterogeneous Computing : The Future of Systems
 
Intro to Cell Broadband Engine for HPC
Intro to Cell Broadband Engine for HPCIntro to Cell Broadband Engine for HPC
Intro to Cell Broadband Engine for HPC
 
Power8 sales exam prep
 Power8 sales exam prep Power8 sales exam prep
Power8 sales exam prep
 
SUN主机产品介绍.ppt
SUN主机产品介绍.pptSUN主机产品介绍.ppt
SUN主机产品介绍.ppt
 
00 opencapi acceleration framework yonglu_ver2
00 opencapi acceleration framework yonglu_ver200 opencapi acceleration framework yonglu_ver2
00 opencapi acceleration framework yonglu_ver2
 
The Cell Processor
The Cell ProcessorThe Cell Processor
The Cell Processor
 
CSL_Cochin_c
CSL_Cochin_cCSL_Cochin_c
CSL_Cochin_c
 
Power 7 Overview
Power 7 OverviewPower 7 Overview
Power 7 Overview
 
NVMe Takes It All, SCSI Has To Fall
NVMe Takes It All, SCSI Has To FallNVMe Takes It All, SCSI Has To Fall
NVMe Takes It All, SCSI Has To Fall
 
QPACE QCD Parallel Computing on the Cell Broadband Engine™ (Cell/B.E.)
QPACE QCD Parallel Computing on the Cell Broadband Engine™ (Cell/B.E.)QPACE QCD Parallel Computing on the Cell Broadband Engine™ (Cell/B.E.)
QPACE QCD Parallel Computing on the Cell Broadband Engine™ (Cell/B.E.)
 
Morello Technology Demonstrator Hardware Overview - Mark Inskip, Arm
Morello Technology Demonstrator Hardware Overview - Mark Inskip, ArmMorello Technology Demonstrator Hardware Overview - Mark Inskip, Arm
Morello Technology Demonstrator Hardware Overview - Mark Inskip, Arm
 
Enterprise power systems transition to power7 technology
Enterprise power systems transition to power7 technologyEnterprise power systems transition to power7 technology
Enterprise power systems transition to power7 technology
 
AMC & VPX Form Factor Boards With High Speed SERDES: Embedded World 2010
AMC & VPX Form Factor Boards With High Speed SERDES: Embedded World 2010AMC & VPX Form Factor Boards With High Speed SERDES: Embedded World 2010
AMC & VPX Form Factor Boards With High Speed SERDES: Embedded World 2010
 
Andes RISC-V processor solutions
Andes RISC-V processor solutionsAndes RISC-V processor solutions
Andes RISC-V processor solutions
 
QsNetIII, An HPC Interconnect For Peta Scale Systems
QsNetIII, An HPC Interconnect For Peta Scale SystemsQsNetIII, An HPC Interconnect For Peta Scale Systems
QsNetIII, An HPC Interconnect For Peta Scale Systems
 
IBM HPC Transformation with AI
IBM HPC Transformation with AI IBM HPC Transformation with AI
IBM HPC Transformation with AI
 
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex systemIbm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
 
Phytium 64 core cpu preview
Phytium 64 core cpu previewPhytium 64 core cpu preview
Phytium 64 core cpu preview
 

Plus de Andrey Klyachkin

Matching Cisco and System p
Matching Cisco and System pMatching Cisco and System p
Matching Cisco and System pAndrey Klyachkin
 
Multiple Shared Processor Pools In Power Systems
Multiple Shared Processor Pools In Power SystemsMultiple Shared Processor Pools In Power Systems
Multiple Shared Processor Pools In Power SystemsAndrey Klyachkin
 
Buch des jungen Kämpfers mit AIX V6 - Teil 03
Buch des jungen Kämpfers mit AIX V6 - Teil 03Buch des jungen Kämpfers mit AIX V6 - Teil 03
Buch des jungen Kämpfers mit AIX V6 - Teil 03Andrey Klyachkin
 
Buch des jungen Kämpfers mit AIX V6 - Teil 02
Buch des jungen Kämpfers mit AIX V6 - Teil 02Buch des jungen Kämpfers mit AIX V6 - Teil 02
Buch des jungen Kämpfers mit AIX V6 - Teil 02Andrey Klyachkin
 
Buch des jungen Kämpfers mit AIX V6 - Teil 01
Buch des jungen Kämpfers mit AIX V6 - Teil 01Buch des jungen Kämpfers mit AIX V6 - Teil 01
Buch des jungen Kämpfers mit AIX V6 - Teil 01Andrey Klyachkin
 
Faster Than A Speeding Disk
Faster Than A Speeding DiskFaster Than A Speeding Disk
Faster Than A Speeding DiskAndrey Klyachkin
 
Power Systems 2009 Hardware
Power Systems 2009 HardwarePower Systems 2009 Hardware
Power Systems 2009 HardwareAndrey Klyachkin
 

Plus de Andrey Klyachkin (11)

Matching Cisco and System p
Matching Cisco and System pMatching Cisco and System p
Matching Cisco and System p
 
Multiple Shared Processor Pools In Power Systems
Multiple Shared Processor Pools In Power SystemsMultiple Shared Processor Pools In Power Systems
Multiple Shared Processor Pools In Power Systems
 
Buch des jungen Kämpfers mit AIX V6 - Teil 03
Buch des jungen Kämpfers mit AIX V6 - Teil 03Buch des jungen Kämpfers mit AIX V6 - Teil 03
Buch des jungen Kämpfers mit AIX V6 - Teil 03
 
Buch des jungen Kämpfers mit AIX V6 - Teil 02
Buch des jungen Kämpfers mit AIX V6 - Teil 02Buch des jungen Kämpfers mit AIX V6 - Teil 02
Buch des jungen Kämpfers mit AIX V6 - Teil 02
 
Buch des jungen Kämpfers mit AIX V6 - Teil 01
Buch des jungen Kämpfers mit AIX V6 - Teil 01Buch des jungen Kämpfers mit AIX V6 - Teil 01
Buch des jungen Kämpfers mit AIX V6 - Teil 01
 
Taking Advantage Of COD
Taking Advantage Of CODTaking Advantage Of COD
Taking Advantage Of COD
 
Racking Your System
Racking Your SystemRacking Your System
Racking Your System
 
Faster Than A Speeding Disk
Faster Than A Speeding DiskFaster Than A Speeding Disk
Faster Than A Speeding Disk
 
BladeCenter 101
BladeCenter 101BladeCenter 101
BladeCenter 101
 
Power Systems 2009 Hardware
Power Systems 2009 HardwarePower Systems 2009 Hardware
Power Systems 2009 Hardware
 
AIX and PowerVM Update
AIX and PowerVM UpdateAIX and PowerVM Update
AIX and PowerVM Update
 

Power Blades Implementation

  • 1. Power Blades Implementation Mike Schambureck with help from Janus Hertz schambur@us.ibm.com, IBM Systems Lab Services and Training STG Technical Conferences 2009 © 2009 IBM Corporation
  • 2. STG Technical Conferences 2009 Agenda Where to start an IBM i on blade implementation Hardware overview: – Power blade servers technical overview – New expansion adapters – BladeCenter S components and I/O connections – BladeCenter H components and I/O connections – Switch module portfolio – Expansion adapter portfolio for IBM i Virtualization overview – VIOS-based virtualization – IVM overview – Storage options for BladeCenter H and BladeCenter S – Multiple Virtual SCSI adapters – Virtual tape – Active Memory Sharing on blade 4Q 2009 enhancements 2 Power Blades Implementation © 2009 IBM Corporation
  • 3. STG Technical Conferences 2009 Where Do I Start with Installing IBM i on Blade? • Latest versions at: http://www.ibm.com/systems/power/hardware/blades/ibmi.html 3 Power Blades Implementation © 2009 IBM Corporation
  • 4. STG Technical Conferences 2009 IBM BladeCenter JS23 Express 2 sockets, 4 POWER6 cores @ 4.2 GHz Enhanced 65-nm lithography 32 MB L3 cache per socket 4 MB L2 cache per core 8 VLP DIMM slots, up to 64 GB memory FSP-1 service processor 2 x 1Gb embedded Ethernet ports (HEA) 2 PCIe connectors (CIOv and CFFh) 1 x onboard SAS controller Up to 1 SSD or SAS onboard disk EnergyScale™ power management PowerVM Hypervisor virtualization 4 Power Blades Implementation © 2009 IBM Corporation
  • 5. STG Technical Conferences 2009 IBM BladeCenter JS23 Express 5 Power Blades Implementation © 2009 IBM Corporation
  • 6. STG Technical Conferences 2009 IBM BladeCenter JS43 Express 4 sockets, 8 POWER6 cores @ 4.2 GHz Enhanced 65-nm lithography 32 MB L3 cache per socket 4 MB L2 cache per core 16 VLP DIMM slots, up to 128 GB memory FSP-1 service processor 4 x 1Gb embedded Ethernet ports (HEA) 4 PCIe connectors (CIOv and CFFh) + 1 x onboard SAS controller Up to 2 SSD or SAS onboard disks EnergyScale™ power management PowerVM Hypervisor virtualization 6 Power Blades Implementation © 2009 IBM Corporation
  • 7. STG Technical Conferences 2009 IBM BladeCenter JS43 Express SMP Unit Only 7 Power Blades Implementation © 2009 IBM Corporation
  • 8. STG Technical Conferences 2009 IBM BladeCenter JS12 SAS disk drive SAS disk drive 8 DDR2 DIMMs 64 GB max SAS Exp. Adapter 1 socket x 2 cores @ 3.8 GHz P5IOC2 I/O chip (2 HEA ports) PCI-X (CFFv) connections Service Processor PCIe (CFFh) connection 8 Power Blades Implementation © 2009 IBM Corporation
  • 9. STG Technical Conferences 2009 IBM BladeCenter JS22 4 DDR2 DIMMs SAS disk drive 2 sockets x 2 cores 32 GB max @ 4 GHz SAS Controller P5IOC2 I/O chip (2 IVE ports) PCI-X (CFFv) connections Service processor PCIe (CFFh) connection 9 Power Blades Implementation © 2009 IBM Corporation
  • 10. STG Technical Conferences 2009 CFFv and CFFh I/O Expansion Adapters Combination Form Factor (CFF) allows for 2 different expansion adapters on the same HSSM1 blade HSSM3 CFFv (Combo Form Factor – Vertical) Connects to PCI-X bus to provide access to switch CFFX CFFv SM3 modules in bays 3 & 4 SerDes SM4 Vertical switch form factor PCI-X Supported for IBM i: SAS (#8250) CFFh (Combo Form Factor – Horizontal) HSSM2 HSSM4 Connects to PCIe bus to provide access to the switch CFFE CFFh modules in bays 7 – 10 Horizontal switch form factor, unless MSIM used PCI-Express Supported for IBM i: Fibre Channel and Ethernet (#8252) Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html 10 Power Blades Implementation © 2009 IBM Corporation
  • 11. STG Technical Conferences 2009 CIOv and CFFh I/O Expansion Adapters Combination I/O Form Factor – Vertical is available only on JS23 and JS43 CFFv adapters not supported on JS23 and JS43 CIOv Connects to new PCIe bus to provide access to switch modules in bays 3 & 4 Vertical switch form factor Supported for IBM i: SAS passthrough (#8246), Fibre Channel (#8240, #8241, #8242) Can provide redundant FC connections CFFh Connects to PCIe bus to provide access to the switch modules in bays 7 – 10 Horizontal switch form factor, unless MSIM used Supported for IBM i: Fibre Channel and Ethernet Note: See IBM i on Power Blade Supported Environments for (#8252) hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html 11 Power Blades Implementation © 2009 IBM Corporation
  • 12. STG Technical Conferences 2009 Meet the BladeCenter S – Front View Service label cards slot enable quick and easy reference to BladeCenter S SAS and SATA disks can be mixed SAS disks recommended for IBM i production RAID 0, 1, 5, 0+1 supported with RAID SAS Switch Module (RSSM) Separate RAID arrays for IBM i recommended 7U Supports up to 6 BladeServers Shared USB ports and CD-RW / DVD-ROM Combo Battery Backup Units for use only with RAID SAS Switch Module 12 Power Blades Implementation © 2009 IBM Corporation
  • 13. STG Technical Conferences 2009 Meet the BladeCenter S – Rear View Hot-swap Power Supplies 3 & 4 are optional, Hot-swap Power Supplies 1 & 2 are Auto-sensing b/w 950W / 1450W standard, Auto-sensing b/w 950W / 1450W Power supplies 3 and 4 required if using > 1 blade 7U Top: AMM standard Bottom: Serial Pass-thru Module optional Four Blower modules standard Top(SW1) & Bottom(SW2) left: Ethernet Top(SW3) & Bottom(SW4) right: SAS Both CIOv (#8246) and CFFv (#8250) adapters supported 13 Power Blades Implementation © 2009 IBM Corporation
  • 14. STG Technical Conferences 2009 BladeCenter S Midplane - Blade to I/O Bay Mapping AMM Bay Blade “A” I/O Bay 1 #1 Ethernet Bay “B” Blade #2 Blade #3 Blade #4 Blade #5 I/O Bay 3 Blade #6 ENet Switch Fibre SAS Switch Bay SAS “A” “B” RAID Battery Bay PCI-X (CFFv) or PCIe (CIOv) D.C. Blade D.C. Blade #1 Blade Daughter Card D.C. Blade #2 eNet, Fibre, SAS, SAS RAID D.C. Blade #3 I/O Bay 4 #4 D.C. Blade ENet Switch #5 D.C. Blade Fibre #6 SAS SAS Switch Bay “A” RAID Battery Bay “B” C.C. Blade I/O Bay 2 PCI-E (CFFh) C.C. Blade #1 Blade Daughter Card C.C. Blade #2 Option Bay C.C. Blade #3 C.C. Blade #4 C.C. Blade #5 #6 BC-S Mid-Plane 14 Power Blades Implementation © 2009 IBM Corporation
  • 15. STG Technical Conferences 2009 BladeCenter H - front view Power Module 3 Power Filler Module 1 and Fan pack Front System HS20 Panel Blade # 1 9U CD DVD- drive Blade Filler Front USB Power Power Module 2 Module 4 Filler and Fan pack 15 Power Blades Implementation © 2009 IBM Corporation
  • 16. STG Technical Conferences 2009 IBM BladeCenter H - Rear View • Multi-Switch Interconnect Module • Ethernet switch (left side bay 9) I/O module bay 7 and 8 • Fibre Channel switch (right side bay 10) Power Power Connector 2 Connector 1 Ethernet SAS or I/O Module switch I/O Module bay 1 bay 3 Fibre I/O Module bay 5 Advanced Channel Management module Blower Module 1 Module 1 and 2 Advanced Ethernet Management switch I/O Module bay 2 Module 2 slot I/O Module bay 6 I/O Module bay 4 Rear LED panel and Serial connector Left Shuttle Right Shuttle release lever release lever I/O module bay 9 and 10 • Multi-Switch Interconnect Module • Ethernet switch (left side bay 9) • Fibre Channel switch (right side bay 10) 16 Power Blades Implementation © 2009 IBM Corporation
  • 17. STG Technical Conferences 2009 BCH: CFFv and CFFh I/O Connections Blade #N On-Board Dual Switch #1 Gbit Ethernet Ethernet On-Board Dual Gbit Ethernet M I Switch #2 POWER SAS CFFv Ethernet Expansion Card D Blade Server #1 P L Switch #3 QLogic CFFh A Expansion Card N Switch #4 E QLogic CFFh Expansion Card: Switch #7 • Provides 2 x 4Gb Fibre Channel connections to SAN • 2 Fibre Channel ports externalized via Switch 8 & 10 • Provides 2 x 1 Gb Ethernet ports for additional networking • 2 Ethernet ports externalized via Switch 7 & 9 Switch #8 SAS CFFv Expansion Card: • Provides 2 SAS ports for connection to SAS tape drive Switch #9 • 2 SAS ports externalized via Switch 3 & 4 Switch #10 17 Power Blades Implementation © 2009 IBM Corporation
  • 18. STG Technical Conferences 2009 BCH: CIOv and CFFh I/O Connections Blade #N On-Board Dual Switch #1 Gbit Ethernet Ethernet On-Board Dual Gbit Ethernet M I Switch #2 POWER CIOv Expansion Ethernet Card D Blade Server #1 P L Switch #3 QLogic CFFh A Expansion Card N Switch #4 CIOv Expansion Card: E Switch #7 • 2 x 8Gb or 2 x 4Gb Fibre Channel • OR, 2 x 3Gb SAS passthrough • Uses 4Gb or 8Gb FC vertical switches in bays 3 & 4 Switch #8 • OR, 3Gb SAS vertical switches in bays 3 & 4 • Redundant FC storage connection option for IBM i CFFh Expansion Card: Switch #9 • 2 x 4Gb and 2 x 1Gb Ethernet Switch #10 18 Power Blades Implementation © 2009 IBM Corporation
  • 19. STG Technical Conferences 2009 BladeCenter Ethernet I/O Modules Nortel Layer 2/3 Gb Cisco Systems Nortel L2-7 GbE Switch Nortel L2/3 10GbE Ethernet Switch Intelligent Gb Ethernet Module Uplink Switch Module Modules Switch Module Copper Pass-Through Nortel 10Gb Ethernet Intelligent Copper Module Switch Module Pass-Through Module Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html 19 Power Blades Implementation © 2009 IBM Corporation
  • 20. STG Technical Conferences 2009 BladeCenter Fibre Channel I/O Modules Cisco 4Gb 10 and 20 Brocade 4Gb 10 and QLogic 8Gb 20 port QLogic 4Gb 10 and 20 port Fibre Channel 20 port Fibre Channel Fibre Channel Switch port Fibre Channel Switch Modules Switch Modules Module Switch Module Brocade Intelligent 8Gb Brocade Intelligent 4Gb Pass-Thru Fibre Channel Pass-Thru Fibre Channel Switch Module Switch Module Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html 20 Power Blades Implementation © 2009 IBM Corporation
  • 21. STG Technical Conferences 2009 BladeCenter SAS I/O Modules BladeCenter S SAS RAID Controller Module • Supported only in BladeCenter S • RAID support for SAS drives in chassis • Supports SAS tape attachment • No support for attaching DS3200 • 2 are always required BladeCenter SAS Controller Module • Supported in BladeCenter S and BladeCenter H • No RAID support • Supports SAS tape attachment • Supports DS3200 attachment Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html 21 Power Blades Implementation © 2009 IBM Corporation
  • 22. STG Technical Conferences 2009 SAS RAID Controller Switch Module RAID controller support provides additional protection options for BladeCenter S storage SAS RAID Controller Switch Module – High-performance, fully duplex, 3Gbps speeds – Support for RAID 0, 1, 5, & 10 – Supports 2 disk storage modules with up to 12 SAS drives – Supports external SAS tape drive – Supports existing #8250 CFFv SAS adapter on blade – 1GB of battery-backed write cache between the 2 modules – Two SAS RAID Controller Switch Modules (#3734) required Supports Power and x86 Blades – Recommend separate RAID sets – For each IBM i partition – For IBM i and Windows storage – Requirements – Firmware update for SAS RAID Controller Switch Modules – VIOS 2.1.1, eFW 3.4.2 Note: Does not support connection to DS3200 IBM i is not pre-installed with RSSM configurations 22 Power Blades Implementation © 2009 IBM Corporation
  • 23. STG Technical Conferences 2009 Multi-switch Interconnect Module for BCH • Installed in high-speed bays 7 & 8 and/or 9 & 10 • Allows a “vertical” switch to be MSIM installed and use the “horizontal” high- speed fabric (bays 7 – 10) • High-speed fabric is used by CFFh expansion adapters • Fibre Channel switch module must be installed in right I/O module bay (switch bay 8 or 10) • If additional Ethernet networking required additional Ethernet switch module can be installed in left I/O module bay (switch bay 7 or 9) 23 Power Blades Implementation © 2009 IBM Corporation
  • 24. STG Technical Conferences 2009 I/O Expansion Adapters #8252 QLogic Ethernet and 4Gb Fibre #8250 LSI 3Gb SAS Dual Channel Expansion Card (CFFh) Port Expansion Card (CFFv) #8246 3Gb SAS #8240 Emulex 8Gb #8242 QLogic 8Gb #8241 QLogic 4Gb Passthrough Expansion Fibre Channel Fibre Channel Fibre Channel Card (CIOv) Expansion Card (CIOv) Expansion Card (CIOv) Expansion Card (CIOv) Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html 24 Power Blades Implementation © 2009 IBM Corporation
  • 25. STG Technical Conferences 2009 Virtualization Overview 25 Power Blades Implementation © 2009 IBM Corporation
  • 26. STG Technical Conferences 2009 VIOS, IVM and i on Power Blade Linux AIX VIOS = Virtual I/O Server = Client Client virtualization software in a partition HEA HEA HEA HEA Does not run other applications First LPAR installed on blade VIOS owns physical hardware (Fibre CFFh FC USB and/or CFFv SAS exp card Channel, Ethernet, DVD, SAS) exp card SAS HEA or CIOv FC exp card or VIOS virtualizes disk, DVD, CIOv SAS exp card VIOS / IVM SSD networking, tape to i partitions SAS Switch FC Switch IVM = Integrated Virtualization Manager = browser interface to manage DS3400 LAN DVD DS4700 partitions, virtualization DS3200* DS4800 IVM / Virtual Op Panel SAS-attached DS8100 IVM installed with VIOS LTO4 tape drive DS8300 (virtual tape) SVC i uses LAN console through Virtual AMM / LAN Console Ethernet bridge in VIOS * Not supported with RSSM 26 Power Blades Implementation © 2009 IBM Corporation
  • 27. STG Technical Conferences 2009 Integrated Virtualization Manager (IVM) Introduction Browser-based interface, supports Mozilla Firefox and Internet Explorer Part of VIOS, no extra charge or installation Performs LPAR and virtualization management on POWER6 blade 27 Power Blades Implementation © 2009 IBM Corporation
  • 28. STG Technical Conferences 2009 IVM Example: Create i Partition Fewer steps than HMC IVM uses several defaults Virtual I/O resources only for IBM i partitions 28 Power Blades Implementation © 2009 IBM Corporation
  • 29. STG Technical Conferences 2009 Storage, Tape and DVD for i on JS12/JS22 in BCH MSIM with Fibre Channel I/O module inside VIOS Host i Client Fibre Channel Storage hdiskX LUNs DDxx CFFh Fibre Channel BladeCenter midplane I/O module Virtual SCSI connection SAS Storage and/or tape DS3200 CFFv Virtual SCSI SAS I/O module connection TS2240 USB OPTxx /dev/cd0 DVD DVD Media tray Power Blade With BCH and JS12/JS22, IBM i can use: Fibre Channel storage (MSIM, FC module and CFFh adapter required) SAS storage (SAS module and CFFv adapter required) SAS tape (SAS module and CFFv adapter required) USB DVD in BladeCenter Physical I/O resources are attached to VIOS, assigned to IBM i in IVM Storage LUNs (physical volumes) assigned directly to IBM i; storage pools in VIOS not used 29 Power Blades Implementation © 2009 IBM Corporation
  • 30. STG Technical Conferences 2009 Storage, Tape and DVD for i on JS23/JS43 in BCH MSIM with Fibre Channel I/O module inside VIOS Host i Client Fibre Channel Storage hdiskX LUNs DDxx CFFh Fibre Channel BladeCenter midplane I/O module Virtual SCSI connection CIOv SAS Storage and/or tape OR DS3200 CIOv Virtual SCSI SAS I/O module connection TS2240 USB OPTxx /dev/cd0 DVD DVD Media tray Power Blade With BCH and JS23/JS43, IBM i can use: Fibre Channel storage (MSIM, FC module and CFFh adapter required; or FC module and CIOv adapter required) Redundant FC adapters can be configured (CFFh and CIOv) SAS storage (SAS module and CIOv adapter required) SAS tape (SAS module and CIOv adapter required) USB DVD in BladeCenter Physical I/O resources are attached to VIOS, assigned to IBM i in IVM Storage LUNs (physical volumes) assigned directly to IBM i; storage pools in VIOS not used 30 Power Blades Implementation © 2009 IBM Corporation
  • 31. STG Technical Conferences 2009 Storage, Tape and DVD for i on JS12/JS22 in BCS SAS drives in BCS Non-RAID SAS VIOS Host IBM i Client module in I/O Bay 3/4 hdiskX LUNs DDxx BladeCenter midplane Virtual SCSI connection RAID SAS module SAS TS2240 in I/O Bay 3 & 4 CFFv DS3200 Virtual SCSI connection USB OPTxx /dev/cd0 DVD DVD Media tray Power Blade With BCS and JS12/JS22, IBM i can use: SAS storage (SAS module and CFFv adapter required) SAS tape (SAS module and CFFv adapter required) USB DVD Drives in BCS, TS2240, DS3200 supported with Non-RAID SAS Switch Module (NSSM) Only drives in BCS and TS2240 supported with RAID SAS Switch Module (RSSM) Physical I/O resources are attached to VIOS, assigned to IBM i in IVM Storage LUNs (physical volumes) assigned directly to IBM i; storage pools in VIOS not used 31 Power Blades Implementation © 2009 IBM Corporation
  • 32. STG Technical Conferences 2009 Storage, Tape and DVD for i on JS23/JS43 in BCS SAS drives in BCS Non-RAID SAS VIOS Host IBM i Client module in I/O Bay 3/4 hdiskX LUNs DDxx BladeCenter midplane Virtual SCSI connection RAID SAS module SAS TS2240 in I/O Bay 3 & 4 CIOv DS3200 Virtual SCSI connection USB OPTxx /dev/cd0 DVD DVD Media tray Power Blade With BCS and JS23/JS43, IBM i can use: SAS storage (SAS module and CIOv adapter required) SAS tape (SAS module and CIOv adapter required) USB DVD Drives in BCS, TS2240, DS3200 supported with Non-RAID SAS Switch Module (NSSM) Only drives in BCS and TS2240 supported with RAID SAS Switch Module (RSSM) Physical I/O resources are attached to VIOS, assigned to IBM i in IVM Storage LUNs (physical volumes) assigned directly to IBM i; storage pools in VIOS not used 32 Power Blades Implementation © 2009 IBM Corporation
  • 33. STG Technical Conferences 2009 Storage and Tape Support Storage support – BladeCenter H and JS12/JS22/JS23/JS43: – SAS – DS3200 – Fibre Channel – DS3400, DS4700, DS4800, DS8100, DS5020, DS5100, DS5300, XIV, DS8300, DS8700, SVC – Multiple storage subsystems supported with SVC – BladeCenter S and JS12/JS22/JS23/JS43: – SAS – BCS drives; DS3200 (only with NSSM) Tape support – BladeCenter H and BladeCenter S: – TS2240 LTO-4 SAS – supported for virtual tape and for VIOS backups – TS2230 LTO-3 SAS – not supported for virtual tape, only for VIOS backups – NEW support for Fibre Channel tape library support announced 20/10/2009! – Enables access to tape libraries 3584 (TS3500) and 3573 (TS3100 and TS3200) – Requires selected 8GB Fibre Channel Adapters 33 Power Blades Implementation © 2009 IBM Corporation
  • 34. STG Technical Conferences 2009 Configuring Storage for IBM i on Blade Step 1: Perform sizing – Use Disk Magic, where applicable – Use the PCRM, Ch. 14.5 – http://www.ibm.com/systems/i/advantages/perfmgmt/resource.html – Number of physical drives is still most important – VIOS itself does not add significant disk I/O overhead – For production workloads, keep each i partition on a separate RAID array Step 2: Use appropriate storage UI and Redbook for your environment to create LUNs for IBM i and attach to VIOS (or use TPC or SSPC where applicable) Storage Configuration DS Storage Manager for DS8000 Storage Manager SVC Console for Manager for NSSM and DS3200, DS3400, DS4700, for DS8100 and DS8300 SVC RSSM DS4800 34 Power Blades Implementation © 2009 IBM Corporation
  • 35. STG Technical Conferences 2009 Configuring Storage for IBM i on Blade, Cont. Step 3: Assign LUNs or physical drives in BCS to IBM i – ‘cfgdev’ in VIOS CLI necessary to detect new physical volumes if VIOS is running – Virtualize whole LUNs/drives (“physical volumes”) to IBM i – Do not use storage pools in VIOS 35 Power Blades Implementation © 2009 IBM Corporation
  • 36. STG Technical Conferences 2009 Multiple Virtual SCSI Adapters for IBM i Since VIOS 2.1 in November 2008, IBM i is no longer limited to 1 VSCSI connection to VIOS and 16 disk + 16 optical devices What IVM will do: – Create 1 VSCSI server adapter in VIOS for each IBM i partition created – Create 1 VSCSI client adapter in IBM i and correctly map to Server adapter – Map any disk and optical devices you assign to IBM i to the first VSCSI server adapter in VIOS – Create a new VSCSI server-client adapter pair only when you assign a tape device to IBM i – Create another VSCSI server-client adapter pair when you assign another tape device What IVM will not do: – Create a new VSCSI server-client adapter pair if you assign more than 16 disk devices to IBM i 36 Power Blades Implementation © 2009 IBM Corporation
STG Technical Conferences 2009

Multiple Virtual SCSI Adapters for IBM i, Cont.

    Scenario I: you have <=16 disk devices and you want to add virtual tape
     – Action required in VIOS:
       – In IVM, click on the tape drive and assign it to the IBM i partition
       – A separate VSCSI server-client adapter pair is created automatically
    Scenario II: you have 16 disk devices and you want to add more disk and virtual tape
     – Actions required in VIOS (sketched in the example below):
       – In the VIOS CLI, create a new VSCSI client adapter in IBM i
         – The VSCSI server adapter in VIOS is created automatically
       – In the VIOS CLI, map the new disk devices to the new VSCSI server adapter using ‘mkvdev’
       – In IVM, click on the tape drive and assign it to the IBM i partition
    For details and instructions, see IBM i on Blade Read-me First: http://www.ibm.com/systems/power/hardware/blades/ibmi.html

37    Power Blades Implementation    © 2009 IBM Corporation
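A rough VIOS command-line sketch of Scenario II. The partition ID, slot numbers, and device names are assumptions for illustration, and the exact format of the virtual_scsi_adapters attribute should be verified against the Read-me First:

    # add a second VSCSI client adapter to IBM i partition 2 in virtual slot 5 (illustrative values);
    # IVM creates the matching server adapter in VIOS automatically
    $ chsyscfg -r prof -i 'lpar_id=2,"virtual_scsi_adapters+=5/client/1/VIOS/16/0"'
    # detect the new devices and map additional disks to the new server adapter (vhost1 is illustrative)
    $ cfgdev
    $ mkvdev -vdev hdisk17 -vadapter vhost1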
STG Technical Conferences 2009

IBM i Support for Virtual Tape

    Virtual tape support enables IBM i partitions to back up directly to a PowerVM VIOS-attached tape drive, saving hardware costs and management time
    Simplifies backup and restore processing with BladeCenter implementations
     – IBM i 6.1 partitions on BladeCenter JS12, JS22, JS23, JS43
     – Supports IBM i save/restore commands & BRMS
     – Supports BladeCenter S and H implementations
    Simplifies migration to blades from tower/rack servers
     – An LTO-4 drive can read backup tapes from LTO-2, 3, and 4 drives
    Supported drives
     – IBM System Storage TS2240 SAS LTO-4 drive: for BladeCenter ONLY
     – Fibre Channel attached tape libraries 3584 (TS3500) and 3573 (TS3100 and TS3200)
    Requirements
     – VIOS 2.1.1, eFW 3.4.2, IBM i 6.1 PTFs

38    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

Virtual Tape Hardware and Virtualization

[Diagram: a SAS-attached TS2240 LTO-4 tape drive connects through a SAS I/O module (either via a CFFv adapter, or a RAID SAS I/O module via a CIOv adapter) across the BladeCenter midplane to the VIOS host; VIOS presents /dev/rmt0 over a separate Virtual SCSI connection to the IBM i client, where it appears as TAP01, type 3580 model 004]

    TS2240 LTO-4 SAS tape drive attached to a SAS switch in BladeCenter:
     – NSSM or RSSM in BCS (shown above)
     – NSSM in BCH
    Fibre Channel attached tape libraries 3584 (TS3500) and 3573 (TS3100 and TS3200) in BCH
    VIOS virtualizes the tape drive to IBM i directly (see the sketch below)
    Tape drive assigned to IBM i in IVM
    Tape drive available in IBM i as TAPxx, type 3580 model 004

39    Power Blades Implementation    © 2009 IBM Corporation
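A short VIOS sketch for verifying and mapping the drive from the command line; assigning the drive in the IVM UI has the same effect, and the device names here are illustrative:

    $ lsdev | grep rmt                      # the TS2240 should show up as a physical tape device, e.g. rmt0
    $ mkvdev -vdev rmt0 -vadapter vhost1    # map the drive to the IBM i partition's tape VSCSI server adapter
    $ lsmap -vadapter vhost1                # verify the virtual target device was created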
STG Technical Conferences 2009

Assigning Virtual Tape to IBM i

    No action is required in IBM i to make the tape drive available
     – If QAUTOCFG is on (the default); a quick check is sketched below

40    Power Blades Implementation    © 2009 IBM Corporation
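An IBM i-side sanity check that the drive arrived; TAP01 is an illustrative device name:

    DSPSYSVAL SYSVAL(QAUTOCFG)                        /* '1' means devices auto-configure          */
    WRKCFGSTS CFGTYPE(*DEV) CFGD(TAP*)                /* virtual drive appears as TAPxx, 3580-004  */
    VRYCFG CFGOBJ(TAP01) CFGTYPE(*DEV) STATUS(*ON)    /* only needed if not already varied on      */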
STG Technical Conferences 2009

Migrating IBM i to Blade

    Virtual tape makes migration to blade similar to migration to a tower/rack server:
     – On the existing system, run a GO SAVE option 21 save to tape media
     – On the blade, use virtual tape to perform a D-mode IPL and complete the restore
     – The existing system does not have to be at IBM i 6.1
       – Previous-to-current migration is also possible
    An IBM i partition saved on a blade can be restored on a tower/rack server
     – IBM i can save to tape media on the blade (see the sketch below)
    For existing servers that do not have access to a tape drive, there are two options:
     – Save on different media, convert to a supported tape format as a service, restore from tape
     – Use the Migration Assistant method

41    Power Blades Implementation    © 2009 IBM Corporation
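A hedged sketch of the save-side CL commands involved; MYLIB and TAP01 are illustrative names, and the full migration procedure is in the Read-me First:

    GO SAVE                                       /* then choose option 21, entire system         */
    SAVLIB LIB(MYLIB) DEV(TAP01) ENDOPT(*UNLOAD)  /* or save individual libraries to virtual tape */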
STG Technical Conferences 2009

Networking on Power Blade

[Diagram: the embedded Ethernet (IVE/HEA) ports on the blade connect through the BladeCenter midplane to an Ethernet I/O module and the LAN; VIOS (10.10.10.35) bridges an IVE/HEA port to the Virtual LAN, serving the i client's LAN console interface (CMN01, 10.10.10.37) and production interface (CMN02, 10.10.10.38); a local PC (10.10.10.20) runs the AMM browser, IVM browser, and LAN console connections]

    VIOS is accessed from a local PC via the embedded Ethernet ports on the blade (IVE/HEA)
     – For both the IVM browser and the VIOS command line
    The same PC can be used to connect to the AMM and for LAN console for IBM i
    For i connectivity, an IVE/HEA port is bridged to the Virtual LAN (see the sketch below)

42    Power Blades Implementation    © 2009 IBM Corporation
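In IVM the bridge is normally configured through the virtual Ethernet pages; the VIOS command-line equivalent is a Shared Ethernet Adapter mapping, roughly as below. The device names are illustrative, and this assumes the HEA logical port is enabled for bridging:

    $ lsdev | grep ent                       # identify the HEA logical port (e.g. ent0) and virtual adapter (e.g. ent2)
    $ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1    # bridge the physical port to Virtual LAN ID 1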
STG Technical Conferences 2009

LAN Console for i on Power Blade

    Required for i on a Power blade
    Uses System i Access software on a PC (can use the same PC as for the IVM connection)
    Full console functionality
    Uses existing LAN console capability

43    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

PowerVM Active Memory Sharing

    PowerVM Active Memory Sharing is an advanced memory virtualization technology which intelligently flows memory from one partition to another for increased utilization and flexibility of memory usage
    Memory virtualization enhancement for Power Systems
     – Partitions share a pool of memory
     – Memory dynamically allocated based on each partition’s workload demands
    Extends Power Systems virtualization leadership
     – Capabilities not provided by Sun and HP virtualization offerings
    Designed for partitions with variable memory requirements
     – Workloads that peak at different times across the partitions
     – Active/inactive environments
     – Test and development environments
     – Low average memory requirements
    Available with PowerVM Enterprise Edition
     – Supports AIX 6.1, i 6.1, and SUSE Linux Enterprise Server 11
     – Partitions must use VIOS and shared processors
     – POWER6 processor-based systems

[Charts: memory usage (GB) over time for three sharing scenarios: “Around the World” (Asia, Americas, and Europe peaking at different times), “Day and Night” (day vs. night workloads), and “Infrequent Use” (ten partitions, #1-#10, each active only occasionally)]

44    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

IVM Example: Working with AMS

[Screenshot: IVM panel for working with Active Memory Sharing]

45    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

Enhancements for IBM i and Power Blades

    N_Port ID Virtualization (NPIV) support for IBM i
     – Provides direct Fibre Channel connections from client partitions to SAN resources
     – Simplifies the management of Fibre Channel SAN environments
     – Enables access to Fibre Channel tape libraries
     – Supported with PowerVM Express, Standard, and Enterprise Editions
     – Power blades with an 8Gb PCIe Fibre Channel adapter

[Diagram: virtual FC adapters in client partitions connect through the Power Hypervisor to the physical FC adapter owned by VIOS]

46    Power Blades Implementation    © 2009 IBM Corporation
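A hedged VIOS command-line sketch of wiring up an NPIV mapping; vfchost0 and fcs0 are illustrative names:

    $ lsnports                               # list physical FC ports and whether the fabric supports NPIV
    $ vfcmap -vadapter vfchost0 -fcp fcs0    # bind the IBM i partition's virtual FC adapter to a physical 8Gb port
    $ lsmap -npiv -vadapter vfchost0         # verify the mapping and the client's WWPNs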
STG Technical Conferences 2009

Virtual SCSI vs. NPIV

[Diagram: side-by-side comparison; under Virtual SCSI, IBM i sees generic SCSI disks served by VIOS over the SCSI protocol, with VIOS FC HBAs attached to a SAN with DS5000; under NPIV, IBM i sees the actual devices (e.g. EMC) over FCP through VIOS-owned FC HBAs attached to a SAN with SVC]

    In the VSCSI model for sharing storage resources, the VIOS is a storage virtualizer. Heterogeneous storage is pooled by the VIOS into a homogeneous pool of block storage and then allocated to client LPARs in the form of generic SCSI LUNs. The VIOS performs SCSI emulation and acts as the SCSI target.
    With NPIV, the VIOS’ role is fundamentally different. The VIOS facilitates adapter sharing only; there is no device-level abstraction or emulation. Rather than a storage virtualizer, the VIOS serving NPIV is a passthrough, providing an FCP connection from the client to the SAN.

47    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

Additional 4Q Enhancements for IBM i on Blade

    QLogic 1Gb Ethernet and 8Gb Fibre Channel Expansion Card (CFFh)
     – Support for IBM i (through VIOS) and AIX
     – Supported on JS12, JS22, JS23, JS43
     – Only adapter with NPIV support for JS12 and JS22
     – FC ports supported only, not Ethernet
    10GbE/8Gb FC Converged Network Adapter (CFFh)
     – Support for 10Gb Ethernet and 8Gb FC (FC over Ethernet)
     – FC support for IBM i is with VSCSI only
     – NPIV not supported

48    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

IBM i and BladeCenter S: System & Metode, Denmark

    www.system-method.com
     – IBM Business Partner
     – Software solutions & hosting company; focuses on very small / old existing installations
    Configuration
     – 1 BladeCenter S chassis
     – 1 JS12 POWER6 blade
     – 2 HS21 x86 blades
     – Provides hosting services to several clients/companies
     – 1 IBM Virtual I/O Server 2.1 (VIOS) host LPAR
     – 3 IBM i 6.1 client LPARs, for different customers
    Pros:
     – Cheap hardware compared to traditional Power servers
     – Possible to win customers that would potentially have switched to the “dark side…”
     – Flexible
    Cons:
     – Complex; requires three different skill sets (Blade, VIOS, IBM i)
     – Difficult backup in the early stages (2-step process); now great with virtual tape

49    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

IBM Systems Lab Services Virtualization Program

    What is it?
     – Free presales technical assistance from Lab Services
     – Help with virtualization solutions:
       – Open storage
       – Power blades
       – IBM Systems Director VMControl
       – Other PowerVM technologies
     – Design the solution, hold a Q&A session with the client, verify the hardware configuration
    Who can use it?
     – IBMers, Business Partners, clients
    How do I use it?
     – Contact Lab Services for the nomination form; send the form in
     – Participate in an assessment call with the Virtualization Program team
     – Work with a dedicated Lab Services technical resource to design the solution before the sale

50    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

Service Voucher for IBM i on Power Blade

    Let IBM Systems Lab Services and Training help you install i on blade!
    1 service voucher for each Power blade AND IBM i license purchased
    http://www.ibm.com/systems/i/hardware/editions/services.html

51    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

Further Reading

    IBM i on Blade Read-me First: http://www.ibm.com/systems/power/hardware/blades/ibmi.html
    IBM i on Blade Supported Environments: http://www.ibm.com/systems/power/hardware/blades/ibmi.html
    IBM i on Blade Performance Information: http://www.ibm.com/systems/i/advantages/perfmgmt/resource.html
    Service vouchers: http://www.ibm.com/systems/i/hardware/editions/services.html
    IBM i on Blade Training: http://www.ibm.com/systems/i/support/itc/educ.html

52    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

Trademarks and Disclaimers

© IBM Corporation 1994-2007. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

Trademarks of International Business Machines Corporation in the United States, other countries, or both can be found on the World Wide Web at http://www.ibm.com/legal/copytrade.shtml.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce. ITIL is a registered trademark, and a registered community trademark, of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. UNIX is a registered trademark of The Open Group in the United States and other countries. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind. The customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Prices are suggested U.S. list prices and are subject to change without notice. Starting price may not include a hard drive, operating system or other features. Contact your IBM representative or Business Partner for the most current pricing in your geography. Photographs shown may be engineering prototypes. Changes may be incorporated in production models.

53    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

Special notices

This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM offerings available in your area.

Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. Send license inquires, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY 10504-1785 USA.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or guarantees either expressed or implied.

All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations and conditions.

IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal without notice.

IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies. All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.

Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document should verify the applicable data for their specific environment.

Revised September 26, 2006

54    Power Blades Implementation    © 2009 IBM Corporation
STG Technical Conferences 2009

Special notices (cont.)

IBM, the IBM logo, ibm.com, AIX, AIX (logo), AIX 6 (logo), AS/400, BladeCenter, Blue Gene, ClusterProven, DB2, ESCON, IBM i, IBM i (logo), IBM Business Partner (logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries, Rational, RISC System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, AIX 5L, Chiphopper, Chipkill, Cloudscape, DB2 Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Purpose File System, GPFS, HACMP, HACMP/6000, HASM, IBM Systems Director Active Energy Manager, iSeries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power Architecture, Power Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo), POWER2, POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, System i, System p, System p5, System Storage, System z, Tivoli Enterprise, TME 10, Workload Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both.

If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml

The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org. UNIX is a registered trademark of The Open Group in the United States, other countries or both. Linux is a registered trademark of Linus Torvalds in the United States, other countries or both. Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both. Intel, Itanium, Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both. AMD Opteron is a trademark of Advanced Micro Devices, Inc. Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both. TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC). SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are trademarks of the Standard Performance Evaluation Corp (SPEC). NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both. AltiVec is a trademark of Freescale Semiconductor, Inc. Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc. InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association. Other company, product and service names may be trademarks or service marks of others.

Revised April 24, 2008

55    Power Blades Implementation    © 2009 IBM Corporation