4
HP BladeSystem
BladeSystem is a line of blade server systems from Hewlett-Packard that was introduced in October 2004.
The BladeSystem forms part of the HP Converged Systems,
which use a common converged infrastructure architecture for
server, storage, and networking products.
Designed for enterprise installations of 100 to more than 1,000
virtual machines, the HP ConvergedSystem is configured
with BladeSystem servers.
When managing a software-defined data center, a system
administrator can perform automated lifecycle management
for BladeSystems.
Blades allow a company to simplify its entire infrastructure
through consolidation, integration, and unified management.
A BladeSystem offers redundancy for parts and paths that can fail,
as well as reliability for any nonredundant components. An HP
BladeSystem solution can help customers operate more efficiently,
grow their business, and control IT infrastructure costs. The HP
BladeSystem solution can help a business:
• Consolidate the data center
• Migrate from an old data center into a bladed one
• Upgrade the data center infrastructure to a more effective solution
• Reduce costs
• Adapt to future changes
• Save energy
• Manage the data center
6
BladeSystem Advantages
The HP BladeSystem solution begins with an enclosure that
holds the various hardware elements. The enclosure has
built‐in resources that result in a modular, self‐tuning unit with
built‐in intelligence.
All blades in an enclosure connect to a common midplane,
allowing administrators to centrally monitor and manage
them. HP BladeSystem enclosures are flexible and scalable.
You can add server blades, storage blades, and interconnects as
requirements evolve.
We will look first at the two types of BladeSystem enclosures.
Next, we will overview common server blades. Finally, we will
review two other components: storage blades and
interconnects.
7
BladeSystem components
8
Blade Enclosures
There are two types of rack-mount BladeSystem enclosures:
• c7000 enclosure
• c3000 enclosure
c7000: 16 device bays; 8 interconnect bays; up to 6 power supplies (960W/2250W output or 2400W output); input power options of HP 2250W single-phase AC (100-120V, 200-240V), HP 2400W single-phase AC (200-240V), three-phase NA/JPN (200-208V Delta), three-phase international (380-415V Wye), and worldwide -48V DC; up to 10 fans.
c3000: 8 device bays; 4 interconnect bays; up to 6 power supplies (800/900/1200W PSU); input power options of single-phase AC (100-120V, 200-208V) and -48V DC; up to 6 fans.
12
c3000 & c7000 Enclosures
13
Onboard Administrator
Device bays are located in the front of the enclosure, as shown in
the figure. The Onboard Administrator HP Insight Display, which
provides a menu-driven interface for initial configuration and
troubleshooting, is also located on the front of the enclosure.
14
Onboard Administrator
Interconnect bays, power supplies, and fans are located in the
rear of the enclosure. There is also an optional KVM (keyboard,
video, mouse) switch connector and connectors for iLO
(Integrated Lights-Out) and the Onboard Administrator.
15
Insight Display
HP Insight Display is a standard component of the c3000 and c7000
enclosures. It provides an interface that can be used for initial
enclosure configuration, but it is also a valuable tool during the
troubleshooting process. If a problem occurs, the display changes color
and blinks to get the attention of an administrator. The Insight Display
can even be used to upgrade the Onboard Administrator firmware.
16
Insight Display
As shown in the figure above, the following menus are available
to an administrator standing in front of the blade enclosure:
• Health Summary: displays the current condition of the enclosure.
• Enclosure Settings: enables configuration of the enclosure, including Power Mode, Power Limit, Dynamic Power, IP addresses for OA modules, enclosure name, and rack name. It is also used for connecting a DVD drive to the blades and setting the lockout PIN.
• Enclosure Info: displays the current enclosure configuration.
• Blade or Port Info: presents basic information about the server blade configuration and port mapping.
17
Insight Display
• Turn Enclosure UID On: turns on the enclosure identification LED. When this is selected, the display background color changes to blue, and a blue LED is visible at the rear of the enclosure.
• View User Note: displays six lines of text, each containing a maximum of 16 characters. This screen can be used to display contact information or other important information for users working on-site with the enclosure.
• Chat Mode: enables communication between the person standing in front of the enclosure and the administrator managing the enclosure through the Onboard Administrator.
• USB Key Menu: allows an administrator to update OA firmware or to save or restore the OA configuration when using a USB stick plugged into the USB port on an OA module.
18
Onboard Administrator
The OA allows you to view information about the enclosure and to
manage various enclosure, blade, and interconnect configuration
parameters. When the enclosure is connected to a management
network, you can access the OA through a browser or an SSH
connection. The figure shows the login screen that you see when
you access the OA through a browser.
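As a rough illustration of scripted access, the sketch below uses the Python paramiko library to open an SSH session to an OA module and run a status command. The address, credentials, and the exact CLI command are illustrative assumptions, not a documented recipe:

    # Sketch: query an Onboard Administrator over SSH (assumes the paramiko package).
    # The IP address, credentials, and CLI command below are illustrative assumptions.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only; verify host keys in production
    client.connect("192.0.2.10", username="Administrator", password="example-password")

    stdin, stdout, stderr = client.exec_command("SHOW ENCLOSURE STATUS")
    print(stdout.read().decode())

    client.close()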
19
Onboard Administrator
The first time that you log on, the Welcome screen of the First Time
Setup Wizard is displayed, as shown in the figure. The wizard guides you
through the initial configuration of key enclosure and enclosure
management settings.
21
Onboard Administrator
The Enclosure Selection screen shows the enclosures detected by the wizard.
The Configuration Management screen allows you to apply a saved
configuration to the enclosure.
The Rack and Enclosure Settings screen allows you to enter identification
information for the enclosure and configure the time.
The Administrator Account Setup screen allows you to change the password
for the Administrator account and associate it with a full name and contact
information.
The Local User Accounts screen allows you to create up to 30 user accounts
that can be granted or denied access to manage specific devices.
The Enclosure Bay IP Addressing (EBIPA) screen allows you to assign IP
configuration settings to device and interconnect bays. To be accessed on
the network, a device or interconnect needs properly configured IP settings.
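EBIPA typically hands out consecutive addresses to the bays starting from a base address. A minimal sketch of that numbering scheme (the starting address and bay count below are made up for illustration):

    # Sketch: derive consecutive EBIPA-style addresses for 16 device bays.
    # The starting address and bay count are illustrative assumptions.
    import ipaddress

    ebipa_start = ipaddress.IPv4Address("192.0.2.100")
    device_bays = 16

    bay_addresses = {bay: ebipa_start + (bay - 1) for bay in range(1, device_bays + 1)}
    for bay, addr in bay_addresses.items():
        print(f"Device bay {bay:2d} -> {addr}")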
22
Onboard Administrator
The Directory Groups screen allows you to create LDAP groups that
you can use to assign access.
The Directory Settings screen allows you to identify the IP address
and port used to access the LDAP server.
The Onboard Administrator Network Settings screen allows you
to determine whether the OA is accessed by a static IP address or
an IP address assigned through DHCP.
The SNMP Settings screen allows you to forward enclosure alerts to
specified SNMP destinations. You need to enter the host and the
community string for each destination.
The Power Management screen allows you to configure power
management settings.
A server blade is basically a server on a card, a single
motherboard that contains a full computer system, including
one or more processors, memory, network connections, and
associated electronics.
Server blades are available in two sizes with different memory densities:
• Full-height (8U)
• Half-height (4U)
(U = unit; 44.45 mm or 1.75 in)
Full‐height blades have more processing power, storage
capacity, and memory than half‐height server blades.
Full‐height server blades are recommended for enclosures
that run mission‐critical services.
23
Server Blades
24
HP ProLiant BL4xxc Gen8 Series
HP ProLiant BL420c Gen8
• Processor family: Intel Xeon E5-2470
• Number of processors: 1 or 2
• Processor cores available: 4, 6, or 8
• Memory slots: 12 DIMM slots
• Memory type: DDR3 LRDIMM, RDIMM, LVDIMM, or UDIMM
• Maximum memory: 384 GB
• Network controller: FlexFabric 554FLB
• Internal storage: 2 x 1.2 TB (SFF SAS/SATA/SSD)
• Infrastructure management: iLO, Insight Control

HP ProLiant BL465c Gen8
• Processor family: AMD Opteron 6380
• Number of processors: 1 or 2
• Processor cores available: 4, 8, 12, or 16
• Memory slots: 16 DIMM slots
• Memory type: DDR3 RDIMM, LRDIMM, or UDIMM
• Maximum memory: 512 GB
• Network controller: FlexFabric 10
• Internal storage: 2 x 1.2 TB (SFF SAS/SATA/SSD)
• Infrastructure management: iLO, Insight Control

HP ProLiant BL460c Gen8
• Processor family: Intel Xeon E5-2600
• Number of processors: 1 or 2
• Processor cores available: 2, 4, 6, or 8
• Memory slots: 16 DIMM slots
• Memory type: DDR3 LRDIMM, RDIMM, LVDIMM, or UDIMM
• Maximum memory: 512 GB
• Network controller: FlexFabric 10
• Internal storage: 2 x 1.2 TB (SFF SAS/SATA/SSD)
• Infrastructure management: iLO (Standard)
25
HP ProLiant BL4xxc Gen8 Series
HP ProLiant BL420c Gen8 HP ProLiant BL465c Gen8 HP ProLiant BL460c Gen8
26
HP ProLiant BL4xxc Gen9 Series
HP ProLiant BL460c Gen9
• Processor family: Intel Xeon E5-2600 v3/v4 series
• Number of processors: 2
• Processor cores available: 4, 6, 8, 10, 12, 14, 16, 20, 22
• Memory slots: 16 DIMM slots
• Memory type: DDR3 LRDIMM, RDIMM, LVDIMM, UDIMM
• Maximum memory: 512 GB
• Network controller: One 20Gb 2-port FlexFabric FLB
• Internal storage: 2 x 2 TB (SFF SAS/SATA/SSD)
• Infrastructure management: HP iLO (HP iLO Management firmware)
27
HP ProLiant BL4xxc Gen10 Series
HP ProLiant BL460c Gen10
• Processor family: Intel Xeon Scalable 8100, 8200 series
• Number of processors: 2
• Processor cores available: 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26
• Memory slots: 16 DIMM slots
• Memory type: DDR3 LRDIMM, RDIMM, LVDIMM, UDIMM
• Maximum memory: 2 TB
• Network controller: One 20Gb 2-port FlexFabric FLB
• Internal storage: 2 x 3.84 TB (SFF SAS/SATA/SSD)
• Infrastructure management: HP iLO
Virtualization is a proven technology that is rapidly transforming
the IT landscape and fundamentally changing the way that people
compute. Virtualization encompasses four areas:
• Storage virtualization
• Server virtualization
• Client virtualization
• Network virtualization
29
Virtualization
30
Storage virtualization
Almost everything in business and data center operations
comes down to information. This includes how information is
served to applications and users, how information is secured
to reduce risk, and how you can extract more value from the
information.
Companies are increasingly concerned about optimizing data
storage to ensure that data is accessible when it is needed and
protected against device failure, without wasting valuable
storage capacity.
Storage virtualization allows you to create pools of storage,
which you can then allocate as required.
There are two types of storage virtualization: Storage Area
Network (SAN) and Network Attached Storage (NAS).
One of the benefits of a blade solution is that storage
can be integrated within the device and shared by the
blades in the enclosure, or dedicated to a specific
blade. HP BladeSystem c‐Class enclosures support
the following types of storage blades:
• Direct-attached storage blade
• NAS (Network Attached Storage) storage blades
• EVA (Enterprise Virtual Array) for BladeSystem
SAN (Storage Area Network) storage
• Tape blades
31
Storage Blades
Nearly every server requires a connection to one or
more networks. BladeSystem supports network
connectivity through:
• Pass‐thru
• Blade switches
• HP Virtual Connect
32
Storage Interconnects
• HP Virtual Connect: a BladeSystem technology that simplifies
the addition of server blades by virtualizing server connections
to LANs and SANs.
33
Storage Area Network (SAN)
A Storage Area Network (SAN) provides block‐level
sharing of storage between multiple computer systems.
• Block-level storage: a volume that provides raw storage capacity.
Various operating systems can implement their file systems on top
of block-level storage. Data is written to and read from the storage
device by referencing a block address.
In a SAN, a large number of disk drives are pooled
together in storage arrays and are simultaneously
accessed by multiple computer systems over a
dedicated network.
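Because block-level storage is addressed by block number rather than by file name, an operating system (or a test script) reads it by seeking to an offset and pulling raw bytes. A minimal sketch, assuming a Linux host where the SAN LUN appears as the hypothetical device /dev/sdb:

    # Sketch: read one 512-byte block from a block device by block address.
    # /dev/sdb and the block size are illustrative assumptions; requires root privileges.
    import os

    BLOCK_SIZE = 512
    block_address = 2048  # logical block to read

    fd = os.open("/dev/sdb", os.O_RDONLY)
    try:
        data = os.pread(fd, BLOCK_SIZE, block_address * BLOCK_SIZE)  # (fd, length, offset)
        print(f"Read {len(data)} bytes from block {block_address}")
    finally:
        os.close(fd)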
34
Storage Area Network (SAN)
The colors show that certain
servers have access to certain
disks, and in some cases, only
certain portions of certain
disks. It is possible for entire
disks to be dedicated to
servers, or only parts of disks
to be dedicated to servers. The
above illustration is
intentionally oversimplified.
We will discuss this topic in
more detail shortly.
Although the storage is accessed over a dedicated network and shared by
multiple computers, the operating system perceives the storage as if it is
locally attached. This allows the operating system to access the storage
the same way that it accesses Direct Attached Storage (DAS).
35
Storage Area Network (SAN)
Three major protocols are available for SANs:
• Fibre Channel (FC)
• Fibre Channel over Ethernet (FCoE)
• iSCSI
• Fibre Channel (FC): a protocol used to transport SCSI commands over FC networks.
• Internet Small Computer System Interface (iSCSI): pronounced "eye-scuz-ee." The iSCSI protocol carries SCSI traffic over traditional TCP/IP networks, making it very flexible and relatively cheap to deploy.
• Fibre Channel over Ethernet (FCoE): a protocol used to send SCSI commands over the same 10 Gbps Ethernet network used for TCP/IP networking, but without the TCP/IP overhead.
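On a Linux initiator, iSCSI targets are usually discovered and logged in to with the open-iscsi iscsiadm utility. A hedged sketch that wraps those commands from Python; the portal address and target name are placeholders:

    # Sketch: discover and log in to an iSCSI target using the open-iscsi CLI.
    # Portal address and target IQN are illustrative assumptions.
    import subprocess

    portal = "192.0.2.50"
    target = "iqn.2024-01.com.example:storage.lun1"

    # List targets exported by the portal.
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal], check=True)

    # Log in to one of the discovered targets; the LUN then appears as a local block device.
    subprocess.run(["iscsiadm", "-m", "node", "-T", target, "-p", portal, "--login"], check=True)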
36
Network Attached Storage (NAS)
Network-attached storage (NAS) is dedicated file storage that
enables multiple users and heterogeneous client devices to
retrieve data from centralized disk capacity. Users on a local
area network (LAN) access the shared storage via a standard
Ethernet connection. NAS devices typically do not have a
keyboard or display and are configured and managed with a
browser-based utility. Each NAS resides on the LAN as an
independent network node, defined by its own unique Internet
Protocol (IP) address.
What most characterizes NAS is ease of access, high capacity and
fairly low cost. NAS devices provide infrastructure to consolidate
storage in one place and to support tasks, such as archiving and
backup, and a cloud tier.
38
Network Attached Storage (NAS)
NAS is conceptually very similar to SAN, in that a
large number of disk drives are pooled together in
storage arrays and are accessed by multiple
computer systems. NAS arrays also provide
advanced features similar to those provided by
SAN storage arrays, including:
• Replication
• Cloning/snapshots
• Multiple drive types
• Advanced data resiliency (RAID)
39
Network Attached Storage (NAS)
One of the major differences between NAS and SAN
is that NAS does not usually have a dedicated storage
network. Instead, computer systems usually access
NAS storage over the shared corporate network, the
LAN or WAN, using TCP/IP.
SAN and NAS systems also use different protocols.
NAS provides file-level access and predominantly
uses file sharing protocols such as Network File
System (NFS) and Common Internet File System
(CIFS).
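For example, a Linux client typically reaches a NAS export over NFS with a standard mount call. The sketch below wraps that call in Python; the server name, export path, and mount point are placeholders:

    # Sketch: mount an NFS export from a NAS device (requires root; names are illustrative).
    import subprocess

    nas_export = "nas.example.com:/exports/projects"
    mount_point = "/mnt/projects"

    subprocess.run(["mkdir", "-p", mount_point], check=True)
    subprocess.run(["mount", "-t", "nfs", nas_export, mount_point], check=True)
    print(f"{nas_export} mounted at {mount_point}")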
41
BladeSystem Interconnects
BladeSystem interconnections play a critical role in implementing
convergence technologies between the Ethernet network and storage
resources. Three options are supported, with each designed to meet
the needs of specific solution scenarios. These include:
• Pass-thru: an unmanaged direct connection solution.
• Switch: a managed solution that is well matched for the process of transitioning to server virtualization.
• Virtual Connect: a solution designed to allow administrators to upgrade, replace, or move server blades within their enclosures without changes being visible to the external LAN and SAN environments.
42
Pass-thru
A pass‐thru module is designed for those customers who want an
unmanaged direct connection between each server blade within the
enclosure and an external network device such as a switch or router.
A pass‐thru module is considered a good solution for a data center that
needs blade enclosures added to an existing network infrastructure. Pass-
thru modules are designed for installation directly into the blade
enclosure.
A pass‐thru module is designed to provide a low‐latency, one‐to‐one
connection between each server and the network.
43
Pass-thru
When you install pass‐thru modules as an interconnect
option, HP recommends installing Ethernet pass‐thru
modules in pairs to ensure redundant uplink paths to the
network. Since each server has a minimum of two NICs on
each blade, the addition of a second Ethernet pass‐thru
module assures a redundant path to the network switch,
router, or bridge.
With many Ethernet pass‐thru modules, ports are designed
to auto‐sense and self‐configure bandwidth operation on a
port‐by‐port basis, providing support for a wide variety of
network adapters. These ports are also designed to support
a non‐blocking architecture to help improve performance.
44
Pass-thru example
The HP 1Gb Ethernet Pass‐Thru Module for BladeSystem
c‐Class is a 16‐port Ethernet interconnect that provides a 1:1
nonswitched, nonblocking path between the server and the
network. This connectivity is especially useful when nine or more
ports are used in an enclosure.
The 1Gb Ethernet Pass‐Thru Module delivers 16 internal 1 Gb
downlinks and 16 external 1 Gb RJ‐45 copper uplinks. It is
designed to fit into a single I/O bay of the c‐Class enclosure. The
1Gb Ethernet Pass‐Thru modules should be installed in pairs to
provide redundant uplink paths.
45
Blade switch
Ethernet blade switches are designed for installation in
BladeSystem enclosures as a managed solution replacement
for existing switches. Ethernet blade switches provide
multiple Ethernet downlinks and uplinks. They are designed
to support high‐bandwidth applications, such as iSCSI
storage, video on demand, and high performance computer
clustering. This makes them a preferred solution for data
centers that are transitioning to server virtualization or
looking to consolidate bandwidth.
Ethernet blade switches can act as the interface between
blade servers and an external network device. You can create
a virtual stack of up to 16 switches with support for single IP
address management for the stack.
46
Blade switch Management Options
Ethernet blade switches come pre-configured for immediate use with
the HP c-Class BladeSystem server blade enclosure. The switches
support the management options that you would expect to see for a
switch solution. These include:
• Menu interface
• Command line interface (CLI)
• Web browser interface
• ProCurve Manager (PCM)
• ProCurve Manager Plus (PCM+)
In addition to management support, blade switches provide enhanced
security over pass-thru modules. Typical security features include:
• Port security based on 802.1x limits access for unwanted users
• MAC address lockout to prevent specific configured MAC addresses from connecting to the network
• Secured access through SSH and HTTPS (SSL)
• Secure File Transfer Protocol (SFTP) to allow secure file transfer to and from the switch
• Support for RADIUS and TACACS+ authentication
• Traffic forwarding between VLANs (802.1Q) through IP forwarding
47
Blade Switch Typical Security Features
48
HP 6120G/XG Blade Switch
We will now look at the HP 6120G/XG Blade Switch as an example.
The HP 6120G/XG Blade Switch provides 16 internal 1 Gb
downlinks, four 1 Gb external copper uplinks, and two 1 Gb SFP
external uplinks, along with three 10 Gb external uplinks and a
single 10 Gb internal crossconnect.
The 6120G/XG blade switch is ideal for data centers in transition,
where a mix of 1 Gb and 10 Gb network connections are required.
HP 6120G/XG Ethernet Blade Switch Front Panel
Item Description
1 Port C1 (10GbE-CX4)
2 Port X1 XFP (10GbE) slot*
3 Port X2 XFP (10GbE) slot*
4 Port S1 SFP (1GbE) slot**
5 Port S2 SFP (1GbE) slot**
6 Console port (USB 2.0 mini-AB connector)
7 Clear button
8 Ports 1–4 (10/100/1000BASE-T)
9 Reset button (recessed)
* Supports 10GBASE-SR XFP and 10GBASE-LR XFP pluggable optical transceiver modules
** Supports 1000BASE-T SFP, 1000BASE-SX SFP, and 1000BASE-LX SFP optical transceiver modules
49
50
Virtual Connect
HP Virtual
Connect simplifies
management in
virtualized
environments. It is
designed to
simplify
connections to
LAN and SAN
environments, and
in the process,
reduce cable
requirements and
ongoing
management
overhead.
51
Virtual Connect
Virtual Connect effectively virtualizes server connections.
Virtualizing server connections allows server
administrators to upgrade, replace, or move server blades
within their enclosures without changes being visible to the
external LAN and SAN environments.
Virtual Connect implements “wire‐once” technology for
simplified management between networks and servers.
Wire-once technology can help you add, move, or change
servers in minutes, thus saving time and improving agility.
This greater efficiency is accomplished through tools that
work together to manage a virtualized data center, letting
you consolidate thousands of VMs onto a single storage
system.
52
Virtual Connect
When the LAN and SAN connect to the pool of servers, the
server administrator uses a Virtual Connect Manager (VCM)
to define a server connection profile for each server. The
profile applies to the server bay, and if a blade is swapped out,
it applies automatically to the replacement. The connection
profile to the LAN and SAN remains constant, even as devices
in the BladeSystem enclosure are changed out.
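Conceptually, a Virtual Connect server profile is a small record of identity and connectivity that is bound to a bay rather than to a physical blade. The hypothetical data structure below is only a mental model of what VCM tracks, not an HP API or data format:

    # Sketch: a mental model of a Virtual Connect server profile bound to an enclosure bay.
    # Field names and values are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ServerProfile:
        name: str
        bay: int                       # the profile follows the bay, not the blade
        nic_macs: List[str] = field(default_factory=list)
        hba_wwns: List[str] = field(default_factory=list)
        networks: List[str] = field(default_factory=list)
        san_fabrics: List[str] = field(default_factory=list)

    profile = ServerProfile(
        name="esx-host-01",
        bay=3,
        nic_macs=["00-17-A4-77-00-01", "00-17-A4-77-00-02"],
        hba_wwns=["50:06:0B:00:00:C2:62:00"],
        networks=["Prod_VLAN10", "vMotion_VLAN20"],
        san_fabrics=["SAN_A"],
    )
    # If the blade in bay 3 is swapped out, the same profile (and identities) applies to the replacement.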
Virtual Connect also makes storage solution deployment and
management simpler and less expensive. Storage solutions
usually include components such as server Host Bus Adapters
(HBAs), SAN switches/directors, optical transceivers/cables,
and storage systems. Management can be a major concern
because of the sheer number of components.
53
Virtual Connect
HP refers to Virtual Connect as LAN‐safe and SAN‐safe.
Virtual Connect ports at the enclosure edge look like server
connections, so the process is transparent to LAN and SAN.
Transparency helps to ensure that Virtual Connect works
seamlessly with your external network.
Virtual Connect modules supporting Fibre Channel must
attach to NPIV-capable SAN switches. Most enterprise-class
SAN switches today support NPIV. Depending on the
module, Virtual Connect Fibre Channel modules can
aggregate up to 255 physical or virtual server HBA ports
through each of the module's uplink ports.
54
Virtual Connect Connections
The Virtual Connect modules plug directly into the interconnect bays of
the enclosure. The modules can be placed side by side for redundancy.
Each Virtual Connect Ethernet module has several numbered Ethernet
connectors. All of these connectors can be used to connect to data center
switches, or they can be used to stack Virtual Connect modules and
enclosures as part of a single Virtual Connect domain.
55
Virtual Connect Connections
Networks must be defined within the VCM so that specific named
networks can be associated with specific external data center
connections. These named networks can then be used to specify
networking connectivity for individual servers.
A single external network can be connected to a single enclosure uplink,
or it can make use of multiple uplinks to provide improved throughput
or higher availability. In addition, multiple external networks can be
connected over a single uplink (or set of uplinks) through the use of
VLAN tagging.
Mapping each network to a specific external port is the simplest
approach to connecting the defined networks to the data center.
An external port is defined by the following:
• Enclosure name
• Interconnect bay containing the Virtual Connect Ethernet module
• Selected port on that module (1-8, X1, X2, . . .)
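The mapping of named networks to uplinks, including several networks sharing one uplink via VLAN tags, can be pictured as a simple table. The sketch below only illustrates that relationship; the network names, bays, ports, and VLAN IDs are invented:

    # Sketch: named VC networks mapped to external uplink ports, with VLAN tags on a shared uplink.
    # Enclosure, bay, port, and VLAN identifiers are illustrative assumptions.
    uplink_map = {
        "Prod_VLAN10":   {"enclosure": "enc01", "bay": 1, "port": "X1", "vlan": 10},
        "Backup_VLAN20": {"enclosure": "enc01", "bay": 1, "port": "X1", "vlan": 20},   # shares uplink X1
        "Mgmt":          {"enclosure": "enc01", "bay": 2, "port": "X2", "vlan": None}, # dedicated uplink
    }

    for network, port in uplink_map.items():
        tag = f"VLAN {port['vlan']}" if port["vlan"] else "untagged"
        print(f"{network}: {port['enclosure']} bay {port['bay']} port {port['port']} ({tag})")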
57
Virtual Connect Manager
This VCM home page provides access for the management
of enclosures, servers, and networking. It also serves as the
launch point for the initial setup of VCM.
The VCM navigation system consists of a tree view on the
left side of the page that lists all of the system devices and
available actions. The tree view remains visible at all times.
The right side of the page displays details for the selected
device or activity, which includes a pull-down menu at the
top. To view detailed product information, select About HP
VCM from the Help pull-down menu.
58
Virtual Connect Manager
The VCM has the following functions:
• Manage enclosure connectivity
• Define available LANs and SANs
• Set up enclosure connections to the LAN or SAN
• Define and manage server I/O profiles
The VCM contains utilities and a Profile Wizard to develop
templates to create and assign profiles to multiple servers at
once. The I/O profiles include the physical NIC MAC
addresses, Fibre Channel HBA WWNs, and the SAN boot
configurations. The VCM profile summary page includes a
view of server status, port, and network assignments. It also
lets you manage profile details.
The Virtual Connect configuration process uses a consistent
methodology. This includes the following tasks:
1. Create a VC domain.
2. Define Ethernet networks.
3. Define FC SAN connections.
4. Create server profiles.
5. Manage data center changes.
59
Virtual Connect Configuration
60
Create a VC Domain
One of the first requirements in setting up a VC environment is to
establish a VC Domain through the web‐based VCM interface.
61
Create a VC Domain
A Virtual Connect domain consists of an enclosure and a
set of associated modules and server blades that are
managed together by a single instance of the VCM. The
Virtual Connect domain contains specified networks,
server profiles, and user accounts that simplify the setup
and administration of server connections.
Establishing a Virtual Connect domain enables
administrators to upgrade, replace, or move servers
within their enclosures without changes being visible to
the external LAN/SAN environments.
63
Define Ethernet Networks
The Network Setup Wizard establishes external Ethernet network
connectivity for the HP BladeSystem c-Class enclosure using HP
Virtual Connect. A user account with network privileges is required
to perform these operations. The Network Setup Wizard does the
following:
• Identifies the MAC addresses to be used on the servers deployed within this Virtual Connect domain.
• Sets up connections from the HP c-Class enclosure to the external Ethernet networks.
These connections can be uplinks dedicated to a specific Ethernet
network or shared uplinks that carry multiple Ethernet networks
with the use of VLAN tags.
64
Define FC SANs
The Virtual Connect Fibre Channel Setup Wizard configures external
Fibre Channel connectivity for the HP BladeSystem c-Class enclosure
using HP Virtual Connect.
65
Define FC SANs
A user account with storage privileges is required to perform these
operations. The Virtual Connect Fibre Channel Setup Wizard does
the following:
• Identifies WWNs to be used on the server blades deployed within this Virtual Connect domain.
• Defines the available SAN fabrics.
The Virtual Connect Manager Server Profile Wizard allows
you to quickly set up and configure network/SAN connections
for the server blades within your enclosure.
66
Create Server Profiles
With this wizard, you can define a server profile template that identifies
the server connectivity to use on server blades within the enclosure.
67
Create Server Profiles
The template can then be used to automatically
create and apply server profiles to up to 16 server
blades. The individual server profiles can be
edited independently. Before beginning the server
profile wizard, you must do the following:
• Complete the Network Setup Wizard.
• Complete the Fibre Channel Setup Wizard (if
applicable).
• Ensure that any blades to be configured using
this wizard are powered off.
68
Create Server Profiles
The server profile wizard defines a server profile template, assigns server
profiles, and names server profiles. To set up a server profile, use the
following steps:
• 1. Configure the server profile using the VCM user interface.
• 2. Insert the server blade. VCM detects that a server blade was inserted and
reads the FRU data for each interface. VCM writes the server profile
information to the server.
• 3. Power on the server. CPU BIOS and NIC/HBA option ROM software write
the profile information to the interface. The server boots using the server profile
provided. When a blade is inserted into a bay that has a VCM profile assigned,
the VCM detects the insertion through communications with the OA and must
generate profile instructions for that server before the server is allowed to
power on. If VCM is NOT communicating with the OA at the time the server is
inserted, the OA continues to deny the server power request until the VCM has
updated the profile.
• 4. If a server is not powering on, verify that your VCM has established
communications with that OA.
69
Manage Data Center Changes
Once the VC domain is configured, it is easier to manage
data center changes.
With a VC domain configured, you can do the following:
• Replace a failed server without logging in to VCM, because the server profile is assigned to the bay.
• Copy a server profile from one bay to another.
• Change a server's network or SAN connections while the system is running.
• Move a profile for a failed server to a spare server.
• Assign a profile to an empty server bay for future growth.
70
After Configuring VC domain
71
BladeSystem Storage Modules
As mentioned earlier, storage is a key
component in convergent technologies. You
were introduced to virtualized storage options,
SAN and NAS, earlier in the chapter. Blades
also support direct attached storage (DAS)
modules installed in the BladeSystem enclosure,
supporting applications for which this is a
requirement. BladeSystem tape blades give you
an option for including locally installed tape
modules as another storage option.
72
HP Tape Blades
Tape blades from HP provide direct‐attach data protection for the
adjacent server blades and network backup protection for all data
residing within the enclosure. Some tape blades are shown in the figure.
75
Hypervisor introduction
Server virtualization is a primary component in building a
converged environment.
In this environment, virtual machines (VMs) run on a host
machine, giving you a way to support multiple servers on a
single hardware platform. The VMs have access to hardware
resources on the platform, such as network adapters for LAN
and SAN access.
The key to virtualization is the hypervisor, also known as a
Virtual Machine Manager (VMM).
This is the component that creates and runs the VM. You were
introduced to two hypervisors in the previous chapter:
Hyper‐V and VMware vSphere ESXi. We will be taking
another look at these, including a closer look at how they can
fit into your converged technology plans.
76
Server virtualization
Virtualization lets you run multiple virtual machines (VMs) on a single
physical machine, sharing the resources of that single computer across
multiple environments. Different VMs can run different operating
systems and multiple applications on the same physical computer.
Hypervisors like VMware and Microsoft Hyper‐V make this possible.
77
Server virtualization
Hypervisor
• A hypervisor or Virtual Machine Manager (VMM) is computer software,
firmware or hardware that creates and runs virtual machines. A computer
on which a hypervisor runs one or more virtual machines is called a host
machine, and each virtual machine is called a guest machine. The
hypervisor presents the guest operating systems with a virtual operating
platform and manages the execution of the guest operating systems.
78
Server virtualization
Virtual machine (VM): in computing, a virtual machine is an
emulation of a computer system. Virtual machines are based on
computer architectures and provide the functionality of a physical
computer. Their implementations may involve specialized hardware,
software, or a combination. There are different kinds of virtual
machines, each with different functions:
• System virtual machines (also termed full virtualization VMs) provide a substitute for a real machine. They provide the functionality needed to execute entire operating systems.
• Process virtual machines are designed to execute computer programs in a platform-independent environment.
79
Server virtualization
HP ProLiant servers support multiple virtualization solutions
including VMware vSphere and Microsoft Hyper‐V.
Microsoft Hyper-V is designed to offer "enterprise-class
virtualization" for organizations with a data center or hybrid
cloud. This option is a common choice for organizations that want
to virtualize workloads, build a private cloud, scale services
through a public cloud, or combine all three.
VMware vSphere is a popular hypervisor choice for organizations
hoping to achieve some degree of virtualization. Now on version
6.0, vSphere is highly configurable, which can make it an
attractive choice for companies that are either going fully virtual
or opting for a hybrid approach.
80
VMware vSphere
VMware vSphere,
which is based on
VMware ESXi, provides
a virtualization layer
that abstracts the
processor, memory,
storage, and networking
resources of the physical
host into multiple VMs.
A vSphere Hypervisor
creates the foundation
for a dynamic and
automated data center.
Hyper-V is the hypervisor available on Windows Server 2008 and
Windows Server 2008 R2. It supports various server and client
operating systems as guest operating systems, including:
• Windows Server 2003 (and later)
• Windows XP with Service Pack 3 (and later)
• Red Hat Enterprise Linux 5.2 (and later)
• SUSE Linux Enterprise 10 (and later)
• CentOS 5.2-5.6 and 6.0-6.1
81
Hyper-V
A current list of supported guest operating systems is available at the following address:
• http://technet.microsoft.com/enus/library/cc794868(WS.10).aspx
82
Virtualization types
Hypervisors typically belong to one of two currently available types:
• Type 1, also known as a native or bare-metal hypervisor.
• Type 2, also known as a hosted hypervisor.
83
Type 1 Hypervisor
A type 1 hypervisor runs directly on the host system
hardware, as illustrated in Figure.
84
Type 1 Hypervisor
The hypervisor manages the guest operating system, which runs
at a level above the hypervisor. In this way, the hypervisor is able
to give the guest operating system access to hardware resources.
VMware vSphere ESXi and Microsoft
Hyper‐V are both type 1 hypervisors.
85
Type 2 Hypervisor
Type 2 hypervisors run as an application within the host
operating system. The host operating system acts as an
additional layer between the system hardware and the
hypervisor. The guest operating system operates on an
additional level above the hypervisor. Operating systems
that support type 2 hypervisors include both Microsoft
Windows and Linux. VMware Workstation is one of the
most commonly seen examples of a type 2 hypervisor.
Microsoft Hyper‐V is sometimes mistaken for a type 2
hypervisor, but it is really a type 1. Virtual environments
are run directly on the hypervisor rather than through
the host operating system.
You might be wondering how a hypervisor handles requests for
hardware resources from the guest operating system. Three
virtualization technologies used by various hypervisors
accomplish this task. These include:
• Full virtualization
• Paravirtualization
• Hardware-assisted virtualization
We will now take a look at the characteristics and
benefits of each.
86
Virtualization Technologies
87
Full virtualization
With full virtualization, the hypervisor
performs binary translation for requests
from the guest operating system and passes
user requests directly to the hardware. The
advantage of full virtualization is that the
guest operating system does not know that
it is virtualized. Therefore, any operating
system can run in a virtual machine.
88
Paravirtualization
With paravirtualization, the guest
operating system is aware it is being
virtualized and makes calls to a
special interface of the virtualization
layer. The primary disadvantage of
paravirtualization is that only
limited guest operating systems
support it.
89
Hardware‐assisted virtualization
Hardware‐assisted virtualization can
be used on computers that have
virtualization‐aware CPUs like Intel
Virtualization Technology (VT‐x) and
AMD‐V. When a CPU supports
virtualization, the hypervisor can send
guest operating system requests to the
processor without translation.
90
Hypervisors
Our focus in this course is the two most popular type
1 hypervisors: Hyper‐V and VMware vSphere ESXi.
Hyper-V comes embedded in Windows Server 2008
(and later versions), but it is not enabled by default.
To use Hyper‐V, you must enable the Hyper‐V role
through Windows Server Manager.
Several different versions of VMware are available
for free download from the VMware website.
However, many VMware products, including
vSphere, require licensing.
91
Hyper-V
Windows Server 2008 R2 Hyper-V offers a robust, scalable
hypervisor-based virtualization platform that allows
enterprises to provision and manage virtual server
workloads. Hyper-V is supported on HP BladeSystem
servers with Windows Server 2008 Standard, Enterprise,
and Datacenter editions.
One of the most common uses of Hyper‐V is to create a
Virtual Desktop Infrastructure (VDI) environment. In a
VDI environment, VMs run Windows client operating
systems on the host computer. Users can access these from a
client device, such as a PC or thin client. This gives you a
way to centralize user desktops in a central location.
1. Support for native 64-bit hypervisor virtualization
2. Support for simultaneous single processor and multiple
processor VMs
3. Concurrent support for 32-bit and 64-bit VMs
4. VLAN support
5. Management support through the Microsoft Management
Console (MMC)
6. Scripting and management through documented Windows
Management Instrumentation (WMI) interfaces (see the sketch after this list)
7. Support for VM snapshots
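As a hedged example of the WMI scripting mentioned in feature 6, the Python wmi package can list the VMs known to a Hyper-V host. The virtualization namespace name varies by Windows Server release (root\virtualization on Server 2008/2008 R2, root\virtualization\v2 on later versions), so treat this as an assumption to adapt:

    # Sketch: enumerate Hyper-V virtual machines through WMI (assumes the "wmi" package on the host).
    # The namespace shown is for Windows Server 2008/2008 R2; later releases use root\virtualization\v2.
    import wmi

    conn = wmi.WMI(namespace=r"root\virtualization")
    for system in conn.Msvm_ComputerSystem():
        # The host itself also appears as an Msvm_ComputerSystem instance.
        print(system.ElementName, system.EnabledState)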
92
Hyper-V Features
Hyper-V maximums:
• Logical processors: 64. To support this maximum, hardware-assisted virtualization and hardware-enforced Data Execution Prevention (DEP) must be enabled in the host system BIOS.
• Virtual processors per logical processor: 8 or 12. Up to 12 virtual processors are supported for each physical processor as long as all guest operating systems are running Windows 7; otherwise, the limit is 8.
• Virtual machines per server: 384 concurrently running VMs.
• Virtual processors per server: 512.
• Maximum total storage: no limit. Hyper-V does not place a limit on maximum storage; the maximum is set by the host operating system.
• Memory: 1 TB. Practical memory is limited by the amount of physical memory installed in the host machine.
• Physical network adapters: no limit. Hyper-V does not place any limit on the number of physical network adapters supported.
• Virtual networks (switches): no limit imposed by Hyper-V, but you may be limited by available computing resources.
• Virtual network switch ports per server: no limit imposed by Hyper-V, but you may be limited by available computing resources.
93
Hyper-V virtual machines
94
Using Hyper-V
It is relatively easy to get Microsoft Hyper-V up and running.
To do so, follow these four basic steps:
1. Install Hyper-V.
2. Create and set up a VM.
3. Install the guest operating system and integration services.
4. Configure virtual networking.
We will not be discussing this process in great detail,
but it will be helpful to spend a little time mentioning
the basics of each step.
95
Install Hyper‐V
Saying that you must install Hyper‐V is a little
misleading. It installs with Windows Server
2008 (and later versions), but the Hyper‐V
role must be enabled in Windows Server
Manager. During the process you will be
prompted to choose one or more physical
network adapters to support VM connections
to the external network. You must restart the
host computer after setting up Hyper‐V to
make it available.
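The role can also be enabled from the command line instead of Server Manager. A hedged sketch that calls PowerShell from Python; the exact feature name can differ between Windows Server releases, so verify it on your system:

    # Sketch: enable the Hyper-V role by invoking PowerShell (run elevated; a restart is required).
    # The feature name "Microsoft-Hyper-V" is an assumption to verify for your Windows Server release.
    import subprocess

    subprocess.run([
        "powershell.exe",
        "-Command",
        "Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All",
    ], check=True)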
96
Creating VMs
VMs are created and managed through the Hyper‐V
Manager, which is installed with the Hyper‐V role.
97
Creating VMs
Hyper-V Manager includes a New Virtual Machine Wizard that lets
you define your VM, including:
• VM name and location (for storing the VM file)
• Memory
• Network connectivity
• Virtual hard disk name, location, and size
• Guest OS installation option
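The same settings that the New Virtual Machine Wizard collects can also be supplied to the Hyper-V PowerShell module where it is available. A hedged sketch invoked from Python; the VM name, memory size, disk path, and switch name are placeholders:

    # Sketch: create a VM with the Hyper-V PowerShell cmdlet New-VM (parameter values are illustrative).
    import subprocess

    new_vm_cmd = (
        "New-VM -Name 'demo-vm' "
        "-MemoryStartupBytes 2GB "
        "-NewVHDPath 'C:\\VMs\\demo-vm.vhdx' "
        "-NewVHDSizeBytes 40GB "
        "-SwitchName 'External'"
    )
    subprocess.run(["powershell.exe", "-Command", new_vm_cmd], check=True)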
98
Installing the guest operating system
The next step is installing the guest
operating system. You can install
from CD-ROM installation media, an
ISO file, a floppy disk, or a
network-based installation server.
Installation from a network-based
installation server requires access to
the external network.
99
Configuring virtual networking
You are prompted to configure virtual network settings for the
VM. You can configure one or more of the following virtual
network options for the VM:
• External network: connects to the physical network through a physical network adapter installed in the host machine.
• Internal network: provides a communication path between the VM and the host computer.
• Private network: configures a network that provides communication among VMs running on the same host, but only among VMs.
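On hosts with the Hyper-V PowerShell module, these three virtual network types correspond roughly to the switch types accepted by New-VMSwitch. A hedged sketch; the switch names and the physical adapter name are assumptions:

    # Sketch: create external, internal, and private virtual switches with New-VMSwitch.
    # Switch names and the "Ethernet" adapter name are illustrative assumptions.
    import subprocess

    commands = [
        "New-VMSwitch -Name 'ExternalNet' -NetAdapterName 'Ethernet'",  # external: bound to a physical NIC
        "New-VMSwitch -Name 'InternalNet' -SwitchType Internal",        # internal: VMs and the host
        "New-VMSwitch -Name 'PrivateNet' -SwitchType Private",          # private: VMs only
    ]
    for cmd in commands:
        subprocess.run(["powershell.exe", "-Command", cmd], check=True)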
100
VM External Network Settings
You can add a network adapter to a VM after installation, but only if the
VM is not running. When you do, you are prompted for the network
adapter to use, MAC address information, and an optional VLAN ID. The
Network Adapter settings for a new VM are shown in the figure.
Sample External Connections
101
If you create an external network, Windows Server 2008 uses a
virtual network adapter to connect to the physical network adapter.
In the Control Panel, Network Connections shows both the physical
network adapter and a virtual network adapter for each external
network configured on the host computer, as shown in the figure.
Sample External Connections
• Virtual Local Area Connection 10 through physical Local Area Connection 5
• Virtual Local Area Connection 11 through physical Local Area Connection 7
• Virtual Local Area Connection 12 through physical Local Area Connection 3
102
Standard network protocols and services are bound to the virtual
adapter. The Microsoft Virtual Network Switch Protocol is bound to the
physical adapter to provide access to the external virtual network.
VMware
103
Our main emphasis during this course is on using VMware vSphere
ESXi to create and manage converged solutions relying on
virtualization. VMware has long been the premier provider of
virtualization technologies. VMware offers a wide range of products
targeted at meeting your virtualization requirements. In fact, the
VMware website offers links describing product suites and technical
information focusing on various solutions, including:
• Server and datacenter
• Desktop virtualization
• Cloud computing
• Application platform
• SMB solutions
• Apple Macintosh solutions
Specific feature support and platform requirements vary by product. It
should be noted, however, that VMware offers products for use with
Windows operating systems, Linux, and the Mac OS.
104
VMware ESXi Platform
The vSphere
ESXi platform
is illustrated
in Figure.
ESXi is a
popular
hypervisor
choice for
creating a
dynamic and
automated
data center.
105
vSphere Client
Our main emphasis during this course is on
using VMware vSphere ESXi to create and
manage converged solutions relying on
virtualization. With that in mind, we will give
an overview here, and more detailed
information is provided in the next lecture.