WHITE PAPER

The Value of Memory-Dense Servers: IBM's System x MAX5 for Its eX5 Server Family

Sponsored by: IBM

Michelle Bailey
March 2010

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA   P.508.872.8200   F.508.935.4015   www.idc.com


IDC OPINION

The technology industry has reached a crossroads. After more than a decade of
physical server sprawl, nearly exponential growth in storage, and a proliferation of
network technologies, IT organizations are now facing tremendous challenges in
planning for a future enterprise architecture that is less expensive, less complex, and
more agile than today's infrastructure. At the core of this reinvention is virtualization
and, increasingly, a converged set of IT infrastructure built on a service-centric
approach to supporting the business. This new technology cycle is squarely aimed at
improving utilization rates, driving efficiency across the datacenter, and simplifying
deployment and ongoing maintenance in order to ultimately shorten time to market
and optimize the business value from IT investments.

Many IT organizations are well on their way to creating a more flexible and
responsive enterprise architecture. Server virtualization has quickly become
mainstream and is the foundational platform for the datacenter. More than 50% of all
server workloads are now deployed on virtual machines, and this is driving a sea
change in the types of technologies that IT organizations are procuring and
configuring and their approach to IT processes and practices.

We have already seen customers move toward more richly configured servers to
maximize the number of virtual machines (VMs) consolidated per physical server. The
correct balance of processor, memory, and I/O is critical in architecting an effective
virtualization solution. Initially, the emphasis on building physical systems for virtual
machines focused on multicore processors. However, as virtualization has matured,
most IT organizations now report that the single greatest limiter in driving higher VM
densities is the amount of memory that their virtual machines can access. Servers
that were previously built to support single applications have become inadequate in
meeting the virtualization goals of customers.

Prior to virtualization, only the most demanding workloads required high memory
footprints — large databases, OLTP applications, and enterprise ERP and CRM
solutions. Today, because each virtual machine requires its own memory to ensure
consistent application performance, systems with large memory capabilities become
essential. As a result, new x86-based servers are coming to market that can
massively expand memory capacities.

With this change in technology comes a new set of metrics for measuring ongoing
success in virtualization. "Cost per application" or "cost per VM" is now used to gauge
the effectiveness of technology investments, and as a consequence, customers are
looking to match their consolidation goals with newer systems infrastructure that
helps maximize VM densities relative to physical hardware.
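
A hedged illustration of this metric: the short Python sketch below computes a cost-per-VM figure from acquisition and operating inputs. All of the numbers are hypothetical assumptions chosen only to show the arithmetic, not IDC data.

# Hedged illustration: cost-per-VM arithmetic with purely hypothetical inputs.
def cost_per_vm(server_cost, annual_opex, years_of_service, vms_per_server):
    """Return the fully loaded cost per VM over the server's service life."""
    total_cost = server_cost + annual_opex * years_of_service
    return total_cost / vms_per_server

# Example: a $12,000 server with $3,000/year of power, space, and maintenance,
# kept for three years and hosting 10 VMs (all values are assumptions).
print(cost_per_vm(server_cost=12_000, annual_opex=3_000,
                  years_of_service=3, vms_per_server=10))  # -> 2100.0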



SITUATION OVERVIEW

A New Approach to Datacenter Economics Is
Required

For many years, IT organizations would install at least one physical server per
application, and often three to five servers per application, when taking into account
test/development, staging, and disaster recovery environments. This inevitably led to
an explosion in the number of physical systems and devices installed as well as
datacenter sites. Prior to virtualization, most IT organizations faced:

    Physical server sprawl. The number of installed physical servers has increased
    sixfold from just over 5 million in 1996 to more than 30 million in 2010.

    Overprovisioning and underutilized assets. Most applications consume only a
    fraction of a standalone server's total capacity, averaging 5–10% CPU utilization
    on a typical x86 server.

    Spiraling operational costs. Most customers have underinvested in systems
    management and automation tools relative to the investments that have been
    made in x86 systems infrastructure. This has meant that many datacenters
    employ manually intensive processes, resulting in greater burdens on staff.

    Server sprawl that exacerbates the power and cooling challenges of aging
    datacenter facilities. The average age of a datacenter in the United States is 12
    years. This means that the typical datacenter was built to support a substantially
    different set of infrastructure that has become increasingly dense over time. Most
    datacenters were designed to support 1–2kW per rack, versus the 8–15kW per rack
    that we routinely observe today.


Virtualization Is the Killer App for the
Datacenter

Virtualization technologies have completely transformed the way in which customers
build, deploy, and manage their systems infrastructure. Virtualization tools allow
multiple logical servers or "virtual machines" to run on a single physical server. By
consolidating applications onto fewer physical servers, customers have been able to
slow the sprawl of physical servers within their datacenters. In fact, today most
datacenters report that virtualization has become the default build for new server
installations (see Figure 1).




Customers have realized three primary benefits in deploying virtualization
technologies:

    Physical server consolidation. Consolidation remains the main driver for
    deploying virtualization today. By consolidating multiple virtual machines on a
    single physical server, customers have less server hardware to purchase and
    fewer installed servers. The most direct benefits are server hardware savings
    and, consequently, fewer hardware maintenance agreements. Other benefits
    include reduced energy demands for the datacenter and lower requirements for
    floor space and rack space. This consolidation helps in reducing staff burdens for
    purchasing, deployment, and hardware maintenance; however, customers have
    yet to see any significant benefit from application and OS management.

    Improved availability and disaster recovery. Mobility tools enable the
    migration of a virtual machine from one piece of physical server hardware to
    another. Customers have found these technologies particularly useful for
    reducing planned downtime and alleviating the pressure on shrinking
    maintenance windows. Mobility tools are also used to combat unplanned
    downtime and can be used alone or in conjunction with existing tools such as
    clustering and replication. Over time, we expect that customers will be able to
    regularly move virtual machines not just across the datacenter floor but also from
    one site to another, creating a new paradigm for disaster recovery.

    Improved flexibility. Virtualization has allowed customers to be more
    responsive to the business. Virtual server deployments can literally reduce the
    time to deploy a server to minutes compared with days or even weeks for
    physical server deployments, meaning that time to market is significantly
    reduced. Virtualization also decouples the server hardware from the application
    so that maintaining legacy applications is greatly simplified.




FIGURE 1

Server Virtualization Adoption
Q.    Which of the following statements most closely describes the build decision for new server
      hardware at your organization?

Response options (chart shows % of respondents for each, on a 0–50% scale):
      Virtualization is the default build for new server hardware unless a case can be made for a
      standalone, unvirtualized server
      Standalone servers are the default build, but we strongly advise or incent our application
      owners to use virtualization where possible
      Standalone servers are the default build, and we will suggest virtualization with application
      owners but will not push it
      Standalone servers are the default build, and we will deploy virtualization only if our
      customers request it

n = 400
Source: IDC's Server Virtualization Multiclient Study, 2009




The Impacts of Mainstream Server
Virtualization Adoption

Given the broad adoption of virtualization, the physical server market has changed
substantially and the number of installed servers worldwide is leveling off. However,
at the same time, the number of virtual machines is exploding. This "virtual server
sprawl" is already having a profound impact on IT operations and procurement
strategies.


Virtual Machine Sprawl a Rising Datacenter Cost
IDC expects that more than 50 million virtual servers and just 30 million physical
systems will be installed by 2013, resulting in more than 80 million logical machines
(see Figure 2).




FIGURE 2

New Economic Model for the Datacenter
Shifts to Automation Tools Are a Requirement




Source: IDC, 2009




Virtual Machine Densities on the Rise
The rapid growth in the number of virtual machines is due not just to the growing
proportion of servers being virtualized but also to the growing number of virtual
machines installed per physical server.

After years of building in overhead on hardware resources to help guarantee service-
level agreements (SLAs), most customers had modest goals for increasing the
utilization of their servers. Many report an ideal of moving from 5% or 10% utilization
for standalone servers to 30% or 40% utilization for virtual servers. As a result, the
average consolidation ratio has been approximately 6 VMs per physical server.
Figure 3 demonstrates the average number of VMs deployed per physical server,
according to a recent survey of 400 systems administrators. While a consolidation
ratio of 6 VMs per server is the average, IDC routinely sees customers standardizing
on consolidation ratios of 8:1 or 10:1 and leading-edge customers deploying 25, 30,
or even 40 VMs per physical server.
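
As a rough, hedged cross-check of that average, the Python sketch below estimates a weighted mean from the Figure 3 distribution. The representative value chosen for each response bucket is an assumption (roughly a midpoint), not survey data, so the result is only indicative.

# Rough estimate of mean VM density from the Figure 3 distribution.
# Keys are assumed representative VMs-per-server values for each bucket;
# values are the survey percentages shown in Figure 3.
distribution = {1: 10.9, 3: 42.2, 7: 24.3, 12: 10.2, 17: 4.5, 22: 4.5, 27: 3.4}

weighted_mean = (sum(vms * pct for vms, pct in distribution.items())
                 / sum(distribution.values()))
print(round(weighted_mean, 1))
# Prints about 7 with these midpoint assumptions, the same order of magnitude
# as the roughly 6:1 average reported in the survey.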




Changing Server Configurations to Optimize for Virtualization
IDC finds that IT organizations with more aggressive VM density goals are deploying
more richly configured systems with significantly higher memory installations (see
Figure 4). To achieve this increase in memory, customers will often buy servers with
higher processor counts for two reasons:

1.   The higher the socket count, the greater the access to physical memory.

2.   Servers with higher numbers of sockets tend to have higher numbers of DIMM
     slots on the motherboard.

Often, we find that customers that purchase systems with high core counts for
improved memory accessibility have underutilized processors.




FIGURE 3

Server Virtualization Densities, 2008

Share of respondents by number of VMs per physical server:
      1 VM per physical server: 10.9%
      2–4 VMs per physical server: 42.2%
      5–9 VMs per physical server: 24.3%
      10–14 VMs per physical server: 10.2%
      15–19 VMs per physical server: 4.5%
      20–24 VMs per physical server: 4.5%
      25+ VMs per physical server: 3.4%

n = 400
Source: IDC's Server Virtualization Multiclient Study, 2009




FIGURE 4

Server Virtualization Densities by Memory Installed per Server

Average memory installed per server (GB), by number of VMs per server:
      <4 VMs: 12.1GB
      4–5 VMs: 21.2GB
      6–9 VMs: 29.5GB
      10–19 VMs: 32.3GB
      20+ VMs: 41.7GB

n = 400
Source: IDC's Server Virtualization Multiclient Study, 2009




New Hardware Solutions Are Required for Substantial Increases in VM
Densities
IDC research shows that customers are expecting to achieve utilization rates of
60–80% on their hardware compared with 30–40% today. This type of utilization is on
par with that seen in mainframe technologies. To meet this goal, IT organizations
must make substantial changes in the way they purchase and configure their server
hardware. They must recognize that:

          Memory capacity is just as important as processor power in virtual server
          configurations. For the past several years, IT organizations have been taking
          advantage of improvements in multicore technology to drive up VM densities.
           Also, new hardware assist functionality built into processors has helped reduce
          virtualization overhead and enabled I/O offloading. However, while processor
          improvements have been extremely beneficial, many customers now report that
          the biggest constraint to increasing VM densities lies in the ability to add memory
          to a system (see Figure 5).

          Virtualized servers have much richer configurations relative to standalone
          servers. IDC continues to see customers buying servers with large numbers of
          cores as well as large numbers of DIMM slots to support additional memory for
          virtualization. Typically, we see virtualized x86 servers with 28GB of RAM and a
          disproportionate number of 4–8 sockets compared with just 4GB RAM and 1–2
          sockets on unvirtualized servers. Servers with higher processor counts provide
          additional memory access by default because they typically have greater
          numbers of DIMM slots and higher overall memory capacities.




    Physical memory can severely limit VM densities. Virtual machines
    must have access to enough physical memory to start the VM and run the guest
    operating system as well as the application. Depending on their choice of
    virtualization technology, administrators must specify either the total amount of
    system memory required or the maximum, minimum, and shared memory needed.
    With higher numbers of VMs per server, memory can quickly become
    overcommitted (a simple sketch of this arithmetic follows this list). Without
    extended memory solutions, IT organizations must either limit the number of VMs
    per server (and therefore increase the number of physical servers installed),
    increase the number of installed sockets per server to raise the amount of
    addressable memory on the system, or purchase expensive high-capacity DRAM
    modules.

    Types of applications also impact the memory requirements of virtual
    servers. The size of an application has a substantial impact on the number
    of VMs installed per server. The number of users, the active concurrency of
    those users, and the memory addressability requirements of the application play
    a large role in determining the VM density of a virtualized server. Database and
    OLTP applications, for example, have both high memory and high I/O requirements
    and are not suitable candidates for virtualization where memory configurations
    are limited and hypervisor overhead is significant.
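
The overcommitment arithmetic referenced in the memory bullet above can be sketched in a few lines of Python. The host capacity, hypervisor overhead, and VM sizes below are hypothetical assumptions; real hypervisors add their own reservation, overhead, and page-sharing behavior on top of this simple sum.

# Hedged sketch of the memory overcommitment check described above.
# All sizes are hypothetical; hypervisor behavior is reduced to a flat overhead.
host_memory_gb = 64                      # installed physical memory (assumption)
hypervisor_overhead_gb = 4               # memory reserved for the hypervisor (assumption)
vm_memory_gb = [4, 4, 8, 8, 8, 16, 16]   # configured memory per VM (assumptions)

committed = sum(vm_memory_gb) + hypervisor_overhead_gb
print(f"committed {committed}GB of {host_memory_gb}GB "
      f"(ratio {committed / host_memory_gb:.2f})")
if committed > host_memory_gb:
    print("Memory is overcommitted: add DIMMs, reduce the VM count, or rely on "
          "hypervisor memory-sharing techniques.")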


Traditional Thinking Hampers VM Densities
IDC's research shows that as the number of cores on a virtual server increases, so
too does the memory configuration. VM densities also rise and then level off at just
under 10 VMs per server on average. Today, this is primarily because servers with
higher core counts are typically used to support higher-end workloads. VM densities
actually start to decline with 32 or more installed sockets due to the increased use of
richer applications on these multiprocessor servers. So rather than driving up VM
densities on these larger boxes, many customers are applying traditional thinking to
systems configuration — that is, that smaller applications run on smaller servers and
large applications run on larger servers.

Figure 6 displays the average amount of installed memory and the corresponding
number of virtual machines based on core count. Servers with four cores in total
(typically dual-socket, dual-core processor systems) average 14GB of installed RAM
and support just six virtual machines. This translates into approximately one core and
2.5GB of memory per VM. In contrast, a virtualized server with 32 or more cores
averages almost 45GB of total memory and just under nine virtual machines. This is
almost four cores and 5GB of memory per VM.
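
The per-VM ratios quoted above follow directly from those averages; a minimal Python sketch of the arithmetic, using the rounded figures from this paragraph, is shown below.

# Per-VM arithmetic behind the Figure 6 discussion, using the rounded
# averages quoted in the text above.
configs = {
    "4-core server":   {"cores": 4,  "memory_gb": 14, "vms": 6},
    "32+ core server": {"cores": 32, "memory_gb": 45, "vms": 8.5},
}
for name, c in configs.items():
    print(f"{name}: {c['cores'] / c['vms']:.1f} cores/VM, "
          f"{c['memory_gb'] / c['vms']:.1f} GB/VM")
# Prints roughly 0.7 cores and 2.3GB per VM for the 4-core case and roughly
# 3.8 cores and 5.3GB per VM for the 32+ core case, consistent with the rounded
# "one core and 2.5GB" and "almost four cores and 5GB" figures above.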

As the core count of these servers increases, so too does the prevalence of memory-
intensive applications such as business processing, Oracle Database, business
analytics, and collaborative applications (see Figure 7). As shown in Figure 6, VM
densities for servers with high core counts level off at 8.5 VMs per server.
Interestingly, customers are able to virtualize a broader set of applications as the core
count of the server increases. IDC expects that without a change to memory
capabilities, VM densities will stabilize on higher-end systems as customers deploy
more memory-intensive applications on these servers.




FIGURE 5

Virtual Server Configuration Requirements: x86-Based Servers Only
Q.    Which of the following hardware components are mainly driving the richer configurations on
      your virtual servers?

Components shown (% of respondents who mentioned that component is driving richer
configurations): Memory, Processors, Storage, I/O devices, Other

n = 400
Note: Multiple responses were allowed.
Source: IDC's Server Virtualization Multiclient Study, 2009




FIGURE 6

Memory Density and VM Density by Server Core Count

Series plotted by server core count (4 cores, 8 cores, 16 cores, 32+ cores):
      Average memory (GB)
      Average number of VMs
      Average number of cores per VM
      Average memory per VM (GB)

n = 400
Source: IDC's Server Virtualization Multiclient Study, 2009




FIGURE 7

Virtual Server Workload Profile by Server Core Count




n = 400
Source: IDC's Server Virtualization Multiclient Study, 2009



Automation a Key Driver to Future Success in Virtualization
Most customers have invested far less in systems management and automation tools
relative to the investments that have been made in hardware virtualization. Consequently,
many datacenters still employ manually intensive processes to manage their virtual
machines, processes that are often modeled on how they manage their physical machines.
For instance, even though most IT organizations will leverage mobility tools that enable
the movement of virtual machines from one physical server to another, most of this
migration is done using a combination of manual intervention and point tools, and typically
these VMs are moved for the purposes of maintenance (not failover). This movement
tends to happen monthly or quarterly and usually during off-hours.

While the success of virtualization has largely been built on server hardware savings,
the future success of an increasingly virtualized architecture is in automation.
Automation provides IT organizations with the ability to link workflow practices to an
"on-demand" and highly utilized infrastructure. Most importantly, automation enables
IT organizations to minimize the manually intensive tasks of systems administrators
and significantly lower maintenance costs that can be paralyzing to innovation. As a
result, customers are building a shared pool of compute, memory, I/O, and storage
upon which to support existing applications and launch new projects as well as
reduce datacenter power and cooling demands.



Changing Thinking Required in the Use of Automation Tools to
Drive Up VM Densities
Most IT organizations are a long way from fully trusting workload-balancing tools that
could automate many of these tasks. IDC expects that if customers don't significantly
improve automation capabilities for their virtualized environments, IT management costs
will actually rise over the next five years as systems administrators struggle to maintain
a growing installed base of virtual servers that must be patched, upgraded, and
secured just like any physical server (see Figure 8). Without automated workload-
balancing techniques, customers will have to continue to build in systems overhead,
which limits their ability to more fully utilize system resources. Application availability
and performance will also be at risk, since bottlenecks are likely on a heavily loaded
system that cannot seamlessly bring in additional resources on demand.

As customers begin to build a new automation platform for their virtual environments,
memory-rich systems can bridge the movement to automation by providing the
appropriate headroom to successfully drive up VM densities.




FIGURE 8

New Economic Model for the Datacenter
Management Costs Shift to Virtual Servers




Source: IDC, 2009




IBM's Memory Extension Solution for
Virtualization and Databases

In response to customer requirements for higher memory footprints in virtualized
servers and for high-end databases, IBM has released its eX5 server line with its
MAX5 memory technology that can provide up to double the amount of physical
memory available per server relative to industry standards. The eX5 server line is the
fifth generation in IBM's Enterprise X-Architecture. IBM has been innovating around
Intel-based solutions since 2000 to create a more scalable x86-based architecture to
balance processing, memory, and I/O for higher-end workloads.

MAX5 is utilized across IBM's newly released eX5 servers in 2-socket, 4-socket, and
8-socket configurations for a maximum of 1TB, 1.5TB, and 3.0TB of total memory in
each of the respective systems with 16GB DRAM modules. These large memory
capacities are made possible by attaching the IBM System x MAX5 memory
expansion drawer, thereby increasing the number of available DIMM slots. The MAX5
memory expansion drawer provides 32 additional DIMM slots for each eX5 rack
server. Thus, a 2-socket server can be expanded to 64 DIMM slots, a 4-socket server
can be expanded to 96 DIMM slots, and each of the server chassis in an 8-socket
server can be expanded to 192 DIMM slots.
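
The capacity figures above follow from slot count and module size; the short Python sketch below reproduces that arithmetic using the DIMM slot counts and 16GB modules cited in the preceding paragraph.

# Maximum-memory arithmetic for eX5 systems with MAX5 attached, using the
# DIMM slot counts and 16GB module size cited above.
dimm_size_gb = 16
dimm_slots = {"2-socket + MAX5": 64, "4-socket + MAX5": 96, "8-socket + MAX5": 192}

for config, slots in dimm_slots.items():
    total_tb = slots * dimm_size_gb / 1024
    print(f"{config}: {slots} slots x {dimm_size_gb}GB = {total_tb:g}TB")
# -> 1TB, 1.5TB, and 3TB of maximum physical memory, respectively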


The Advantages of Memory-Dense Servers
IT organizations have been able to achieve substantial consolidation objectives with
virtualization to date, but in order for IT to continue to drive down costs in the
datacenter, additional improvements are needed within hardware solutions to drive up
VM densities. If customers are to consider more than 20 VMs per server, they will
need to procure servers with very high memory capabilities. Given that a proportional
increase in processor counts is not required, IDC believes that organizations will
increasingly look to a new set of server infrastructure that scales memory capacity
while optimizing for processor counts. There are multiple benefits to this type of
"memory-rich" system:

     Scale virtual server environments without installing new physical servers.
     By procuring servers with higher memory capabilities, IT organizations can choose
     to grow their installed base of virtual servers as their requirements increase
     without adding another physical server. Customers can scale their server
     environment by installing additional memory modules rather than installing a new
     server. This approach saves not only hardware, real estate, and power and
     cooling costs but also the time required to order, build, and deploy a new piece
     of hardware.

     Choose DIMM counts, DRAM modules, and overall memory costs. By
     selecting servers with high numbers of DIMM slots, customers can choose to fill
     these slots with lower-cost 2GB and 4GB DRAM modules or maximize the
     available memory with more expensive 8GB or 16GB DRAM modules. Customers
     can also decide whether to fill all of the DIMM slots with less expensive memory
     or to use fewer, more expensive DRAM modules and leave free slots for future
     expansion (a simple cost comparison is sketched after this list).




    Improve application choice for physical and virtual servers. Memory-rich
    servers can be used not only for delivering high numbers of virtual machines per
    server but also for hosting higher-end 64-bit workloads such as large databases
    and OLTP, ERP, or CRM solutions that are memory and/or I/O intensive and are
    sensitive to the overhead of virtualization. This type of architecture also makes
    virtualization of these higher-end workloads more realistic. While customers may
    choose to install fewer, larger VMs on these servers, they can still reap the
    additional benefits of virtualization, mainly higher availability and improved
    flexibility from mobility and deployment tools.

    Better leverage processor-based software pricing. For customers that have
    applications priced by socket or core, implementing memory-rich systems without
    an increase in socket or core count means that IT organizations can take
    advantage of existing software pricing and improve consolidation rates without an
    increase in software costs.

    Aid in migrating large databases to a virtual environment or x86
    architecture. With massively scalable memory architectures, x86 customers will
    have greater choice in where to run their large databases. Prior to these
    innovations, customers would typically deploy large databases on richly
    configured standalone systems. Memory capacities in excess of 1TB provide
    customers with significantly more options for migrating these databases from
    existing platforms. Memory-rich systems also open up the possibility of
    virtualizing these databases so that customers can exploit the advantages of
    mobility and rapid deployment that come with virtualization.

    Improve database performance by providing more memory addressability
    and memory sharing. IT organizations could choose to use memory-rich
    systems for the purposes of improving the performance of large databases on
    x86 platforms. Enhanced memory addressability reduces thrashing for memory-hungry
    databases and improves memory sharing.
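
To make the DIMM population trade-off sketched earlier in this list concrete, the Python snippet below compares two ways of filling the same server. The per-module prices are purely hypothetical placeholders, not vendor or market pricing, and are included only to show the shape of the comparison.

# Hedged sketch of the DIMM population trade-off described in the list above.
# Module prices are hypothetical placeholders, not vendor pricing.
total_slots = 64  # e.g., a 2-socket eX5 server with a MAX5 drawer attached

options = {
    # option name: (module size in GB, modules installed, assumed price per module in USD)
    "fill all 64 slots with 4GB modules":       (4, 64, 100),
    "install 16 x 16GB modules, leave 48 free": (16, 16, 500),
}
for name, (size_gb, count, price) in options.items():
    capacity_gb = size_gb * count
    cost = count * price
    free_slots = total_slots - count
    print(f"{name}: {capacity_gb}GB for ${cost:,}, {free_slots} slots left for growth")

With these assumed prices, both options land at the same capacity; the smaller modules are cheaper up front, while the denser modules leave 48 slots free for later expansion.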


CONCLUSION
IDC believes that a new IT business cycle has begun. Over the next 10 years, IT
organizations will be challenged to meet increasing demands from the business without
innovating around technology. At the same time, the expectation is to continue to drive
greater efficiencies and maximize IT budgets. As businesses become increasingly
connected and interconnected to technology, the need to support an ever-growing
portfolio of applications and analytics requires a smarter set of IT systems.

Virtualization will be at the heart of future datacenter transformations and
fundamentally requires a different set of systems that are tightly integrated and
purpose built for virtualization. This new generation of servers is designed from the
ground up to support virtual machines and will require large memory footprints to
optimize virtual workloads and large databases. These systems bring together server,
storage, and networking systems as well as automation tools that seek to reduce
management complexities that have become a burden for most large IT
organizations. While these systems will be more proprietary in nature, the trade-off is
in simplifying deployment and maintenance.




To continue to drive efficiencies and address ongoing consolidation needs, IT
organizations should carefully assess the total cost of implementing memory-rich
systems with high VM densities and scalable workloads against the more moderate
virtualization goals they have today. IDC believes that without a change in IT practices
and policies, the cost of computing will continue to rise as virtualization saturates at
more modest consolidation levels.

To drive up VM densities, customers should:

     Balance newer processing capabilities in systems with dense memory
     configurations. This is essential for a host of benefits: improving consolidation
     ratios, expanding the choice of physical and virtual servers for more applications,
     leveraging processor-based software licensing, enabling migration of large
     databases to a virtual environment or x86 architecture, and improving database
     performance with more memory addressability and memory sharing.

     Take advantage of innovations in processing architecture with embedded
     virtualization assist technology to enable offloading and lower the overhead from
     the hypervisor.

     Implement networked storage solutions that enable mobility of virtual machines
     across physical systems and allow for optimization of applications across the
     entire datacenter while still meeting SLA requirements for availability and
     performance.

     Implement automation and workload-balancing tools to reduce the amount of
     required hardware for overhead purposes and reach a higher level of system
     utilization and lower staff maintenance costs.

     Consolidate applications with the same operating system on physical servers to
     encourage page sharing between applications. This lowers the overhead on
     system memory should capacity become low.

     Aggressively test current IT practices and policies and reevaluate if these serve
     longer-term goals for virtualization adoption and consolidation. This will likely
     require a change in current thinking and may be the most difficult change to
     make in creating a more integrated set of technologies for the future datacenter.




Copyright Notice

External Publication of IDC Information and Data — Any IDC information that is to be
used in advertising, press releases, or promotional materials requires prior written
approval from the appropriate IDC Vice President or Country Manager. A draft of the
proposed document should accompany any such request. IDC reserves the right to
deny approval of external usage for any reason.

Copyright 2010 IDC. Reproduction without written permission is completely forbidden.
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8
 
NIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopNIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 Workshop
 
RAG Patterns and Vector Search in Generative AI
RAG Patterns and Vector Search in Generative AIRAG Patterns and Vector Search in Generative AI
RAG Patterns and Vector Search in Generative AI
 
PicPay - GenAI Finance Assistant - ChatGPT for Customer Service
PicPay - GenAI Finance Assistant - ChatGPT for Customer ServicePicPay - GenAI Finance Assistant - ChatGPT for Customer Service
PicPay - GenAI Finance Assistant - ChatGPT for Customer Service
 
Secure your environment with UiPath and CyberArk technologies - Session 1
Secure your environment with UiPath and CyberArk technologies - Session 1Secure your environment with UiPath and CyberArk technologies - Session 1
Secure your environment with UiPath and CyberArk technologies - Session 1
 
Empowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintEmpowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership Blueprint
 
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online CollaborationCOMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
 
AI Fame Rush Review – Virtual Influencer Creation In Just Minutes
AI Fame Rush Review – Virtual Influencer Creation In Just MinutesAI Fame Rush Review – Virtual Influencer Creation In Just Minutes
AI Fame Rush Review – Virtual Influencer Creation In Just Minutes
 
Meet the new FSP 3000 M-Flex800™
Meet the new FSP 3000 M-Flex800™Meet the new FSP 3000 M-Flex800™
Meet the new FSP 3000 M-Flex800™
 
GenAI and AI GCC State of AI_Object Automation Inc
GenAI and AI GCC State of AI_Object Automation IncGenAI and AI GCC State of AI_Object Automation Inc
GenAI and AI GCC State of AI_Object Automation Inc
 
Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024
 
Comparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioComparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and Istio
 

The Value of Memory-Dense Servers: IBM's System x MAX5 for Its eX5 Server Family

  • 1. WHITE PAPER
The Value of Memory-Dense Servers: IBM's System x MAX5 for Its eX5 Server Family
Sponsored by: IBM
Michelle Bailey
March 2010

IDC OPINION
The technology industry has reached a crossroads. After more than a decade of physical server sprawl, nearly exponential growth in storage, and a proliferation of network technologies, IT organizations are now facing tremendous challenges in planning for a future enterprise architecture that is less expensive, less complex, and more agile than today's infrastructure. At the core of this reinvention is virtualization and, increasingly, a converged set of IT infrastructure that is built on a service-centric approach to supporting the business. This new technology cycle is squarely aimed at improving utilization rates, driving efficiency across the datacenter, and simplifying deployment and ongoing maintenance in order to ultimately shorten time to market and optimize the business value from IT investments.

Many IT organizations are well on their way to creating a more flexible and responsive enterprise architecture. Server virtualization has quickly become mainstream and is the foundational platform for the datacenter. More than 50% of all server workloads are now deployed on virtual machines, and this is driving a sea change in the types of technologies that IT organizations are procuring and configuring and their approach to IT processes and practices.

We have already seen customers move toward more richly configured servers to maximize the number of virtual machines (VMs) consolidated per physical server. The correct balance of processor, memory, and I/O is critical in architecting an effective virtualization solution. Initially, the emphasis on building physical systems for virtual machines focused on multicore processors. However, with the maturity in virtualization, most IT organizations now report that the single greatest limiter in driving higher VM densities is tied to the amount of memory that their virtual machines can access. Servers that were previously built to support single applications have become inadequate in meeting the virtualization goals of customers. Prior to virtualization, only the most demanding workloads required high memory footprints — large databases, OLTP applications, and enterprise ERP and CRM solutions. Today, because each virtual machine requires its own memory to ensure consistent application performance, systems with large memory capabilities become essential. As a result, new x86-based servers are coming to market that can massively expand memory capacities.
  • 2. With this change in technology comes a new set of metrics for measuring ongoing success in virtualization. "Cost per application" or "cost per VM" is now used to gauge the effectiveness of technology investments, and as a consequence, customers are looking to match their consolidation goals with newer systems infrastructure that helps maximize VM densities relative to physical hardware.

SITUATION OVERVIEW
A New Approach to Datacenter Economics Is Required
For many years, IT organizations would install at least one physical server per application, and often three to five servers per application, when taking into account test/development, staging, and disaster recovery environments. This inevitably led to an explosion in the number of physical systems and devices installed as well as datacenter sites. Prior to virtualization, most IT organizations faced:
Physical server sprawl. The number of installed physical servers has increased sixfold from just over 5 million in 1996 to more than 30 million in 2010.
Overprovisioning and underutilized assets. Most applications consume a fraction of a standalone server's total capacity, averaging 5–10% CPU utilization of a typical x86 server.
Spiraling operational costs. Most customers have underinvested in systems management and automation tools relative to the investments that have been made in x86 systems infrastructure. This has meant that many datacenters employ manually intensive processes, resulting in greater burdens on staff.
Server sprawl that exacerbates the power and cooling challenges of aging datacenter facilities. The average age of a datacenter in the United States is 12 years. This means that the typical datacenter was built to support a substantially different set of infrastructure that has become increasingly dense over time. Most datacenters were designed to support 1–2kW per rack versus 8–15kW per rack that we routinely observe.

Virtualization Is the Killer App for the Datacenter
Virtualization technologies have completely transformed the way in which customers build, deploy, and manage their systems infrastructure. Virtualization tools allow multiple logical servers or "virtual machines" to run on a single physical server. By consolidating applications onto fewer physical servers, customers have been able to slow the sprawl of physical servers within their datacenters. In fact, today most datacenters report that virtualization has become the default build for new server installations (see Figure 1).
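To make the "cost per VM" metric above concrete, the following is a minimal back-of-the-envelope sketch in Python; the server prices, operating costs, amortization period, and consolidation ratios are illustrative assumptions, not figures from the study.

    # Illustrative cost-per-VM comparison; every input below is an assumed value,
    # not IDC data. Cost per VM = (amortized hardware cost + annual opex) / VM density.
    def cost_per_vm(server_cost, annual_opex, years, vms_per_server):
        annual_capex = server_cost / years
        return (annual_capex + annual_opex) / vms_per_server

    standard = cost_per_vm(server_cost=8000, annual_opex=2500, years=3, vms_per_server=6)
    memory_rich = cost_per_vm(server_cost=15000, annual_opex=3000, years=3, vms_per_server=20)
    print(f"standard build:    ${standard:,.0f} per VM per year")     # ~$861
    print(f"memory-rich build: ${memory_rich:,.0f} per VM per year")  # ~$400

Even with a higher purchase price, the denser configuration wins on this metric as long as the added memory actually translates into more VMs per physical server.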
  • 3. Customers have realized three primary benefits in deploying virtualization technologies:
Physical server consolidation. Consolidation remains the main driver for deploying virtualization today. By consolidating multiple virtual machines on a single physical server, customers have less server hardware to purchase and fewer installed servers. The most direct benefits are server hardware savings and, consequently, fewer hardware maintenance agreements. Other benefits include reduced energy demands for the datacenter and lower requirements for floor space and rack space. This consolidation helps in reducing staff burdens for purchasing, deployment, and hardware maintenance; however, customers have yet to see any significant benefit from application and OS management.
Improved availability and disaster recovery. Mobility tools enable the migration of a virtual machine from one piece of physical server hardware to another. Customers have found these technologies particularly useful for reducing planned downtime and alleviating the pressure on shrinking maintenance windows. Mobility tools are also used to combat unplanned downtime and can be used alone or in conjunction with existing tools such as clustering and replication. Over time, we expect that customers will be able to regularly move virtual machines not just across the datacenter floor but also from one site to another, creating a new paradigm for disaster recovery.
Improved flexibility. Virtualization has allowed customers to be more responsive to the business. Virtual server deployments can literally reduce the time to deploy a server to minutes compared with days or even weeks for physical server deployments, meaning that time to market is significantly reduced. Virtualization also decouples the server hardware from the application so that maintaining legacy applications is greatly simplified.
  • 4. FIGURE 1
Server Virtualization Adoption
Q. Which of the following statements most closely describes the build decision for new server hardware at your organization?
Virtualization is the default build for new server hardware unless a case can be made for a standalone, unvirtualized server
Standalone servers are the default build, but we strongly advise or incent our application owners to use virtualization where possible
Standalone servers are the default build, and we will suggest virtualization with application owners but will not push it
Standalone servers are the default build, and we will deploy virtualization only if our customers request it
[Horizontal bar chart; axis shows 0–50% of respondents.]
n = 400
Source: IDC's Server Virtualization Multiclient Study, 2009

The Impacts of Mainstream Server Virtualization Adoption
Given the broad adoption of virtualization, the physical server market has changed substantially and the number of installed servers worldwide is leveling off. However, at the same time, the number of virtual machines is exploding. This "virtual server sprawl" is already having a profound impact on IT operations and procurement strategies.
Virtual Machine Sprawl a Rising Datacenter Cost
IDC expects that more than 50 million virtual servers and just 30 million physical systems will be installed by 2013, resulting in more than 80 million logical machines (see Figure 2).
  • 5. FIGURE 2
New Economic Model for the Datacenter Shifts to Automation: Tools Are a Requirement
Source: IDC, 2009

Virtual Machine Densities on the Rise
The rapid growth in the number of virtual machines is due not just to the growing proportion of servers being virtualized but also to the growing number of virtual machines installed per physical server. After years of building in overhead on hardware resources to help guarantee service-level agreements (SLAs), most customers had modest goals for increasing the utilization of their servers. Many report an ideal of moving from 5% or 10% utilization for standalone servers to 30% or 40% utilization for virtual servers. This has meant that on average, the number of VMs per server has been approximately 6 to 1. Figure 3 demonstrates the average number of VMs deployed per physical server, according to a recent survey of 400 systems administrators. While a consolidation ratio of 6 VMs per server is the average, IDC routinely sees customers standardizing on consolidation ratios of 8:1 or 10:1 and leading-edge customers deploying 25, 30, or even 40 VMs per physical server.
  • 6. Changing Server Configurations to Optimize for Virtualization
IDC finds that IT organizations with more aggressive VM density goals are deploying more richly configured systems with significantly higher memory installations (see Figure 4). To achieve this increase in memory, customers will often buy servers with higher processor counts for two reasons:
1. The higher the socket count, the greater the access to physical memory.
2. Servers with higher numbers of sockets tend to have higher numbers of DIMM slots on the motherboard.
Often, we find that customers that purchase systems with high core counts for improved memory accessibility have underutilized processors.

FIGURE 3
Server Virtualization Densities, 2008
1 VM per physical server: 10.9%
2–4 VMs per physical server: 42.2%
5–9 VMs per physical server: 24.3%
10–14 VMs per physical server: 10.2%
15–19 VMs per physical server: 4.5%
20–24 VMs per physical server: 4.5%
25+ VMs per physical server: 3.4%
n = 400
Source: IDC's Server Virtualization Multiclient Study, 2009
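As a rough cross-check on the roughly 6:1 average cited in the preceding section, the sketch below recomputes a weighted mean from the Figure 3 distribution; the bucket midpoints (and the cap on the 25+ bucket) are assumptions, so the result is only indicative.

    # Approximate average VM density implied by the Figure 3 distribution.
    # Each tuple is (share of respondents, assumed midpoint of the VM-count bucket).
    buckets = [
        (0.109, 1),   # 1 VM per physical server
        (0.422, 3),   # 2-4 VMs
        (0.243, 7),   # 5-9 VMs
        (0.102, 12),  # 10-14 VMs
        (0.045, 17),  # 15-19 VMs
        (0.045, 22),  # 20-24 VMs
        (0.034, 27),  # 25+ VMs, midpoint capped arbitrarily
    ]
    implied_avg = sum(share * midpoint for share, midpoint in buckets)
    print(f"implied average density: ~{implied_avg:.1f} VMs per physical server")  # ~7

With these assumed midpoints the mean lands near 7 VMs per server, in the same range as the survey's reported average, and most of the mass clearly sits in the 2–9 VM buckets.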
  • 7. FIGURE 4
Server Virtualization Densities by Memory Installed per Server
Average memory installed per server (GB), by number of VMs per server:
<4 VMs: 12.1GB
4–5 VMs: 21.2GB
6–9 VMs: 29.5GB
10–19 VMs: 32.3GB
20+ VMs: 41.7GB
n = 400
Source: IDC's Server Virtualization Multiclient Study, 2009

New Hardware Solutions Are Required for Substantial Increases in VM Densities
IDC research shows that customers are expecting to achieve utilization rates of 60–80% on their hardware compared with 30–40% today. This type of utilization is on par with that seen in mainframe technologies. To meet this goal, IT organizations must make substantial changes in the way they purchase and configure their server hardware. They must recognize that:
Memory capacity is just as important as processor power in virtual server configurations. For the past several years, IT organizations have been taking advantage of improvements in multicore technology to drive up VM densities. Also, new hardware assist functionality built into processors has helped reduce virtualization overhead and enabled I/O offloading. However, while processor improvements have been extremely beneficial, many customers now report that the biggest constraint to increasing VM densities lies in the ability to add memory to a system (see Figure 5).
Virtualized servers have much richer configurations relative to standalone servers. IDC continues to see customers buying servers with large numbers of cores as well as large numbers of DIMM slots to support additional memory for virtualization. Typically, we see virtualized x86 servers with 28GB of RAM and a disproportionate share of 4–8 socket configurations, compared with just 4GB of RAM and 1–2 sockets on unvirtualized servers. Servers with higher processor counts provide additional memory access by default because they typically have greater numbers of DIMM slots and higher overall memory capacities.
  • 8. Physical memory can be severely limiting to VM densities. Virtual machines must have access to enough physical memory to start the VM and run the guest operating system as well as the application. Administrators have to specify either the total amount of system memory required or the maximum, minimum, and shared memory needed, depending on their choice of virtualization technology. With higher numbers of VMs per server, memory can quickly become overcommitted. So without extended memory solutions, IT organizations have to limit the number of VMs per server (and therefore increase the number of physical servers installed), increase the number of installed sockets per server to raise the amount of addressable memory on a system, or purchase expensive high-capacity DRAM modules.
Types of applications also impact the memory requirement for virtual servers. The size of an application also has a substantial impact on the number of VMs installed per server. The number of users, the active concurrency of these users, and the memory addressability requirements of the application play a large role in determining the VM density of a virtualized server. Database and OLTP applications, for example, have both high memory and I/O requirements and are not suitable candidates for virtualization with limited memory configurations and where there is overhead from the hypervisor.

Traditional Thinking Hampers VM Densities
IDC's research shows that as the number of cores on a virtual server increases, so too does the memory configuration. VM densities also rise and then level off at just under 10 VMs per server on average. Today, this is primarily because servers with higher core counts are typically used to support higher-end workloads. VM densities actually start to decline with 32 or more installed sockets due to the increased use of richer applications on these multiprocessor servers. So rather than driving up VM densities on these larger boxes, many customers are applying traditional thinking to systems configuration — that is, that smaller applications run on smaller servers and large applications run on larger servers.
Figure 6 displays the average amount of installed memory and the corresponding number of virtual machines based on core count. Servers with four cores in total (typically dual-socket, dual-core processor systems) average 14GB of installed RAM and support just six virtual machines. This translates into approximately one core and 2.5GB of memory per VM. In contrast, a virtualized server with 32 or more cores averages almost 45GB of total memory and just under nine virtual machines. This is almost four cores and 5GB of memory per VM. As the core count of these servers increases, so too does the prevalence of memory-intensive applications such as business processing, Oracle Database, business analytics, and collaborative applications (see Figure 7). As shown in Figure 6, VM densities for servers with high core counts level off at 8.5 VMs per server. Interestingly, customers are able to virtualize a broader set of applications as the core count of the server increases. IDC expects that without a change to memory capabilities, VM densities will stabilize on higher-end systems as customers deploy more memory-intensive applications on these servers.
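The per-VM arithmetic behind the Figure 6 averages is simple division, and the same arithmetic shows how physical memory caps density. The sketch below uses the survey means quoted above (so the ratios differ slightly from the rounded figures in the prose), and the 64GB/4GB sizing example at the end is an assumed configuration, not survey data.

    # Per-VM resource ratios implied by the Figure 6 survey averages quoted above.
    for label, cores, memory_gb, vms in [
        ("4-core server (dual-socket, dual-core)", 4, 14, 6.0),
        ("32+-core server", 32, 45, 8.5),   # density levels off at ~8.5 VMs
    ]:
        print(f"{label}: ~{cores / vms:.1f} cores and ~{memory_gb / vms:.1f}GB per VM")

    # Conversely, memory caps density: an assumed 64GB server hosting VMs that each
    # need 4GB, with ~4GB reserved for the hypervisor, tops out well below 20 VMs.
    ram_gb, per_vm_gb, hypervisor_gb = 64, 4, 4
    print(f"memory-bound ceiling: {(ram_gb - hypervisor_gb) // per_vm_gb} VMs")  # 15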
  • 9. FIGURE 5
Virtual Server Configuration Requirements: x86-Based Servers Only
Q. Which of the following hardware components are mainly driving the richer configurations on your virtual servers?
[Bar chart; vertical axis shows the percentage of respondents (0–90%) who mentioned that the component is driving richer configurations; components: Memory, Processors, Storage, I/O devices, Other.]
n = 400
Note: Multiple responses were allowed.
Source: IDC's Server Virtualization Multiclient Study, 2009

FIGURE 6
Memory Density and VM Density by Server Core Count
[Chart plotting, for servers with 4, 8, 16, and 32+ cores: average memory (GB, left axis 0–50), average number of VMs, average number of cores per VM, and average memory per VM (GB) (right axis 0–6).]
n = 400
Source: IDC's Server Virtualization Multiclient Study, 2009
  • 10. FIGURE 7
Virtual Server Workload Profile by Server Core Count
n = 400
Source: IDC's Server Virtualization Multiclient Study, 2009

Automation a Key Driver to Future Success in Virtualization
Most customers have invested far less in systems management and automation tools relative to the investments that have been made in hardware virtualization. Consequently, many datacenters still employ manually intensive processes to manage their virtual machines, processes that are often carried over from the management of their physical machines. For instance, even though most IT organizations will leverage mobility tools that enable the movement of virtual machines from one physical server to another, most of this migration is done using a combination of manual intervention and point tools, and typically these VMs are moved for the purposes of maintenance (not failover). This movement tends to happen monthly or quarterly and usually during off-hours.
While the success of virtualization has largely been built on server hardware savings, the future success of an increasingly virtualized architecture is in automation. Automation provides IT organizations with the ability to link workflow practices to an "on-demand" and highly utilized infrastructure. Most importantly, automation enables IT organizations to minimize the manually intensive tasks of systems administrators and significantly lower maintenance costs that can be paralyzing to innovation. As a result, customers are building a shared pool of compute, memory, I/O, and storage upon which to support existing applications and launch new projects as well as reduce datacenter power and cooling demands.
  • 11. Changing Thinking Required in the Use of Automation Tools to Drive Up VM Densities
Most IT organizations are a long way from fully trusting workload-balancing tools that could automate many of these tasks. IDC expects that if customers don't significantly improve automation capabilities for their virtualized environments, IT management costs will actually rise over the next five years as systems administrators struggle to maintain a growing installed base of virtual servers that need to be patched, upgraded, and secured just like any physical server (see Figure 8).
Without implementing automated workload-balancing techniques, customers will have to continue to build in systems overhead, which impacts the ability to more fully utilize system resources. Application availability and performance will be at risk, as bottlenecks will likely ensue on a system that is pushed to maximum utilization without the ability to seamlessly bring in resources on demand. As customers begin to build a new automation platform for their virtual environments, memory-rich systems can bridge the movement to automation by providing the appropriate headroom to successfully drive up VM densities.

FIGURE 8
New Economic Model for the Datacenter: Management Costs Shift to Virtual Servers
Source: IDC, 2009
  • 12. IBM's Memory Extension Solution for Virtualization and Databases
In response to customer requirements for higher memory footprints in virtualized servers and for high-end databases, IBM has released its eX5 server line with its MAX5 memory technology, which can provide up to double the amount of physical memory available per server relative to industry standards. The eX5 server line is the fifth generation in IBM's Enterprise X-Architecture. IBM has been innovating around Intel-based solutions since 2000 to create a more scalable x86-based architecture to balance processing, memory, and I/O for higher-end workloads.
MAX5 is utilized across IBM's newly released eX5 servers in 2-socket, 4-socket, and 8-socket configurations for a maximum of 1TB, 1.5TB, and 3.0TB of total memory, respectively, with 16GB DRAM modules. These large memory capacities are made possible by attaching the IBM System x MAX5 memory expansion drawer, thereby increasing the number of available DIMM slots. The MAX5 memory expansion drawer provides 32 additional DIMM slots for each eX5 rack server. Thus, a 2-socket server can be expanded to 64 DIMM slots, a 4-socket server can be expanded to 96 DIMM slots, and each of the server chassis in an 8-socket server can be expanded to 192 DIMM slots.

The Advantages of Memory-Dense Servers
IT organizations have been able to achieve substantial consolidation objectives with virtualization to date, but in order for IT to continue to drive down costs in the datacenter, additional improvements are needed within hardware solutions to drive up VM densities. If customers are to consider more than 20 VMs per server, they will need to procure servers with very high memory capabilities. Given that a proportional increase in processor counts is not required, IDC believes that organizations will increasingly look to a new set of server infrastructure that scales memory capacity while optimizing for processor counts. There are multiple benefits to this type of "memory-rich" system:
Scale virtual server environments without installing new physical servers. By procuring servers with higher memory capabilities, IT organizations can choose to grow their installed base of virtual servers as their requirements increase without adding another physical server. Customers can scale their server environment by installing additional memory modules rather than installing a new server. This approach saves not only on hardware, real estate, and power and cooling but also on the time to order, build, and deploy a new piece of hardware.
Choose DIMM counts, DRAM modules, and overall memory costs. By selecting servers with high numbers of DIMM slots, customers can choose to fill these DIMM slots with lower-cost 2GB and 4GB DRAM modules or maximize the available memory capacity with more expensive 8GB or 16GB DRAM modules. Customers can also decide whether to fill up the DIMM slots with less expensive memory or use fewer, more expensive DRAM modules and allow for future expansion with free DIMM slots.
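The maximum capacities quoted above follow directly from the DIMM slot counts; the short sketch below simply reproduces that arithmetic, with the only assumption being that every slot is populated with a 16GB module.

    # Maximum eX5 memory when every available DIMM slot holds a 16GB module.
    DIMM_GB = 16
    dimm_slots = {
        "2-socket eX5 + MAX5": 64,    # 32 on the server plus 32 in the expansion drawer
        "4-socket eX5 + MAX5": 96,
        "8-socket eX5 + MAX5": 192,
    }
    for config, slots in dimm_slots.items():
        print(f"{config}: {slots} x {DIMM_GB}GB = {slots * DIMM_GB / 1024:.1f}TB")

Populating the same slots with 2GB, 4GB, or 8GB modules scales the totals down proportionally, which is exactly the cost trade-off described in the DIMM bullet above.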
  • 13. Improve application choice for physical and virtual servers. Memory-rich servers can be used not only for delivering high numbers of virtual machines per server but also for hosting higher-end 64-bit workloads such as large databases and OLTP, ERP, or CRM solutions that are memory and/or I/O intensive and are sensitive to the overhead of virtualization. This type of architecture also makes virtualization of these higher-end workloads more realistic. While customers may choose to install fewer, larger VMs on these servers, they can still reap the additional benefits of virtualization, mainly higher availability and improved flexibility from mobility and deployment tools.
Better leverage processor-based software pricing. For customers that have applications priced by socket or core, implementing memory-rich systems without an increase in socket or core count means that IT organizations can take advantage of existing software pricing and improve consolidation rates without an increase in software costs.
Aid in migrating large databases to a virtual environment or x86 architecture. With massively scalable memory architectures, x86 customers will have greater choice in where to run their large databases. Prior to these innovations, customers would typically deploy large databases on richly configured standalone systems. Memory capacities in excess of 1TB provide customers with significantly more options for migrating these databases from existing platforms. Memory-rich systems also open up the possibility of virtualizing these databases so that customers can exploit the advantages of mobility and rapid deployment that come with virtualization.
Improve database performance by providing more memory addressability and memory sharing. IT organizations could choose to use memory-rich systems for the purpose of improving the performance of large databases on x86 platforms. Enhanced memory addressability reduces the thrashing that memory-hungry databases impose on system performance and improves memory sharing.

CONCLUSION
IDC believes that a new IT business cycle has begun. Over the next 10 years, IT organizations will be hard-pressed to meet increasing demands from the business without innovating around technology. At the same time, the expectation is to continue to drive greater efficiencies and maximize IT budgets. As businesses become increasingly connected and interconnected to technology, the need to support an ever-growing portfolio of applications and analytics requires a smarter set of IT systems. Virtualization will be at the heart of future datacenter transformations and fundamentally requires a different set of systems that are tightly integrated and purpose built for virtualization. This new generation of servers is designed from the ground up to support virtual machines and will require large memory footprints to optimize virtual workloads and large databases. These systems bring together server, storage, and networking systems as well as automation tools that seek to reduce management complexities that have become a burden for most large IT organizations. While these systems will be more proprietary in nature, the trade-off is in simplifying deployment and maintenance.
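Returning to the "better leverage processor-based software pricing" benefit above, the toy calculation below shows how license cost per VM falls when density rises on an unchanged socket count; the per-socket price is an assumed figure, not a vendor quote.

    # Per-socket licensing: adding memory (not sockets) spreads a fixed license
    # cost over more VMs. The price below is an assumption, not vendor pricing.
    LICENSE_PER_SOCKET = 2000   # assumed annual cost per socket
    SOCKETS = 2                 # unchanged when only memory is added

    for vms in (6, 12, 24):
        print(f"{vms:>2} VMs on {SOCKETS} sockets -> "
              f"${LICENSE_PER_SOCKET * SOCKETS / vms:,.0f} of license cost per VM")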
  • 14. To continue to drive efficiencies in the datacenter and address ongoing consolidation, IT organizations should carefully assess the total cost of implementing memory-rich systems with high VM densities, as well as scalable workloads, against the moderate virtualization goals they have today. IDC believes that without a change in IT practices and policies, the cost of computing will continue to rise as virtualization becomes saturated at more modest consolidation levels. To drive up VM densities, customers should:
Balance newer processing capabilities in systems with dense memory configurations. This is essential for a host of benefits: improving consolidation ratios, expanding the choice of physical and virtual servers for more applications, leveraging processor-based software licensing, enabling migration of large databases to a virtual environment or x86 architecture, and improving database performance with more memory addressability and memory sharing.
Take advantage of innovations in processing architecture with embedded virtualization assist technology to enable offloading and lower the overhead from the hypervisor.
Implement networked storage solutions that enable mobility of virtual machines across physical systems and allow for optimization of applications across the entire datacenter while still meeting SLA requirements for availability and performance.
Implement automation and workload-balancing tools to reduce the amount of hardware required for overhead purposes, reach a higher level of system utilization, and lower staff maintenance costs.
Consolidate applications with the same operating system on physical servers to encourage page sharing between applications. This lowers the overhead on system memory should capacity become low.
Aggressively test current IT practices and policies and reevaluate whether these serve longer-term goals for virtualization adoption and consolidation. This will likely require a change in current thinking and may be the most difficult change to make in creating a more integrated set of technologies for the future datacenter.

Copyright Notice
External Publication of IDC Information and Data — Any IDC information that is to be used in advertising, press releases, or promotional materials requires prior written approval from the appropriate IDC Vice President or Country Manager. A draft of the proposed document should accompany any such request. IDC reserves the right to deny approval of external usage for any reason.
Copyright 2010 IDC. Reproduction without written permission is completely forbidden.
XSW03070-USEN-00