JOURNEY TO POWER INTELLIGENT
IT—BIG DATA EMPLOYED
Mohamed Sohail
Implementation Specialist, EMC
Denis Canty
Technical Test Architect, EMC
Omar Aboulfotoh
Technical Support Engineer, EMC
2014 EMC Proven Professional Knowledge Sharing 2
Table of Contents
Executive Summary
Preface
Steps to a Power Intelligent Data Center
    Power Smart Analytic Software
    Virtualization Impact
    Sustainable Power Cost Savings
    Location and Design Considerations
    Air Management
    Right-Sizing and Energy Star
    NSIDC (National Snow and Ice Data Center) Project
Case Study: EMC Durham Cloud Data Center
    Overview
    Why EMC Decided to Build the Durham Data Center
    Design and Construction
    Durham Data Center Architecture
    Migration Process
    Awards
Case Study: "Iceland"
Appendix
Biographies
Disclaimer: The views, processes, or methodologies published in this article are those of the
authors. They do not necessarily reflect EMC Corporation’s views, processes, or
methodologies.
Executive Summary
Being environmentally sustainable is a major challenge in a global recession and a stretched
economy, not just for saving money but also for saving the planet. No one can ignore the
tremendous growth of data exchanged through the internet and the cloud, yet few people are
aware of what happens in the backend of this process, or of what lies behind a data center.
Sustainable data centers ensure that every watt IT consumes delivers the greatest business
benefit. Today’s data centers have become increasingly large and consume a large amount of
energy, leading to a greater amount of carbon emissions. It has become critical for data center
architects to consider best practices when building a new data center. Organizations such as
ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers) and
LEED (Leadership in Energy and Environmental Design) have set rules that must be followed
to remain compliant and sustainable.
This Knowledge Sharing article concentrates on best practices where "Cloud meets Big Data" in
our journey to optimize power efficiency. We highlight a winning idea from the 2012 EMC annual
innovation showcase: "EMC Power Smart: Energy Usage Big Data Model for EMC Products
Worldwide", a smart energy monitoring and reporting system. We also illustrate best practices
in modern, sustainability-compliant data centers that will help us in our journey toward green IT.
The article also introduces the importance of air management, free cooling, and the impact of
virtualization from an inside-out perspective. Additionally, we discuss data center location and
design considerations. Lastly, we introduce some role models that applied innovative
technologies to achieve outstanding results, such as the National Snow and Ice Data Center
and the EMC Durham Center of Excellence data center.
Preface
One of the great modern challenges is saving our earth and being sustainable. Power
consumption and carbon emissions are important factors affecting IT managers' decisions.
A power intelligent data center is defined as one in which the mechanical, lighting, electrical,
and computer systems are designed for maximum energy efficiency and minimum
environmental impact.
In the graph below are some of the reasons why we should be sustainable:
Figure 1-1: Reasons for adopting sustainable solutions
While many IT shops operate low-profile data centers safe from public scrutiny, in many
industries data centers are among the largest sources of greenhouse gas emissions, and
companies may be required to report data center carbon emissions in the near future.
The majority of data centers, both old and new, continue to waste energy at a prodigious rate.
There are many, including the big players, who have made gigantic strides for both themselves
and the industry in developing energy-efficient designs and operations that would have been
considered impossible just a few years ago, but they are still the exception, although not quite
so rare an exception as they once were.
“People are almost universally unaware of what it really costs to transmit, store and
retrieve the incalculable amount of data they create when they widely distribute and
replicate all the stuff we all now do on a daily basis.”
Robert McFarlane - Data center power and cooling expert
The power-waste problem has three major causes. One is running processors 24/7/365 that do
little to nothing. However, most servers have the ability to run in energy-saving mode, indicating
that the manufacturing industry has tried to take steps to reduce the problem. Still, IT managers
are afraid to invoke them out of fear that something won't respond as quickly as expected if it
takes a couple of extra nanoseconds to reach full power. However, there are two bigger reasons
that are, frankly, unforgivable:
1. Most companies are willing to consider energy conservation only if there is no additional
capital cost. Long-term savings, both of energy and cost, may be given lip service, but in
the end they often become much less important. This is partly due to our being held
hostage to quarterly results, and partly because energy conservation is still driven
almost entirely by dollars rather than environmental responsibility. A high percentage of
data centers still don't even know their power consumption or costs.
2. Too many engineering firms don't really understand energy efficient data center design,
which is different than the design of other energy efficient buildings, particularly where
mission-critical operations are involved. There is no excuse today for the power and
cooling infrastructures of new data centers to remain so energy inefficient, even if their
owners are unwilling to pay for the greater efficiencies that could be achieved with
today's equipment and techniques.
Changing the perception that data transmission is "free" is a matter of public education, which
articles like this may at least start to provide to a small number of people. However, as long as
there is resistance even to using energy-efficient light bulbs in our homes, we are unlikely to
limit our indulgence in all the "fun stuff" to which we have become addicted.
Some facts about data centers
"The industry dirty secret?" I've been writing and talking about it for more than 10 years
now, as have many other commentators. Sure, IT has been trying to keep it quiet, as
they know their organizations would not be very happy to find that 90% of the energy
going in to a data center is just wasted. Without a good plan as to how to move this
more to the below 50% level, only a brave IT director would say, "Hey, guess what? I'm
in charge of a system that is less than 10% efficient, and I don't have a clue what to do
about it."
Clive Longbottom - Service Director at Quocirca
Longbottom’s quote was made pre-virtualization and cloud. Now, virtualized systems can easily
run at greater than 50% utilization rates, and cloud systems at greater than 70%. Cooling
systems no longer have to keep a data center at less than 68 degrees Fahrenheit; data centers
can now run at close to 86 degrees Fahrenheit with little problem.
UPS systems are far more efficient now and can often replace the need for generators to fire up
when the power outage is only for a short period of time. All of these aspects mean that not only
can the IT hardware be more effective, but also the amount of energy used to support the data
center can be minimized. Consequently, a complete rework of a data center can move its PUE
from greater than 2.5 to less than 1.5—a massive shift in data center efficiency.
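As a sanity check on that shift, PUE (Power Usage Effectiveness) is simply total facility power divided by the power that actually reaches the IT equipment; a minimal sketch in Python, with illustrative figures only:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal (every watt drawn reaches IT equipment)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative before/after figures for a reworked center with a 1,000 kW IT load
before = pue(total_facility_kw=2600, it_equipment_kw=1000)  # 2.6
after = pue(total_facility_kw=1400, it_equipment_kw=1000)   # 1.4
print(f"PUE before rework: {before:.1f}, after: {after:.1f}")
```

At a PUE of 2.6, every watt of IT work costs an additional 1.6 W of supporting overhead; at 1.4, that overhead drops to 0.4 W.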
However, there is a downside of this: Carrying out such a rework means throwing out everything
that is already there. Few organizations would be happy to throw out several million dollars of
equipment and buy several million dollars more. Therefore, the majority of data centers are in a
state of change at the moment. Old hardware is still in service as new hardware is being
brought in, with the old equipment planned for replacement as its effective end of life is reached.
As such, over a period of five to 10 years a major shift will take place in the efficiency of private
data centers as they are reworked to adopt newer technologies. New data centers will be built to
LEED (Leadership in Energy and Environmental Design) and ASHRAE (American Society of
Heating, Refrigerating, and Air-Conditioning Engineers) standards, and few will have PUEs
above 1.3.
In addition, there is the increasing move toward co-location: sharing the data center facility
and making the most of the economies and effectiveness of scale from the UPSs and
generators of a massive facility, even where the equipment being installed for a single customer
is relatively small. More co-location providers now insist that customer racks and rows be
engineered to the latest standards to optimize the overall effectiveness of the facility and the
equipment it houses.
Then there is cloud. A cloud provider must be efficient, or else its business model will not work.
It is simply smart business to create a data center that has a low PUE and runs its equipment
at optimum utilization rates.
Finally, there are the government mandates themselves. For example, the Carbon Reduction
Commitment Energy Efficiency Scheme in the United Kingdom has caused many organizations
to focus on energy efficiency to avoid large "taxes" on their carbon emissions.
Steps to a Power Intelligent Data Center
“A large data center can consume as much power as 8,200 homes. Increasing its efficiency by
30% is equivalent to taking 4,700 cars off of the road.”
Dick Glumac, founder of Glumac, a full-service consulting engineering company
The idea is not a complicated one. It contributes on two fronts:
1. Working smartly with the most important elements of the data center from the 'inside',
the storage and the servers (this may be extended to the other elements depending on
the nature of the equipment). Running smart software and reporting tools that employ
big data analytics helps reduce the power consumed by data center components.
2. Working smartly on the data center from the 'outside', i.e. choosing the data center
location and using alternative power sources, and understanding how these affect the
power consumption of the data center.
Power Smart Analytic Software
The data explosion that is evident today has shown that data is the currency of the digital age;
data can be classed as the new electricity. In the 19th century, electricity turned from being a
science experiment into a necessity for human existence and advancement. The same can be
said for data today.
A major application for utilizing data is in monitoring power consumption of our systems. Our
products need to become smarter in how they are profiled in relation to energy use.
One proposition is to provide tools that measure power consumption at the device level so that
managers can manage energy the way they manage other aspects of storage. The kilowatt and
the kilowatt-hour are the standard power and energy metrics. Applied to storage, they give an
accurate way to measure kilowatts per terabyte. A more common metric at this point is
kilowatts per rack. Due to increased density, data centers today are pushing beyond 4 kilowatts
per rack; at 6 kilowatts per rack, they are getting into a heat danger zone.
An individual drive uses 5W to 15W of power depending on its capacity, rotation speed, form
factor, and operating state. However, "you can't just multiply the number of drives in an array by
some average power rating to get a total," says Mark Greenlaw, senior director, Storage
Marketing at EMC. Power consumption of the array is more than the sum of the power used by
the individual drives. For instance, controllers and other components also consume power. This
leads to storage density measurements of terabytes per square foot and terabytes per kilowatt.
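These density metrics are easy to compute once the non-drive overhead is included; a short sketch with illustrative numbers (the drive count, per-drive wattage, and 400 W controller overhead are assumptions, not vendor figures):

```python
def array_power_watts(n_drives: int, watts_per_drive: float,
                      overhead_watts: float) -> float:
    """Array power is more than the sum of its drives: controllers, fans,
    and other components add overhead on top of the drive total."""
    return n_drives * watts_per_drive + overhead_watts

def kw_per_terabyte(array_watts: float, capacity_tb: float) -> float:
    """Storage density metric: kilowatts consumed per terabyte stored."""
    return (array_watts / 1000.0) / capacity_tb

# Illustrative: 120 drives at 10 W each, plus 400 W of controller/fan overhead
watts = array_power_watts(n_drives=120, watts_per_drive=10, overhead_watts=400)
print(f"array draw: {watts} W, density: {kw_per_terabyte(watts, 240):.4f} kW/TB")
```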
Storage managers must also consider SAN switch power and cooling. Switches consume less
power in the data center than servers or storage mainly because there are relatively fewer
switches. Still, the power consumption of a switch is significant. "A large switch will use 1,000W
[1 kW] or more," says Ardeshir Mohammadian, senior power systems engineer at Brocade
Communications Systems. Higher port density and performance increases switch power and
cooling consumption. Thus, don't be surprised to see new “power measurements”, such as
kilowatt per port and kilowatt per gigabyte.
With such accurate data, backend smart software can determine which components consume
the most power and apply best practices to reduce it, or pursue the peak power saving available
from each device based on the real data acquired from the data center components. The data
can be collected by connecting the components to a private cloud on the customer's side and
then reported to a specialized service provider, i.e. EMC, to develop ways to run the arrays
together optimally and reach the best state of power consumption. Analysis of each device's
current configuration reflects the customer's power consumption and device configuration. The
retrieved data can be analyzed further by EMC experts to modify the design architecture,
microcode, and the installed software on these arrays to reduce power consumption and
maximize power utilization as much as possible, in compliance with the global initiative of
going green.
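The ranking step of such backend analysis, finding the components that draw the most power, reduces to a simple aggregation over collected readings; a minimal sketch, with invented component names and wattages:

```python
from collections import defaultdict

# Hypothetical telemetry samples: (component, instantaneous watts)
samples = [
    ("array-01/drives", 1150), ("array-01/controller", 420),
    ("san-switch-02", 980), ("array-01/drives", 1180),
    ("san-switch-02", 1010), ("array-01/controller", 440),
]

def top_consumers(samples, n=3):
    """Average each component's readings, heaviest consumers first."""
    readings = defaultdict(list)
    for component, watts in samples:
        readings[component].append(watts)
    averages = {c: sum(w) / len(w) for c, w in readings.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:n]

for component, avg_watts in top_consumers(samples):
    print(f"{component}: {avg_watts:.0f} W average")
```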
How data center managers interact with systems in regard to power consumption metrics must
evolve. Semi self-managing systems should be researched, whereby the system manages a
portion of its power consumption based on a series of readings taken from the system itself.
System power consumption depends on a hybrid mesh of interactions between software and
hardware components. Suppose the data collected were to include sensor readings from
PSUs, C-state analysis of CPUs, wake-up cause analysis of operating systems, and power
consumption increases from software execution. Intelligent algorithms could then consider
historical performance and system/engineer interaction to adjust lower-level system settings to
reduce the power consumption of the system. These algorithms could be hosted on the system
or in the cloud, but their data would come from a custom-built cloud-based data warehouse.
By utilizing the data warehouse further, it is possible to trend system performance based on
environmental, demographic, and time-based metrics to predict future outages and
inefficiencies. This would further enhance the ROI model for a potential customer.
Naturally, this would require building up the system's historical data, but over time the
systems themselves would become more power intelligent. One concern is that having
machine-based algorithms adjust system settings could lead to inefficiencies and even to
critical processes being closed. However, by utilizing a priority system, whereby both software
processes and hardware are tagged as critical, the algorithms would have no control over
them.
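Such a priority safeguard could work along these lines; a minimal sketch with invented component names, where the tuning loop throttles only what is not tagged critical:

```python
# Invented inventory: each entry is tagged critical or not
components = [
    {"name": "replication-engine", "critical": True,  "power_mode": "full"},
    {"name": "idle-cpu-pool",      "critical": False, "power_mode": "full"},
    {"name": "cold-tier-drives",   "critical": False, "power_mode": "full"},
]

def apply_power_tuning(components, utilization):
    """Move non-critical components into energy-saving mode when measured
    utilization is low; components tagged critical are never touched."""
    for c in components:
        if c["critical"]:
            continue  # the algorithm has no control over tagged components
        c["power_mode"] = "saving" if utilization < 0.25 else "full"

apply_power_tuning(components, utilization=0.10)
for c in components:
    print(c["name"], "->", c["power_mode"])
```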
The idea of a system energy usage data warehouse also offers the potential for EMC to group
and classify its customers' data centers into pods, where similar customers could then adapt
existing system settings to their own architectures.
Given that a large percentage of the software executed on EMC systems is custom EMC
software, the concept of green software can be adopted. This would enable systems to provide
feedback to our development teams on the energy used by our software applications, leading to
greener software and increased software value for our customers.
Benefits of this approach:
 Accurately determine the power consumption of every EMC product using a Big Data
approach to collect and analyze the necessary data to reach optimal results through
predictive means or otherwise.
 Measure the efficiency of the architecture design of the devices with different
configurations. This is a challenge if we have a full configuration, as we have many
factors in measuring the consumed power.
 Reduce energy use by our products + reduce data center cooling power = cost reduction
for our customers. This will help open the door for a new generation of energy efficient
products.
 Leverage innovation in EMC’s technologies to help conserve power consumed by EMC
and non-EMC products for customers. Future customers will also have a vision about
the expected power savings and benefits using EMC’s “cloud and big data approach”
giving EMC priority when it comes to data center planning.
 The customer will consider the TCO of the products, not only the initial price.
 This will help customers become sustainable, decrease emissions, and comply with the
rules that contribute to limiting global warming.
Figure 1-2: EMC Power Smart Solution Illustration (customer power consumption metric data
flows from customer systems into the EMC Power Smart Cloud with Analytics, which produces
a Power Smart customer report, feeds the EMC Power Smart software, and delivers a Power
Smart developer report to EMC software developers, increasing software value)
For further insight, please see EMC Power Smart – Sustainability Award Winner at the 2012
Innovation Showcase – Innovator: Denis Canty MIEI BEng MENC MEngSc
Virtualization Impact
A virtualization and consolidation project is often a step in the right direction toward sustainable
computing. Research indicates that a server often only utilizes between 5 and 15% of its
capacity to service one application. With appropriate analysis and consolidation, many of these
low utilization devices can be combined into a single physical server, consuming only a fraction
of the power of the original devices and saving on costs, as well as taking a step toward a more
environmentally friendly data center environment. The basic concept of virtualization is simple:
encapsulate computing resources and run on shared physical infrastructure in such a way that
each appears to exist in its own separate physical environment. This process is accomplished
by treating storage and computing resources as an aggregate pool which networks, systems,
and applications can leverage on an as-needed basis.
Virtualization drives business agility by simplifying business infrastructure to create a more
dynamic and flexible data center.
Virtualization is a key strategy to reduce data center power. With virtualization one physical
server hosts multiple virtual servers. Virtualization enables data centers to consolidate their
physical server infrastructure by hosting multiple virtual servers on a smaller number of more
powerful servers, using less electricity and simplifying the data center. Besides improving
hardware use, virtualization reduces data center floor space, makes better use of computing
power, and reduces the data center’s energy demands. Many enterprises are using
virtualization to curb the runaway energy consumption of data centers.
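The consolidation arithmetic can be sketched in a few lines; the per-server wattage and the 60% target utilization below are assumptions for illustration, not measured figures:

```python
import math

def consolidation_savings(n_physical: int, avg_utilization: float,
                          watts_per_server: float,
                          target_utilization: float = 0.60):
    """Rough model: total useful work is preserved while the remaining
    hosts run at the target utilization; the rest are powered off."""
    hosts_needed = math.ceil(n_physical * avg_utilization / target_utilization)
    watts_saved = (n_physical - hosts_needed) * watts_per_server
    return hosts_needed, watts_saved

# Illustrative: 100 servers averaging 10% utilization at 400 W each
hosts, saved = consolidation_savings(100, 0.10, 400)
print(f"{hosts} hosts after consolidation, roughly {saved:.0f} W saved")
```

Even this crude model shows why consolidation is usually the first step: at the research figure of 5 to 15% utilization, most physical hosts are doing almost no useful work.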
Sustainable Power Cost Savings
Figure 1-3: Cost influence curve
The data center industry is on the rise again, but with a different focus. In the '90s, it was
construction speed; this time around, it is operating efficiency. Data center owners have
learned the hard way that operating costs can quickly outweigh construction costs. Glumac has
developed a new data center design process with the efficiency issue at its core, which
translates into huge savings for its clients.
Large corporations are moving away from piecemeal green activities and are adopting
broader strategies to cope with the environmental issues that affect their business. For
the IT director this means less work on isolated green "hobbies", and more joined-up
thinking with peers to create sustainable policies.
Geoff Lane, partner for sustainable business solutions at accounting and consulting firm
PricewaterhouseCoopers and former head of environmental policy at the Confederation of
British Industry, reports that more clients are seeking advice on how to benchmark their green
activity.
"Companies regularly come to us and say, 'We have recycling programs or duplex printing
policies in place but we want to raise our game'."
A fresh focus for IT departments in all organizations is how to make the data centers
environmentally sustainable. Computing power per square foot may have increased with the
use of multicore chips and blades, but this concentration of hardware also generates more heat
and the problem of how to dissipate it.
According to analyst firm Gartner, most large enterprises' IT departments spend about 5% of
their total IT budgets on energy. This could rise by two to three times within the next five years.
One way to decrease power consumption is to better integrate facilities teams and engineers
with the IT department. "The typical engineer does not look past the power supply or the
gateway to the IT piece," says Patrick Fogarty, director of consultant engineering practice,
Norman Disney and Young, and a speaker at Datacenter Dynamics' Next Big Datacenter
Challenge energy summit in London in February.
Similarly, the IT team is predisposed to consume all available power to keep its applications
running, according to several delegates at the summit.
"If we could do it all over again without the legacy datacenter architecture, we would take a
more holistic view. From the CPU to the actual transmission, there are a lot of inefficiencies
leaked through cabling, plus the massive inefficiencies in the ways we cool IT," says Fogarty.
CIOs and their staff in many organizations are turning to the data center to try to plug these
leaks.
"One of the biggest issues around power consumption in IT is the datacenter," says Ben Booth,
global chief technology officer at research firm Ipsos, which has data centers scattered around
the globe, ranging from vast server farms to machines stuffed into back offices. Booth is
reviewing ways to consolidate these in order to reduce the bill, reduce numbers, and provide a
round-the-clock service to customers.
Location and Design Considerations
“Site selection can mean the difference between a data center costing $3 million or $11
million a year in utility costs".
Chien Si Harriman, Energy Modeling expert
When a company buys land, it is important to consider the property’s suitability to house a
server environment. The most desirable location is one that supports the data center site
selection objectives to safeguard data server equipment and accommodate growth and change
of the data center.
The location of a data center can be as critical as the services offered there. Many companies
need access to their servers on a 24/7/365 basis.
Data center requirements have changed dramatically in the last few years. Earlier, the main
concern in choosing a location for a data center revolved around network accessibility, as the
fiber networks connecting data centers were concentrated in the main cities. This is still a
concern.
However, the growth of high-speed network infrastructure and remote access capabilities allows
data centers to be built in any location. Nevertheless, selecting the physical location of the data
center still forms an integral part of data center strategy, keeping in mind a lot of other factors
that have a bearing on the total cost of ownership (TCO) of the organization.
Data center location continues to play a pivotal role in deciding its operational costs and
ensuring its smooth functioning. An improper data center location not only results in high
operating costs but may also lead to loss of revenue due to disruptions in data center
operations.
The other consideration is power. With power consumption on the rise, blackouts are becoming
a common problem in areas with poor power infrastructure. As data centers require enormous
amounts of power—failure of which can cause unexpected downtime—availability of abundant
low cost power is an important consideration. Companies today are looking at alternative
sources of energy such as hydroelectric power and biofuel that significantly reduce operational
costs.
Power-intelligent data centers are an obvious answer to efficient power management as
environmental consciousness has become a key corporate parameter. Moreover, data centers
built in cooler climates can reduce costs because outside air can be used for cooling.
Air Management
Improving "air management"—optimizing the delivery of cool air and the collection of waste heat
—can involve many design and operational practices. Air cooling improvements can often be
realized by addressing:
• Short-circuiting heated air over the top or around server racks.
• Cooled air short-circuiting back to air conditioning units through openings in raised
floors such as cable openings or misplaced floor tiles with openings.
• Misplacement of raised floor air discharge tiles.
• Poor location of computer room air conditioning units.
• Inadequate ceiling height or an undersized hot-air return plenum.
• Air blockages such as often happens with piping or large quantities of cabling under
raised floors.
• Openings in racks allowing air bypass (“short-circuit”) from hot areas to cold areas.
• Poor airflow through racks containing IT equipment due to restrictions in the rack
structure.
• IT equipment with side or top-air-discharge adjacent to front-to-rear discharge
configurations.
• Inappropriate under-floor pressurization - either too high or too low.
The general goal for achieving better air management should be to minimize or eliminate
inadvertent mixing between cooling air supplied to the IT equipment and collection of the hot air
rejected from the equipment. Air distribution in a well-designed system can reduce operating
costs, reduce investment in HVAC (heating, ventilation, and air conditioning) equipment, allow
for increased utilization, and improve reliability by reducing processing interruptions or
equipment degradation due to overheating.
Figure 1-4: Data Center Design and Air Flow
Solutions to common air distribution problems include:
• Use of "hot aisle and cold aisle" arrangements where racks of computers are stacked
with the cold inlet sides facing each other and, similarly, the hot discharge sides facing
each other (Figure 1-4).
• Blanking unused spaces in and between equipment racks.
• Careful placement of computer room air conditioners and floor tile openings, often
through the use of computational fluid dynamics (CFD) modeling.
• Collection of heated air through high overhead plenums or ductwork to efficiently return
it to the air handler(s).
• Minimizing obstructions to proper airflow.
Right-Sizing and Energy Star
The U.S. Environmental Protection Agency (EPA) recently extended its Energy Star program to
include rack servers.
Energy Star is a way to provide trustworthy information about products so that consumers can
make better decisions. People might want to choose high-efficiency products, but they don't
know how to compare efficiencies among products in a standardized way. There is a lot of
confusion about trying to buy high-efficiency products for the data center. [The Energy Star
program] is a huge challenge because it's so difficult to compare two servers as to their
function; all the benchmarks for server usage are hotly disputed between the vendors.
[The EPA] has been working on this for about 5 or 6 years. The genesis for doing this came out
of a charrette, a group of experts getting together to work on a problem, done by the Rocky
Mountain Institute. They brought people together from the data center industry to think about
this problem of energy efficiency well in advance of the current wave of interest. They wrote a
report where one of the results was to push the government to consider extending Energy Star
to servers and other data center related products. I think it's a great idea.
I think it can go further. Half of the power that comes in from the utility companies never even
makes it to the server. Rather, it's wasted in power systems and cooling systems, the kinds of
products that APC makes. We have a huge responsibility to fix that. How would that be
measured?
Energy Star has other programs besides these product-level certifications, in which they
recognize efficient organizations. They try to recognize solutions that are exceptional in their
field for performance; it's not benchmarked against others. I think that's a good way to start on
a difficult problem: just select solutions that we know have done a better job, and refine the
approach as our understanding improves.
I think they should start doing that for complete data center infrastructure designs. If each part is
efficient, it doesn't necessarily make the data center efficient. I can still make a bad data center
out of efficient parts. Though part-level qualifications are nice, if I take very efficient servers and
throw 18 million of them in a data center when I only needed six, I'm going to waste a lot of
power. It's not just the parts. We have to look at the complete system. Perhaps something like
what LEED (Leadership in Energy and Environmental Design) does?
I totally agree with that, because one of the things that people are going to find is: what will they
do with all the junk from the data center after it is retired in 12 years? Can it be recycled? LEED
thinks about those kinds of things; recyclability, disposability, etc. A lot of people aren't thinking
about that today but the day will come when they will be thinking about it for sure. While today
we can throw a lot of things away in a dumpster and get away with it, that is going to be less
likely in the future. All of the sudden it's going to be a big deal for people to think about the
environmental friendliness of the data center overall, not just energy efficiency.
If we look at the opportunity to save energy, most of the opportunity is in bad system design, not
in the parts. The key to energy efficiency in the data center is right-sizing. The number one
driver of waste in the data center—whether it's servers, power systems, cooling systems—is
over-sizing whatever it is we're looking at. And the amount of over-sizing that occurs within data
centers at all levels is rampant.
People oversize because it's so difficult to scale. Thus, people build a giant thing first and hope
that it's going to fill up. But while it's filling up, it's inefficient. Typically, a data center is expected
to survive for about 12 years and that number is increasing. If expected to last 12 years, it must
be built for a 12-year forward-looking goal. That's the logic being used, but how does one know
what they're going to need in 12 years? So what do they do? They know that they've been
growing, so they over-design these things, make them huge, and start out with a small load of IT
equipment and gradually grow it over time. Very inefficient.
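A toy model, with assumed numbers, illustrates the penalty of building for the year-12 load on day one. The fixed infrastructure losses are assumed to scale with installed capacity, and the 30% loss fraction is purely illustrative:

```python
# Toy model (assumed numbers): fixed losses scale with provisioned capacity,
# so a facility sized for year-12 load runs inefficiently while filling up.
def avg_overhead_ratio(capacity_kw, yearly_load_kw, idle_loss_fraction=0.3):
    """Average (infrastructure overhead / IT load) across the build-out years."""
    ratios = []
    for load in yearly_load_kw:
        overhead = idle_loss_fraction * capacity_kw  # loss tied to installed capacity
        ratios.append(overhead / load)
    return sum(ratios) / len(ratios)

loads = [100 * year for year in range(1, 13)]   # IT load grows 100 kW per year
monolithic = avg_overhead_ratio(1200, loads)    # sized for year 12 on day one
modular = sum(avg_overhead_ratio(load, [load]) for load in loads) / 12  # capacity tracks load
print(round(monolithic, 2), round(modular, 2))
```

Under these assumptions the monolithic build wastes roughly three times as much power per unit of IT load, averaged over its life, as a modular build whose capacity tracks demand.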
Storage managers can deploy storage in more energy-efficient ways. If high performance is not
needed, deploy 7,200 rpm or 10,000 rpm disks rather than 15,000 rpm models, as the slower
speeds use less energy. Similarly, smaller form-factor (2.5-inch) disk drives require only 5 volts
vs. 12 volts for standard 3.5-inch form-factor drives. Small form factors, however, usually have
smaller capacity (see "Energy tradeoffs").
Figure 1-5: Energy tradeoffs
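The tradeoff can be put in rough numbers. The per-drive wattages below are illustrative assumptions, not vendor specifications:

```python
# Illustrative idle-power figures (assumptions, not vendor data): watts per drive
DRIVE_WATTS = {"15k_3.5in": 14.0, "10k_2.5in": 6.5, "7.2k_3.5in": 8.0}

def annual_kwh(drive_count, watts_per_drive, hours=8760):
    """Yearly energy for a shelf of continuously spinning drives."""
    return drive_count * watts_per_drive * hours / 1000.0

# For 200 drives, moving from 15k rpm 3.5" to 10k rpm 2.5" would save:
saved = annual_kwh(200, DRIVE_WATTS["15k_3.5in"]) - annual_kwh(200, DRIVE_WATTS["10k_2.5in"])
print(round(saved), "kWh/year")
```

Even with assumed wattages, the shape of the result holds: slower, smaller drives cut spindle energy by roughly half, at the cost of per-drive capacity and performance.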
National Snow and Ice Data Center (NSIDC) Project
The Green Data Center project, funded by the National Science Foundation (NSF) under its
Academic Research Infrastructure Program, with additional support from NASA, shows NSIDC's
commitment to reducing its impact on the environment, and that there is significant opportunity
to improve the efficiency of data centers in the U.S.
The thesis "Coolerado and Modeling an Application of the Maisotsenko Cycle", written by
Benjamin A. Weerts, was approved by the Department of Civil, Environmental and Architectural
Engineering at the University of Colorado Boulder. It presents the conclusions of one of the
Master's students (Ben Weerts) who worked on the project.
The innovative data center redesign slashed energy consumption for data center cooling by
more than 90 percent, demonstrating how other data centers and the technology industry can
save energy and reduce carbon emissions. The heart of the design includes new cooling
technology that uses 90 percent less energy than traditional air conditioning, and an extensive
rooftop solar array that results in a total energy savings of 70 percent. The new evaporative
cooling units not only save energy, but also offer lower cost of maintenance.
This project was a successful effort by a consortium of university staff, a utility provider, and
Federal research organizations to significantly reduce the energy use of an existing data center
through a full retrofit of a traditional air-conditioning system. Cooling energy required to meet the
constant load has been reduced by over 70% for summer months and over 90% for winter
months.
The National Snow and Ice Data Center recently replaced its traditional cooling system with a
new air conditioning system that utilizes an economizer and Coolerado air conditioning units
based on the Maisotsenko cooling cycle. A data logging system was installed that measured the data
center's power consumption before and after the cooling system was replaced. This data was
organized and used to prove a 90% cooling energy reduction for the NSIDC. The data logging
system also collected temperatures and humidity of inlet and outlet air of a Coolerado air
conditioner. After using these data to validate a theoretical model developed by researchers at
the National Renewable Energy Laboratory, the model was used to simulate slightly modified
heat and mass exchanger designs of the Coolerado system to improve performance. A
sensitivity analysis found a few design parameters that are important to the thermodynamic
performance of the Coolerado system, while others proved insignificant.
Channel heights, sheet size, and ambient pressure have the most significant impact on
performance.
Overall, it was found that the current design performs reasonably well and with minor
modifications could perform optimally, as suggested by the theoretical model. The key
performance indicator used was the coefficient of performance (COP) which is defined in
Equation 18. The COP is a robust performance metric that by definition incorporates cooling
power, product air temperature, product air flow rate, and total air flow rate. If any of these
variables change, the COP will reflect that change. Note that only the product air mass flow rate
can be counted in the cooling power. Working air is very humid, almost saturated, and would be
uncomfortable in an occupied space; therefore, it is exhausted and not used for any useful
cooling. Although it's not a design variable per se, the ambient pressure, or elevation, has the
greatest impact on the performance, with over 11% increase at 2200 feet (from Boulder,
Colorado’s 5800 feet), and over 21% increase at sea level. It should be noted that the low
channel heights’ sensitivity simulation ranks between these two elevation impacts, signifying
that channel height is also a major performance driver with over 14% increase in COP
compared to the base case as both product and working air channel heights are decreased by
0.5mm (0.02"). This value was selected as a maximum change because Coolerado claims that
changing the channel heights more than this amount requires additional time to dial in the
proper flow and speed of the machine used in the manufacturing line. It is also interesting to
note that the Low Wet Wall simulation (reduced working air channel height) resulted in more
than double the change that the Low Dry Wall simulation showed. This implies that it is more
important for the working air channel height to be small and optimally sized for maximum
evaporation rate than it is for the product channel height.
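The thesis's Equation 18 is cited above but not reproduced in this paper. For reference, the conventional COP form consistent with the description (only the product air stream counted as cooling output, divided by total electrical input) would be:

```latex
\mathrm{COP} \;=\; \frac{\dot{Q}_{\mathrm{cool}}}{P_{\mathrm{elec}}}
\;=\; \frac{\dot{m}_{\mathrm{product}}\, c_p \left(T_{\mathrm{ambient}} - T_{\mathrm{product}}\right)}{P_{\mathrm{elec}}}
```

where the working (exhaust) air stream is excluded from the numerator, exactly as the text above explains.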
Project results in terms of energy savings, financial savings, or other environmental or
resource use benefits
In the past, cooling the computer room required more than 400,000 kWh of energy per year,
enough to power 50 homes. With the new system in place, the cooling power levels in the
summer of 2011 were reduced by 95%. In the winter, the eight patented indirect evaporative
cooling technology (IECT) units operate as highly efficient computer room humidifiers. Using the
air economizers enables the data center to meet or exceed all ASHRAE environmental
specifications.

Electricity savings are estimated to be $54,000/year, not including the solar array. If the owner
had replaced the existing 30-ton computer room air conditioning (CRAC) unit with a similar unit,
the estimated replacement cost would have been about $160,000, and the maintenance costs
for the CRAC unit were about $15,000/year. The new cooling system cost $250,000, but its
maintenance costs will be about $2,000/year. Payback on the cooling system is well under three
years, with costs over that time equaling $367,000 for the CRAC compared to $256,000 for the
new system.

In 2011, the project was presented an award for high-impact research by a local consortium of
scientific organizations. The project team was recognized for its innovative data center redesign
that slashed data center energy consumption for cooling by more than 90%, demonstrating how
similar facilities and the technology industry at large can save energy and reduce carbon
emissions.

A new web-based monitoring system allows the public to monitor power use in real time. The
monitoring system includes temperature, humidity, airflow, and electrical power measurements
that enable characterization of the heat gain from the equipment, the heat removed by the
cooling system, the energy consumption of the cooling equipment, and the uniformity of
conditions within the data center.

The solar phase of the project is under construction and should be complete by April 2012.
When complete, the 50kW solar array will make the data center effectively a zero-carbon facility.
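The three-year cost comparison can be reproduced from the individual figures, assuming the $54,000/year electricity saving is attributed to the CRAC side of the ledger:

```python
# Reproducing the three-year cost comparison from the report's figures
YEARS = 3
crac_total = 160_000 + 15_000 * YEARS + 54_000 * YEARS  # replacement + maintenance + extra electricity
new_total = 250_000 + 2_000 * YEARS                     # new cooling system + maintenance
print(crac_total, new_total)  # 367000 256000
```

The totals match the $367,000 vs. $256,000 quoted above, confirming the sub-three-year payback claim.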
Total reduction in carbon emissions or kW used from the original configuration
In 2009, the total power used by the data center operations was 120kW. Today, the retrofitted
data center uses about 38kW. Once the 50kW solar array is installed, the daytime carbon
footprint and energy usage should drop to zero. The old system had an annual average Power
Utilization Effectiveness (PUE) of 2.03 (PUE is the total data center power divided by the
computer hardware power used). From August-December 2011, the new system had an
average PUE of 1.28. (The PUE would have been lower but for an inefficient UPS system.)
Since October 2011, the monthly PUE has been below 1.20 due to the airside economization of
cool fall and winter temperatures. The PUE will continue to decrease through the spring, until outdoor
temperatures rise. The new cooling system used less than 2.5 kilowatts of power on average for
the month of October 2011. The average cooling power usage is almost 95% less than that
used in October 2010 (during which the IT load was slightly higher and outdoor conditions were
very similar). The average cooling power during the upcoming winter months of November
2011-February 2012 should be similar because the winters in the region are generally very cool.
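The PUE arithmetic is simple enough to sketch. The IT-load split below is hypothetical (the report gives the PUE directly, not the IT load):

```python
def pue(total_facility_kw, it_load_kw):
    """Power Utilization Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical split of the reported 38 kW total: ~29.7 kW of IT load
# yields roughly the reported PUE of 1.28
print(round(pue(38.0, 29.7), 2))  # 1.28
```

A PUE of 1.0 would mean every watt entering the facility reaches the IT equipment; the gap above 1.0 is cooling, power conversion, and other overhead.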
Case Study: EMC Durham Cloud Data Center
Overview
In today’s world, the trend is to use new technologies and products that are green to reduce
power consumption.
EMC, one of the IT companies working on this initiative to help enhance storage and IT
products in going green, selected Durham, North Carolina to build its new cloud data center, a
100% Virtual Data Center. Virtualization is a technique that significantly reduces power
consumption and is widely used in all green data centers.
Top View of EMC Durham Data Center
Why EMC decided to build Durham Data Center
EMC is the latest in a series of leading technology companies that have chosen North Carolina
as a major data center location, joining companies such as Google, Apple, and Facebook.
This new data center will replace the existing data center in Westborough, Massachusetts. EMC
migrated more than 6PB from the Westborough location to the new virtualized data center in
Durham. Though a formidable challenge, all application migrations were completed in less than
two years. Another huge challenge was completing the entire program (data center kick-off, site
selection, construction, and application migration) in less than four years.
A major reason for building the Durham Cloud Data Center was the high cost of power incurred
in the Westborough Data Center.
Expense comparison between Westborough and Durham

                      Westborough                    Durham
Owner                 Rented                         EMC is the owner
Power                 $0.12/kWh                      $0.05/kWh
Space                 15,000 sq. ft.                 20,000 sq. ft.
Physical or Virtual   Legacy physical, 32% virtual   Private cloud, 100% cloud
Providing a 46% power reduction and accommodating an increase in IT resource demand by
97%, the Durham Cloud Data Center represents a perfect example of high energy efficiency to
deliver IT as a Service.
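The power-rate difference alone is substantial. Using the $/kWh rates from the comparison table and a hypothetical 1 MW average facility draw:

```python
RATE_WESTBOROUGH = 0.12  # $/kWh, from the comparison table
RATE_DURHAM = 0.05       # $/kWh, from the comparison table

def annual_power_cost(load_kw, rate_per_kwh, hours=8760):
    """Yearly electricity cost for a constant load."""
    return load_kw * hours * rate_per_kwh

load = 1000  # hypothetical 1 MW average draw
saving = annual_power_cost(load, RATE_WESTBOROUGH) - annual_power_cost(load, RATE_DURHAM)
print(f"${saving:,.0f} saved per year")  # $613,200 saved per year
```

Location-driven rate differences of this size compound with the efficiency measures described below.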
This data center is designed to meet all requirements for reducing power consumption by
using multiple sustainable design innovations, such as:
 Achieving a high rate of energy reduction over the existing facilities
 Producing exceptional energy-efficiency savings in all operations
 Using outside air (free cooling)
 A cold-aisle containment system to manage airflow within the data center, and flywheel
UPS systems to save space and reduce reliance on batteries
 Reducing carbon emissions for both the operation and development of the facility
 Reducing or eliminating operational disruption
Design and Construction
Moving from a physical to a highly virtualized IT infrastructure is the key to cloud computing.
The Durham data center supports the requirements for cloud with architecture that leverages
the latest information infrastructure technologies.
Built with leading technology solutions from EMC, VMware, and VCE, the 100 percent highly
virtualized Durham data center is the foundation for EMC IT’s cloud vision and transition to
Infrastructure as a Service (IaaS). It delivers all requirements—flexibility, agility, and
scalability—to meet today’s needs and those of the future.
EMC concentrated on enhancing and developing a data center with sustainability as a primary
design criterion. Energy conservation was a primary focus, as were efficiency measures such as
a rooftop water collection system, use of outside air (free air-cooling) for approximately 57
percent of the year, a cold-aisle containment system to manage airflow within the data center,
and flywheel uninterruptible power supply (UPS) technology that eliminates the need for storage
batteries in the UPS systems.
EMC kept the exterior of the 45,000 square foot building and is using a three-phased modular,
“box within a box” approach to complete the facility 150,000 square feet at a time.
Cooling and power are installed independently with each module; cooling and air-handling units
are built offsite to very tight tolerances and then placed on the premises.
Cold-aisle containment is a new technique used in Durham to increase cooling control and efficiency.
Also, although the Durham data center will have many more servers and storage than before,
EMC IT managed to save 280 watts per server with virtualization and has reduced power
consumption per terabyte of storage by 82 percent.
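The 280-watt-per-server figure compounds quickly across a fleet. The server count below is hypothetical; only the per-server saving comes from the text:

```python
WATTS_SAVED_PER_SERVER = 280  # per-server saving reported for Durham

def annual_kwh_saved(num_servers, hours=8760):
    """kWh saved per year when each virtualized server draws 280 W less."""
    return num_servers * WATTS_SAVED_PER_SERVER * hours / 1000.0

# Hypothetical fleet of 1,000 virtualized servers:
print(round(annual_kwh_saved(1000)), "kWh/year")
```

At utility rates in the $0.05-0.12/kWh range quoted earlier, a thousand-server fleet saves on the order of $120,000-300,000 per year from this one measure.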
EMC has chosen VCE vBlock®, which integrates VMware software, EMC products, and Cisco
UCS blade server and network technologies for unified VM provisioning and simplified workload
management.
Durham is the first EMC data center to host all applications and platforms on VMware vSphere
as its virtual data center layer. EMC IT will be able to modify, move, or relocate workloads
easily and transparently anywhere within it for maximum performance, utilization, and efficiency.
Features
 Latest EMC and VMware tools will be used to manage the virtual infrastructure of the
data center.
 An energy-efficient HVAC cooling system combines water cooling and air cooling.
 Automatic lighting control systems are implemented to enhance energy efficiency.
 There are two separate underground pathways to deliver voice and data services; each
terminates at one of two main communication rooms for further redundancy.
Durham Data Center Architecture
It is a fully virtualized data center on VMware vSphere, built using the VCE vBlock converged
infrastructure. This virtual infrastructure provides the cloud foundation that enables workloads
to be placed and moved easily and transparently within the data center for maximum efficiency,
performance, and utilization.
The data center includes:
 Network: 4 redundant 10 Gb/s WAN links, Cisco Nexus 7000 core network
 Compute: Cisco UCS x86 servers, 100% virtualized
 Storage: EMC Symmetrix® VMAX®, EMC VNX® (vBlock)
 Backup: NetWorker®, Data Domain®, and Avamar®
As seen in the components listed above, EMC uses several backup solutions: backups are
created locally in the data center, then replicated to the remote data center to provide a high
level of data protection.
Migration Process
The Durham migration needed to be a showcase for EMC’s private cloud vision. It was a big
challenge to migrate all of the data without doing a physical move.
Traditionally, data centers are moved the same way you move to a new house. You have to
carefully pack and load everything into a lorry (or truck), drive to your new location, and unpack
and set up everything again in the new location. This operation is a major disruption. Today,
since business runs 24/7, moving the production data center should be non-disruptive.
To minimize risk and disruption to the business, EMC decided to migrate and transform to a
new private cloud, built on VCE vBlock and virtual machines on VMware vSphere. Hardware
was not relocated to the new location as part of the data center migration process.
All data and applications were moved across four redundant 10 Gb/s WAN links. Many
application physical server architectures were rebuilt to new virtualized cloud architectures.
Migrating applications and data in a virtual environment minimizes downtime. The migration
process is shown in the following diagram.
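A back-of-the-envelope calculation shows the scale of moving ~6PB over those links. The model assumes ideal, uncontended line rate with no protocol overhead:

```python
def transfer_days(petabytes, links, gbps_per_link, efficiency=1.0):
    """Idealized bulk-transfer time in days over aggregated WAN links."""
    bits = petabytes * 1e15 * 8              # decimal petabytes to bits
    rate_bps = links * gbps_per_link * 1e9 * efficiency
    return bits / rate_bps / 86400           # 86400 seconds per day

print(round(transfer_days(6, 4, 10), 1))  # about 13.9 days at full line rate
```

Real migrations run far below line rate and must fit around production traffic, which is why the application migrations above spanned many months rather than weeks.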
Awards
EMC’s Durham facility is the latest Global Center of Excellence (COE). Others are located in
India, China, Egypt, Israel, Ireland, and Russia. COEs perform services for EMC business units
including Technical Support, Customer Service, Research and Development, Engineering, and
IT.
In March 2013, the EMC COE in Durham was awarded LEED (Leadership in Energy and
Environmental Design) Gold certification. Highlights included recognition of a 34% overall
energy saving and a reduced carbon footprint of nearly 100,000,000 pounds of CO2.
The certification also recognized:
 Use of "free cooling" 57 percent of the year
 A 78 percent reduction in potable water use
Case Study: “Iceland”
Iceland wants to be your haven
Imagine having your backup data center powered by hydroelectric and geothermal energy:
water and hot springs. Sounds like something Jules Verne would have written about if he had
heard about data centers, right?
It would be fitting for Verne to then talk about the Invest in Iceland Agency, because Iceland is
where his Journey to the Center of the Earth novel began. The agency, which is part of the
Icelandic government, will soon start trying to lure companies from Europe and the U.S. to build
their backup data centers in Iceland. The attractions are stable temperatures, a stable economy,
and yes, cheap power, thanks to the country's hot springs and waterfalls.
"The price can vary highly depending on what kind of quantity and stability in electricity you
need," said Thortur Hilmarsson, general manager of the agency. "As an example, a heavy
industry with 10-to-15 megawatts can have prices of 3.5 cents per kilowatt hour. If you're a
smaller consumer, then you're looking at 6, 7, 8 cents per kilowatt hour."
About 72% of Iceland's energy consumption comes from hydroelectric and geothermal energy.
The only fossil fuels used are for cars and fishing vessels. Temperatures are cold but stable,
between 32 degrees and 55 degrees Fahrenheit throughout the year. Plus, the corporate tax
rate in Iceland is about 18%, less than half of the 39% in the U.S., according to figures from the
Congressional Budget Office.
Hilmarsson said the agency is working on a comprehensive study about just this topic. The
study will look at latency issues from Iceland to Europe and Iceland to the U.S. That said, the
agency would still have to convince companies that it is worth the extra effort to locate there.
Appendix
Robert McFarlane is a principal in charge of data center design for the international consulting
firm Shen Milsom and Wilke LLC. McFarlane has spent more than 35 years in communications
consulting, has experience in every segment of the data center industry and was a pioneer in
developing the field of building cable design. McFarlane also teaches the data center facilities
course in the Marist College Institute for Data Center Professional program, is a data center
power and cooling expert, is widely published, speaks at many industry seminars and is a
corresponding member of ASHRAE TC9.9, which publishes a wide range of industry guidelines.
Clive Longbottom is the co-founder and service director at Quocirca and has been an ITC
industry analyst for more than 15 years. Trained as a chemical engineer, he worked on anti-
cancer drugs, car catalysts, and fuel cells before moving into IT. He has worked on many office
automation projects, as well as Control of Substances Hazardous to Health, document
management and knowledge management projects.
National Snow and Ice Data Center http://nsidc.org/about/green-data-center/
Weerts, B., D. Gallaher, R. Weaver, and O. Van Geet, 2012: Green Data Center: Energy
Reduction Strategies: Airside Economization and Unique Indirect Evaporative Cooling. IEEE
Green Technologies Conference Proceedings, Tulsa.
Uptime Institute http://uptimeinstitute.com/
Uptime Institute, an independent division of The 451 Group, provides education, publications,
consulting, certifications, conferences and seminars, independent research, and thought
leadership for the enterprise data center industry and for data center professionals.
Data center selection by John Rath www.Datacenterlinks.com
U.S. Environmental Protection Agency http://www.epa.gov/greenpower/index.htm
Site Selection Online http://www.siteselection.com/issues/2002/mar/p118/
Glumac http://www.glumac.com
Leveraging Renewable Energy in Data Centers
Present and Future
Keynote Summary
Ricardo Bianchini
Department of Computer Science
Rutgers University
ricardob@cs.rutgers.edu
Best Practices for Data Centers: Lessons Learned from Benchmarking 22
Data Centers
Steve Greenberg, Evan Mills, and Bill Tschudi, Lawrence Berkeley National Laboratory
Peter Rumsey, Rumsey Engineers
Bruce Myatt, EYP Mission Critical Facilities
Rocky Mountain Institute http://www.rmi.org/sitepages/pid17.php (Copyright © 1990-2012
Rocky Mountain Institute. All rights reserved.)
Computer Weekly http://www.computerweekly.com/news/2240080415/Solving-storage-power-
and-cooling-concerns
Iceland example http://searchdatacenter.techtarget.com/news/1246999/Iceland-wants-your-
backup-data-centers
http://smma.com/project/mission-critical/emc-durham-cloud-data-center
http://www.datacenterknowledge.com/archives/2011/09/15/emc-opens-new-cloud-data-center-
in-nc/
EMC Durham Data center - Energy efficient design and construction
itblog@emc.com
EMC Durham Cloud DC, Powering EMC IT’s cloud vision ref.
https://www.emc.com/about/news/press/2013/20130314-01.htm
Biographies
Mohamed Sohail
Implementation Delivery Specialist
Mohamed has over 9 years of IT experience in operations, implementation, and support; 4 of
them with EMC. Mohamed previously worked as a Technical Support Engineer at Oracle Egypt
and was a Technical Trainer at Microsoft. Mohamed holds a B.Sc. in Computer Science from
Sapienza University of Rome, Italy, and a B.A. in Italian from Ain Shams University, Egypt.
Mohamed holds EMC Proven Professional Backup Recovery certification.
Denis Canty
Principal Test Architect
Denis is a Principal Test Architect in the Data Domain BRS division of Global External
Manufacturing (GEM). Having been in the IT industry for 9 years, Denis previously worked for
Alps Electric, a Japanese automotive electronics company. Denis earned a degree in Electronic
Engineering from Cork Institute of Technology (CIT), a Masters in Computer Science from
Dublin City University (DCU), and a Masters in Microelectronic Design from University College
Cork (UCC).
Denis holds EMC Proven Professional Information Storage and Management certification.
Omar Aboulfotoh
Technical Support Engineer
Omar is a Technical Support Engineer working in the Unified Storage Division, Global Technical
Support (GTS), EMC Egypt COE. He has been in the IT industry for 3 years.
He holds a degree in Electronics and Communications Engineering and is currently studying in
a Master's program at the Faculty of Engineering, Cairo University.
Omar holds EMC Proven Professional Specialist certification in VNX and SAN and is also VCP
and vCloud Certified.
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.
Артур Чеканов «Microframeworks» (Python Meetup)
Артур Чеканов  «Microframeworks» (Python Meetup)Артур Чеканов  «Microframeworks» (Python Meetup)
Артур Чеканов «Microframeworks» (Python Meetup)
 
Big data school demo
Big data school demoBig data school demo
Big data school demo
 
Иван Гришаев «Саблайм текст – ИДЕ моей мечты
Иван Гришаев «Саблайм текст – ИДЕ моей мечтыИван Гришаев «Саблайм текст – ИДЕ моей мечты
Иван Гришаев «Саблайм текст – ИДЕ моей мечты
 
sistema de gestión de contenidos
sistema de gestión de contenidossistema de gestión de contenidos
sistema de gestión de contenidos
 
Макс Волошин «Микросервисы на практике»
Макс Волошин «Микросервисы на практике»Макс Волошин «Микросервисы на практике»
Макс Волошин «Микросервисы на практике»
 
Стратегия и Кризисы
Стратегия и КризисыСтратегия и Кризисы
Стратегия и Кризисы
 
Building Pennsylvania's First Detector Network Part 2
Building Pennsylvania's First Detector Network Part 2Building Pennsylvania's First Detector Network Part 2
Building Pennsylvania's First Detector Network Part 2
 
Игорь Савка "Как выжить в безнадежном проекте. Личный опыт"
Игорь Савка "Как выжить в безнадежном проекте. Личный опыт"Игорь Савка "Как выжить в безнадежном проекте. Личный опыт"
Игорь Савка "Как выжить в безнадежном проекте. Личный опыт"
 
180 blue dining room training
180 blue dining room training180 blue dining room training
180 blue dining room training
 
Uses and gratification theory
Uses and gratification theoryUses and gratification theory
Uses and gratification theory
 
Stamped Bachelor eng instrumentaion and control
Stamped Bachelor eng instrumentaion and controlStamped Bachelor eng instrumentaion and control
Stamped Bachelor eng instrumentaion and control
 
Андрей Беляев "Мыслить как заказчик"
Андрей Беляев "Мыслить как заказчик"Андрей Беляев "Мыслить как заказчик"
Андрей Беляев "Мыслить как заказчик"
 
Que es el internet programacion web
Que es el internet programacion webQue es el internet programacion web
Que es el internet programacion web
 
손필상 Smtnt메시징제안서
손필상 Smtnt메시징제안서손필상 Smtnt메시징제안서
손필상 Smtnt메시징제안서
 
IT Talk smartwatches, Dmitriy Scherbina DataArt Dnepropetrovsk
IT Talk smartwatches, Dmitriy Scherbina DataArt Dnepropetrovsk IT Talk smartwatches, Dmitriy Scherbina DataArt Dnepropetrovsk
IT Talk smartwatches, Dmitriy Scherbina DataArt Dnepropetrovsk
 
Bean Plataspid
Bean PlataspidBean Plataspid
Bean Plataspid
 
Contratación electrónica y contratación informática
Contratación electrónica y contratación informáticaContratación electrónica y contratación informática
Contratación electrónica y contratación informática
 
Dc brochure vietv1 (1)
Dc brochure vietv1 (1)Dc brochure vietv1 (1)
Dc brochure vietv1 (1)
 
Леонид Шевцов «Clojure в деле»
Леонид Шевцов «Clojure в деле»Леонид Шевцов «Clojure в деле»
Леонид Шевцов «Clojure в деле»
 

Similar to A Journey to Power Intelligent IT - Big Data Employed

Datacenter ISO50001 and CoC
Datacenter ISO50001 and CoCDatacenter ISO50001 and CoC
Datacenter ISO50001 and CoCDidier Monestes
 
IBM’s Offering for a Smart, Private Cloud Sits on a Strong Foundation
IBM’s Offering for a Smart, Private Cloud  Sits on a Strong FoundationIBM’s Offering for a Smart, Private Cloud  Sits on a Strong Foundation
IBM’s Offering for a Smart, Private Cloud Sits on a Strong FoundationIBM India Smarter Computing
 
Energy efficient resource allocation007
Energy efficient resource allocation007Energy efficient resource allocation007
Energy efficient resource allocation007Divaynshu Totla
 
Lippis Energywise External Final
Lippis Energywise External FinalLippis Energywise External Final
Lippis Energywise External FinalGruene-it.org
 
Green data center_rahul ppt
Green data center_rahul pptGreen data center_rahul ppt
Green data center_rahul pptRAHUL KAUSHAL
 
Cisco Unified Computing System- sneak peak
Cisco Unified Computing System- sneak peakCisco Unified Computing System- sneak peak
Cisco Unified Computing System- sneak peakJamie Shoup
 
Grow a Greener Data Center
Grow a Greener Data CenterGrow a Greener Data Center
Grow a Greener Data CenterJamie Shoup
 
Improvements in Data Center Management
Improvements in Data Center ManagementImprovements in Data Center Management
Improvements in Data Center ManagementScottMadden, Inc.
 
UK Data Centre Capabilty Presentation Rev.A
UK Data Centre Capabilty Presentation Rev.AUK Data Centre Capabilty Presentation Rev.A
UK Data Centre Capabilty Presentation Rev.AGary Marshall
 
Improve sustainability through energy insights - Infographic
Improve sustainability through energy insights - InfographicImprove sustainability through energy insights - Infographic
Improve sustainability through energy insights - InfographicPrincipled Technologies
 
GE Total Efficiency Datacenter vFINAL
GE Total Efficiency Datacenter vFINALGE Total Efficiency Datacenter vFINAL
GE Total Efficiency Datacenter vFINALTrent Waterhouse
 
Robust Free Power management: Dell OpenManage Power Center
Robust Free Power management: Dell OpenManage Power CenterRobust Free Power management: Dell OpenManage Power Center
Robust Free Power management: Dell OpenManage Power CenterPrincipled Technologies
 
Leveraging cloud based building management systems for multi-site facilities
Leveraging cloud based building management systems for multi-site facilitiesLeveraging cloud based building management systems for multi-site facilities
Leveraging cloud based building management systems for multi-site facilitiesBassam Gomaa
 
IBM and GREEN IT; Green IT – How to Make IT Work and Save Money
IBM and GREEN IT; Green IT – How to Make IT Work and Save MoneyIBM and GREEN IT; Green IT – How to Make IT Work and Save Money
IBM and GREEN IT; Green IT – How to Make IT Work and Save MoneyIBMAsean
 
BLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMSBLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMSDaniel Bodenski
 

Similar to A Journey to Power Intelligent IT - Big Data Employed (20)

Datacenter ISO50001 and CoC
Datacenter ISO50001 and CoCDatacenter ISO50001 and CoC
Datacenter ISO50001 and CoC
 
IBM’s Offering for a Smart, Private Cloud Sits on a Strong Foundation
IBM’s Offering for a Smart, Private Cloud  Sits on a Strong FoundationIBM’s Offering for a Smart, Private Cloud  Sits on a Strong Foundation
IBM’s Offering for a Smart, Private Cloud Sits on a Strong Foundation
 
Energy efficient resource allocation007
Energy efficient resource allocation007Energy efficient resource allocation007
Energy efficient resource allocation007
 
Green IT
Green IT Green IT
Green IT
 
Lippis Energywise External Final
Lippis Energywise External FinalLippis Energywise External Final
Lippis Energywise External Final
 
Green data center_rahul ppt
Green data center_rahul pptGreen data center_rahul ppt
Green data center_rahul ppt
 
Cisco Unified Computing System- sneak peak
Cisco Unified Computing System- sneak peakCisco Unified Computing System- sneak peak
Cisco Unified Computing System- sneak peak
 
Green it
Green itGreen it
Green it
 
Grow a Greener Data Center
Grow a Greener Data CenterGrow a Greener Data Center
Grow a Greener Data Center
 
Improvements in Data Center Management
Improvements in Data Center ManagementImprovements in Data Center Management
Improvements in Data Center Management
 
The next wave of GreenIT
The next wave of GreenITThe next wave of GreenIT
The next wave of GreenIT
 
UK Data Centre Capabilty Presentation Rev.A
UK Data Centre Capabilty Presentation Rev.AUK Data Centre Capabilty Presentation Rev.A
UK Data Centre Capabilty Presentation Rev.A
 
Google ppt. mis
Google ppt. misGoogle ppt. mis
Google ppt. mis
 
Improve sustainability through energy insights - Infographic
Improve sustainability through energy insights - InfographicImprove sustainability through energy insights - Infographic
Improve sustainability through energy insights - Infographic
 
GE Total Efficiency Datacenter vFINAL
GE Total Efficiency Datacenter vFINALGE Total Efficiency Datacenter vFINAL
GE Total Efficiency Datacenter vFINAL
 
Green Computing
Green  ComputingGreen  Computing
Green Computing
 
Robust Free Power management: Dell OpenManage Power Center
Robust Free Power management: Dell OpenManage Power CenterRobust Free Power management: Dell OpenManage Power Center
Robust Free Power management: Dell OpenManage Power Center
 
Leveraging cloud based building management systems for multi-site facilities
Leveraging cloud based building management systems for multi-site facilitiesLeveraging cloud based building management systems for multi-site facilities
Leveraging cloud based building management systems for multi-site facilities
 
IBM and GREEN IT; Green IT – How to Make IT Work and Save Money
IBM and GREEN IT; Green IT – How to Make IT Work and Save MoneyIBM and GREEN IT; Green IT – How to Make IT Work and Save Money
IBM and GREEN IT; Green IT – How to Make IT Work and Save Money
 
BLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMSBLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMS
 

Recently uploaded

NO1 Certified Rohani Amil In Islamabad Amil Baba in Rawalpindi Kala Jadu Amil...
NO1 Certified Rohani Amil In Islamabad Amil Baba in Rawalpindi Kala Jadu Amil...NO1 Certified Rohani Amil In Islamabad Amil Baba in Rawalpindi Kala Jadu Amil...
NO1 Certified Rohani Amil In Islamabad Amil Baba in Rawalpindi Kala Jadu Amil...Amil baba
 
Düsseldorf U学位证,杜塞尔多夫大学毕业证书1:1制作
Düsseldorf U学位证,杜塞尔多夫大学毕业证书1:1制作Düsseldorf U学位证,杜塞尔多夫大学毕业证书1:1制作
Düsseldorf U学位证,杜塞尔多夫大学毕业证书1:1制作f3774p8b
 
Slide deck for the IPCC Briefing to Latvian Parliamentarians
Slide deck for the IPCC Briefing to Latvian ParliamentariansSlide deck for the IPCC Briefing to Latvian Parliamentarians
Slide deck for the IPCC Briefing to Latvian Parliamentariansipcc-media
 
9873940964 Full Enjoy 24/7 Call Girls Near Shangri La’s Eros Hotel, New Delhi
9873940964 Full Enjoy 24/7 Call Girls Near Shangri La’s Eros Hotel, New Delhi9873940964 Full Enjoy 24/7 Call Girls Near Shangri La’s Eros Hotel, New Delhi
9873940964 Full Enjoy 24/7 Call Girls Near Shangri La’s Eros Hotel, New Delhidelih Escorts
 
Thessaly master plan- WWF presentation_18.04.24.pdf
Thessaly master plan- WWF presentation_18.04.24.pdfThessaly master plan- WWF presentation_18.04.24.pdf
Thessaly master plan- WWF presentation_18.04.24.pdfTheaMan11
 
Delivering nature-based solution outcomes by addressing policy, institutiona...
Delivering nature-based solution outcomes by addressing  policy, institutiona...Delivering nature-based solution outcomes by addressing  policy, institutiona...
Delivering nature-based solution outcomes by addressing policy, institutiona...CIFOR-ICRAF
 
Science, Technology and Nation Building.pptx
Science, Technology and Nation Building.pptxScience, Technology and Nation Building.pptx
Science, Technology and Nation Building.pptxgrandmarshall132
 
Call Girls Sarovar Portico Naraina Hotel, New Delhi 9873777170
Call Girls Sarovar Portico Naraina Hotel, New Delhi 9873777170Call Girls Sarovar Portico Naraina Hotel, New Delhi 9873777170
Call Girls Sarovar Portico Naraina Hotel, New Delhi 9873777170simranguptaxx69
 
WindEurope - Wind energy in Europe - 2023.pdf
WindEurope - Wind energy in Europe - 2023.pdfWindEurope - Wind energy in Europe - 2023.pdf
WindEurope - Wind energy in Europe - 2023.pdfShingoAramaki
 
global trend Chapter 1.presentation power point
global trend Chapter 1.presentation power pointglobal trend Chapter 1.presentation power point
global trend Chapter 1.presentation power pointyohannisyohannis54
 
5 Wondrous Places You Should Visit at Least Once in Your Lifetime (1).pdf
5 Wondrous Places You Should Visit at Least Once in Your Lifetime (1).pdf5 Wondrous Places You Should Visit at Least Once in Your Lifetime (1).pdf
5 Wondrous Places You Should Visit at Least Once in Your Lifetime (1).pdfsrivastavaakshat51
 
Hi-Fi Call Girls Valsad Ahmedabad 7397865700
Hi-Fi Call Girls Valsad Ahmedabad 7397865700Hi-Fi Call Girls Valsad Ahmedabad 7397865700
Hi-Fi Call Girls Valsad Ahmedabad 7397865700Call Girls Mumbai
 
EARTH DAY Slide show EARTHDAY.ORG is unwavering in our commitment to end plas...
EARTH DAY Slide show EARTHDAY.ORG is unwavering in our commitment to end plas...EARTH DAY Slide show EARTHDAY.ORG is unwavering in our commitment to end plas...
EARTH DAY Slide show EARTHDAY.ORG is unwavering in our commitment to end plas...Aqsa Yasmin
 
办理学位证(KU证书)堪萨斯大学毕业证成绩单原版一比一
办理学位证(KU证书)堪萨斯大学毕业证成绩单原版一比一办理学位证(KU证书)堪萨斯大学毕业证成绩单原版一比一
办理学位证(KU证书)堪萨斯大学毕业证成绩单原版一比一F dds
 

Recently uploaded (20)

Model Call Girl in Rajiv Chowk Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Rajiv Chowk Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Rajiv Chowk Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Rajiv Chowk Delhi reach out to us at 🔝9953056974🔝
 
Sexy Call Girls Patel Nagar New Delhi +918448380779 Call Girls Service in Del...
Sexy Call Girls Patel Nagar New Delhi +918448380779 Call Girls Service in Del...Sexy Call Girls Patel Nagar New Delhi +918448380779 Call Girls Service in Del...
Sexy Call Girls Patel Nagar New Delhi +918448380779 Call Girls Service in Del...
 
PLANTILLAS DE MEMORAMA CIENCIAS NATURALES
PLANTILLAS DE MEMORAMA CIENCIAS NATURALESPLANTILLAS DE MEMORAMA CIENCIAS NATURALES
PLANTILLAS DE MEMORAMA CIENCIAS NATURALES
 
NO1 Certified Rohani Amil In Islamabad Amil Baba in Rawalpindi Kala Jadu Amil...
NO1 Certified Rohani Amil In Islamabad Amil Baba in Rawalpindi Kala Jadu Amil...NO1 Certified Rohani Amil In Islamabad Amil Baba in Rawalpindi Kala Jadu Amil...
NO1 Certified Rohani Amil In Islamabad Amil Baba in Rawalpindi Kala Jadu Amil...
 
Düsseldorf U学位证,杜塞尔多夫大学毕业证书1:1制作
Düsseldorf U学位证,杜塞尔多夫大学毕业证书1:1制作Düsseldorf U学位证,杜塞尔多夫大学毕业证书1:1制作
Düsseldorf U学位证,杜塞尔多夫大学毕业证书1:1制作
 
Call Girls In R.K. Puram 9953056974 Escorts ServiCe In Delhi Ncr
Call Girls In R.K. Puram 9953056974 Escorts ServiCe In Delhi NcrCall Girls In R.K. Puram 9953056974 Escorts ServiCe In Delhi Ncr
Call Girls In R.K. Puram 9953056974 Escorts ServiCe In Delhi Ncr
 
Slide deck for the IPCC Briefing to Latvian Parliamentarians
Slide deck for the IPCC Briefing to Latvian ParliamentariansSlide deck for the IPCC Briefing to Latvian Parliamentarians
Slide deck for the IPCC Briefing to Latvian Parliamentarians
 
9873940964 Full Enjoy 24/7 Call Girls Near Shangri La’s Eros Hotel, New Delhi
9873940964 Full Enjoy 24/7 Call Girls Near Shangri La’s Eros Hotel, New Delhi9873940964 Full Enjoy 24/7 Call Girls Near Shangri La’s Eros Hotel, New Delhi
9873940964 Full Enjoy 24/7 Call Girls Near Shangri La’s Eros Hotel, New Delhi
 
Thessaly master plan- WWF presentation_18.04.24.pdf
Thessaly master plan- WWF presentation_18.04.24.pdfThessaly master plan- WWF presentation_18.04.24.pdf
Thessaly master plan- WWF presentation_18.04.24.pdf
 
young call girls in Janakpuri🔝 9953056974 🔝 escort Service
young call girls in Janakpuri🔝 9953056974 🔝 escort Serviceyoung call girls in Janakpuri🔝 9953056974 🔝 escort Service
young call girls in Janakpuri🔝 9953056974 🔝 escort Service
 
Delivering nature-based solution outcomes by addressing policy, institutiona...
Delivering nature-based solution outcomes by addressing  policy, institutiona...Delivering nature-based solution outcomes by addressing  policy, institutiona...
Delivering nature-based solution outcomes by addressing policy, institutiona...
 
Science, Technology and Nation Building.pptx
Science, Technology and Nation Building.pptxScience, Technology and Nation Building.pptx
Science, Technology and Nation Building.pptx
 
Call Girls Sarovar Portico Naraina Hotel, New Delhi 9873777170
Call Girls Sarovar Portico Naraina Hotel, New Delhi 9873777170Call Girls Sarovar Portico Naraina Hotel, New Delhi 9873777170
Call Girls Sarovar Portico Naraina Hotel, New Delhi 9873777170
 
WindEurope - Wind energy in Europe - 2023.pdf
WindEurope - Wind energy in Europe - 2023.pdfWindEurope - Wind energy in Europe - 2023.pdf
WindEurope - Wind energy in Europe - 2023.pdf
 
global trend Chapter 1.presentation power point
global trend Chapter 1.presentation power pointglobal trend Chapter 1.presentation power point
global trend Chapter 1.presentation power point
 
5 Wondrous Places You Should Visit at Least Once in Your Lifetime (1).pdf
5 Wondrous Places You Should Visit at Least Once in Your Lifetime (1).pdf5 Wondrous Places You Should Visit at Least Once in Your Lifetime (1).pdf
5 Wondrous Places You Should Visit at Least Once in Your Lifetime (1).pdf
 
FULL ENJOY Call Girls In kashmiri gate (Delhi) Call Us 9953056974
FULL ENJOY Call Girls In  kashmiri gate (Delhi) Call Us 9953056974FULL ENJOY Call Girls In  kashmiri gate (Delhi) Call Us 9953056974
FULL ENJOY Call Girls In kashmiri gate (Delhi) Call Us 9953056974
 
Hi-Fi Call Girls Valsad Ahmedabad 7397865700
Hi-Fi Call Girls Valsad Ahmedabad 7397865700Hi-Fi Call Girls Valsad Ahmedabad 7397865700
Hi-Fi Call Girls Valsad Ahmedabad 7397865700
 
EARTH DAY Slide show EARTHDAY.ORG is unwavering in our commitment to end plas...
EARTH DAY Slide show EARTHDAY.ORG is unwavering in our commitment to end plas...EARTH DAY Slide show EARTHDAY.ORG is unwavering in our commitment to end plas...
EARTH DAY Slide show EARTHDAY.ORG is unwavering in our commitment to end plas...
 
办理学位证(KU证书)堪萨斯大学毕业证成绩单原版一比一
办理学位证(KU证书)堪萨斯大学毕业证成绩单原版一比一办理学位证(KU证书)堪萨斯大学毕业证成绩单原版一比一
办理学位证(KU证书)堪萨斯大学毕业证成绩单原版一比一
 

A Journey to Power Intelligent IT - Big Data Employed

  • 1. JOURNEY TO POWER INTELLIGENT IT—BIG DATA EMPLOYED Mohamed Sohail Implementation Specialist, EMC Denis Canty Technical Test Architect, EMC Omar Aboulfotoh Technical Support Engineer, EMC
  • 2. 2014 EMC Proven Professional Knowledge Sharing 2 Table of Contents Executive Summary ................................................................................................................... 3 Preface ...................................................................................................................................... 4 Steps to a Power Intelligent Data Center.................................................................................... 7 Power Smart Analytic Software .............................................................................................. 7 Virtualization Impact ..............................................................................................................11 Sustainable Power Cost Savings...........................................................................................12 Location and Design Considerations .....................................................................................14 Air Management....................................................................................................................15 Right-Sizing and Energy Star ................................................................................................17 NSIDC “National Snow and Ice Data Center” Project: ...........................................................19 Case Study: EMC Durham Cloud Data Center..........................................................................23 Overview ...............................................................................................................................23 Why EMC decided to build Durham Data Center?.................................................................24 Design and Construction .......................................................................................................25 Durham Data Center Architecture..........................................................................................26 Migration 
Process..................................................................................................................27 Awards ..................................................................................................................................28 Case Study: “Iceland”................................................................................................................29 Appendix...................................................................................................................................30 Biographies...............................................................................................................................32 Disclaimer: The views, processes, or methodologies published in this article are those of the authors. They do not necessarily reflect EMC Corporation’s views, processes, or methodologies.
  • 3. 2014 EMC Proven Professional Knowledge Sharing 3 Executive Summary Being environmentally sustainable is a big challenge in this global recession and stretched economy, not just for saving cash, but also for saving the globe. No one can ignore the tremendous growth of the data exchanged through the internet and the cloud, yet not many people are aware of what happens in the backend of this process or what lies behind a data center. Sustainable data centers ensure that every watt IT consumes delivers the greatest business benefit. Today’s data centers have become increasingly large and consume a large amount of energy, leading to greater carbon emissions. It has become critical for data center architects to consider best practices when building a new data center. Organizations such as ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers) and LEED (Leadership in Energy and Environmental Design) have set rules that must be followed to be compliant and sustainable. This Knowledge Sharing article concentrates on best practices where “Cloud meets Big Data” in our journey to optimize power efficiency. We highlight a winning idea from the EMC annual innovation showcase 2012, “EMC Power Smart: Energy Usage Big Data Model for EMC Products Worldwide”, a smart energy monitoring and reporting system. We also illustrate best practices for modern, sustainability-compliant data centers which will help us in our journey towards green IT. The article also introduces the importance of air management, free cooling, and the impact of virtualization from an inside-steps perspective. Additionally, we discuss data center location and design considerations. Lastly, we introduce some role models that applied innovative technologies to achieve outstanding results, such as the National Snow and Ice Data Center and the EMC Durham Center of Excellence data center.
  • 4. 2014 EMC Proven Professional Knowledge Sharing 4 Preface One of the modern challenges of our day is saving our earth and being sustainable. A very important factor affecting the decisions of IT managers is power consumption and carbon emissions. A power intelligent data center is defined as one in which the mechanical, lighting, electrical, and computer systems are designed for maximum energy efficiency and minimum environmental impact. The graph below shows some of the reasons why we should be sustainable: Figure 1-1: Reasons for adopting sustainable solutions While many IT shops operate low-profile data centers safe from public scrutiny, for many industries data centers are among the largest sources of greenhouse gas emissions, and companies may be required to report data center carbon emissions in the near future. The majority of data centers, both old and new, continue to waste energy at a prodigious rate. There are many, including the big players, who have made gigantic strides for both themselves and the industry in developing energy efficient designs and operations that would have been
  • 5. 2014 EMC Proven Professional Knowledge Sharing 5 considered impossible just a few years ago, but they are still the exception, although not quite as rare an exception as they once were. “People are almost universally unaware of what it really costs to transmit, store and retrieve the incalculable amount of data they create when they widely distribute and replicate all the stuff we all now do on a daily basis.” Robert McFarlane, data center power and cooling expert. The power-waste problem has three major causes. One is running processors 24/7/365 that do little to nothing. Most servers have the ability to run in an energy-saving mode, indicating that the manufacturing industry has tried to take steps to reduce the problem. Still, IT managers are afraid to invoke it out of fear that something won't respond as quickly as expected if it takes a couple of extra nanoseconds to reach full power. However, there are two bigger reasons that are, frankly, unforgivable: 1. Most companies are willing to consider energy conservation only if there is no additional capital cost. Long-term savings, both of energy and cost, may be given lip service, but in the end they often become much less important. This is partly due to our being held hostage to quarterly results, and partly because energy conservation is still driven almost entirely by dollars rather than environmental responsibility. A high percentage of data centers still don't even know their power consumption or costs. 2. Too many engineering firms don't really understand energy efficient data center design, which is different from the design of other energy efficient buildings, particularly where mission-critical operations are involved. There is no excuse today for the power and cooling infrastructures of new data centers to remain so energy inefficient, even if their owners are unwilling to pay for the greater efficiencies that could be achieved with today's equipment and techniques.
Changing the perception that data transmission is "free" is a matter of public education, which articles like this may at least start to provide to a small number of people. However, as long as there is resistance even to using energy-efficient light bulbs in our homes, we are unlikely to limit our indulgence in all the "fun stuff" to which we have become addicted.
  • 6. 2014 EMC Proven Professional Knowledge Sharing 6 Some facts about data centers "The industry's dirty secret? I've been writing and talking about it for more than 10 years now, as have many other commentators. Sure, IT has been trying to keep it quiet, as they know their organizations would not be very happy to find that 90% of the energy going in to a data center is just wasted. Without a good plan for how to move this below the 50% level, only a brave IT director would say, 'Hey, guess what? I'm in charge of a system that is less than 10% efficient, and I don't have a clue what to do about it.'" Clive Longbottom, Service Director at Quocirca. Longbottom’s quote was made pre-virtualization and cloud. Now, virtualized systems can easily run at greater than 50% utilization rates, and cloud systems at greater than 70%. Cooling systems no longer have to keep a data center at less than 68 degrees Fahrenheit; data centers can now run at close to 86 degrees Fahrenheit with little problem. UPS systems are far more efficient now and can often replace the need for generators to fire up when a power outage lasts only a short period of time. All of these aspects mean that not only can the IT hardware be more effective, but the amount of energy used to support the data center can also be minimized. Consequently, a complete rework of a data center can move its PUE from greater than 2.5 to less than 1.5, a massive shift in data center efficiency. However, there is a downside to this: carrying out such a rework means throwing out everything that is already there. Few organizations would be happy to throw out several million dollars of equipment and buy several million dollars more. Therefore, the majority of data centers are in a state of change at the moment. Old hardware is still in service as new hardware is being brought in, with the old equipment planned for replacement as its effective end of life is reached.
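The PUE (Power Usage Effectiveness) shift described above follows from a simple ratio of total facility power to IT equipment power. A minimal sketch, using illustrative load figures rather than measurements from any real site:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    An ideal facility approaches 1.0; legacy sites often exceed 2.5."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical loads: a legacy facility and the same IT load after a rework
# that cuts cooling and power-distribution overhead.
legacy = pue(total_facility_kw=2600.0, it_equipment_kw=1000.0)
reworked = pue(total_facility_kw=1400.0, it_equipment_kw=1000.0)
print(f"legacy PUE = {legacy:.1f}, reworked PUE = {reworked:.1f}")
```

The same IT load served with 1,200 kW less overhead takes PUE from 2.6 down to 1.4, which is the scale of improvement the rework argument above depends on.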
As such, over a period of five to 10 years a major shift will take place in the efficiency of private data centers as they are reworked to adopt newer technologies. New data centers will be built to LEED (Leadership in Energy and Environmental Design) and ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers) standards, and few will have PUEs above 1.3. In addition, there is an increasing move towards co-location: sharing the data center facility and making the most of the economies and effectiveness of scale of the UPSs and generators in a massive facility, even where the equipment being installed for a single customer
  • 7. 2014 EMC Proven Professional Knowledge Sharing 7 is relatively small. More co-location providers are now insisting that customer racks and rows be engineered to the latest standards to optimize the overall effectiveness of the facility and the equipment it houses. Then there is cloud. A cloud provider must be efficient, or its business model will not work. It is simply smart business to create a data center that has a low PUE and whose equipment is run at the most optimal utilization rates. Finally, there are the government mandates themselves. For example, the Carbon Reduction Commitment Energy Efficiency Scheme in the United Kingdom has caused many organizations to focus on energy efficiency to avoid large "taxes" on their carbon emissions. Steps to a Power Intelligent Data Center “A large data center can consume as much power as 8,200 homes. Increasing its efficiency by 30% is equivalent to taking 4,700 cars off of the road.” Dick Glumac, founder of Glumac, a full-service consulting engineering firm. The idea is not a complicated one. It contributes on two fronts: 1. Working smartly with the most important elements of the data center from the ‘inside’, the storage and the servers, an approach that may be extended to the rest of the elements depending on the nature of the equipment. Running smart software and reporting tools that employ big data analytics helps reduce the power consumed by data center components. 2. Working smartly on the data center from the ‘outside’, i.e. choosing the data center location and using alternative power sources, and understanding how this affects the power consumption of the data center. Power Smart Analytic Software The data explosion evident today has shown that data is the currency of the digital age; data can be classed as the new electricity. In the 19th century, electricity turned from being a science experiment into a necessity for human existence and advancement. The same can be said for data today.
A major application for utilizing data is in monitoring power consumption of our systems. Our products need to become smarter in how they are profiled in relation to energy use.
A proposition is to provide tools that measure power consumption at the device level so that managers can manage energy the way they manage other aspects of storage. The kilowatt and the kilowatt-hour are standard energy metrics. Applied to storage, they give us an accurate way to measure kilowatts per terabyte. A more common metric at this point is kilowatts per rack. Due to increased density, data centers today are pushing beyond 4 kilowatts per rack; at 6 kilowatts per rack, they are getting into a heat danger zone. An individual drive uses 5W to 15W of power depending on its capacity, rotation speed, form factor, and operating state. However, "you can't just multiply the number of drives in an array by some average power rating to get a total," says Mark Greenlaw, senior director, Storage Marketing at EMC. Power consumption of the array is more than the sum of the power used by the individual drives; controllers and other components also consume power. This leads to storage density measurements of terabytes per square foot and terabytes per kilowatt. Storage managers must also consider SAN switch power and cooling. Switches consume less power in the data center than servers or storage, mainly because there are relatively fewer switches. Still, the power consumption of a switch is significant. "A large switch will use 1,000W [1 kW] or more," says Ardeshir Mohammadian, senior power systems engineer at Brocade Communications Systems. Higher port density and performance increase switch power and cooling consumption. Thus, don't be surprised to see new power measurements such as kilowatts per port and kilowatts per gigabyte. With such accurate data in hand, backend smart software can determine which components consume the most power and apply best practices to reduce it, or extract the peak power saving from each device based on the real data acquired from the data center components.
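The metrics above can be sketched with a few lines of back-of-the-envelope arithmetic. This is a purely illustrative sketch, not an EMC tool: the per-drive wattages and the controller overhead factor are assumed example values, not vendor data.

```python
# Illustrative estimate of array power and the kilowatts-per-terabyte metric.
# Drive wattages and the controller overhead factor are assumed examples.

DRIVE_WATTS = {"15k_3.5in": 15.0, "10k_2.5in": 8.0, "7200_3.5in": 10.0}

def array_power_watts(drives, controller_overhead=1.35):
    """Sum per-drive power, then scale by an assumed overhead factor,
    since array power exceeds the sum of its drives (controllers, fans)."""
    drive_total = sum(DRIVE_WATTS[kind] * count for kind, count in drives.items())
    return drive_total * controller_overhead

def kw_per_terabyte(total_watts, usable_tb):
    return (total_watts / 1000.0) / usable_tb

watts = array_power_watts({"15k_3.5in": 120, "7200_3.5in": 60})
print(f"Array draw: {watts:.0f} W")
print(f"Metric: {kw_per_terabyte(watts, usable_tb=200):.4f} kW/TB")
```

The same pattern extends naturally to the other metrics mentioned, such as kilowatts per rack or per port.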
The data can be collected in a private cloud on the customer's side and then reported to a specialized service provider, i.e. EMC, which develops ways to run the arrays optimally and reach the best state of power consumption. Analysis of the current configuration of each device reflects the customer's power consumption and current device configuration. EMC experts can analyze the retrieved data further to modify the design architecture, microcode, and the installed software on these arrays to reduce power consumption and maximize power utilization, in line with the global initiative of going green.
How data center managers interact with systems in regard to power consumption metrics must evolve. Semi self-managing systems should be researched, whereby the system manages a portion of its power consumption based on a series of readings taken from the system itself. System power consumption depends on a hybrid mesh of interactions between software and hardware components. Suppose that the data collected were to include sensor readings from PSUs, C-state analysis of CPUs, wake-up cause analysis of operating systems, and power consumption increases from software execution. Intelligent algorithms could then consider historical performance and system/engineer interaction to adjust lower-level system settings and reduce the power consumption of the system. These algorithms could be system- or cloud-hosted, but their data flow would come from a custom-built cloud-based data warehouse. By utilizing the data warehouse further, it is possible to trend system performance based on environmental, demographic, and time-based metrics to predict future outages and inefficiencies. This would further enhance the ROI model for a potential customer. Naturally, this would require building up the historical data of the system, but over time, the systems themselves would become more power intelligent. One area of concern is that having machine-based algorithms adjust system settings could lead to inefficiencies and even critical processes being closed. However, by utilizing a priority system, whereby both software processes and hardware are tagged as critical, the algorithms would not have control over them. The idea of having a system energy usage data warehouse also offers the potential for EMC to group and classify its customers' data centers into pods, where similar customers could then adapt system settings that already exist to their own architecture.
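The priority-tagging safeguard described above can be sketched as a small filter around a tuning loop. This is a hypothetical illustration only: the setting names, the critical tags, and the utilization threshold are all invented for the example.

```python
# Hypothetical sketch of the semi self-managing idea: tuning algorithms may
# only adjust settings that are NOT tagged critical. Names and thresholds
# are invented for illustration.

CRITICAL = {"cache_mirroring", "raid_rebuild_rate"}   # tagged critical: hands off

def propose_adjustments(utilization_samples, settings):
    """Suggest lower-power settings after sustained low utilization,
    passing critical settings through untouched."""
    proposed = dict(settings)
    avg = sum(utilization_samples) / len(utilization_samples)
    if avg < 0.20:                                    # sustained low load
        for key, low_power_value in (("cpu_governor", "powersave"),
                                     ("drive_spin_down_min", 30)):
            if key in proposed and key not in CRITICAL:
                proposed[key] = low_power_value
    return proposed

settings = {"cpu_governor": "performance",
            "drive_spin_down_min": 0,
            "cache_mirroring": "on"}
print(propose_adjustments([0.05, 0.10, 0.12], settings))
```

Because `cache_mirroring` is tagged critical, the algorithm can never modify it, regardless of what the historical data suggests.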
Given that a large percentage of software executed on EMC systems is custom EMC software, the concept of green software can be adopted. This would enable systems to provide feedback to our development teams on energy use by our software applications, which leads to greener software and increased software value for our customers.

Benefits of this approach:
• Accurately determine the power consumption of every EMC product using a Big Data approach to collect and analyze the necessary data to reach optimal results through predictive means or otherwise.
• Measure the efficiency of the architecture design of the devices with different configurations. This is a challenge with a full configuration, as many factors affect the measured power consumption.
• Reduce energy use by our products + reduce data center cooling power = cost reduction for our customers. This will help open the door for a new generation of energy-efficient products.
• Leverage innovation in EMC's technologies to help conserve power consumed by EMC and non-EMC products for customers. Future customers will also have a vision of the expected power savings and benefits of using EMC's "cloud and big data approach", giving EMC priority when it comes to data center planning.
• The customer will consider the TCO of the products, not only the initial price.
• This will help customers become sustainable, decrease emissions, and be compliant with the rules that contribute to limiting global warming.

Figure 1-2: EMC Power Smart Solution Illustration (power consumption metric data flows from the customer through EMC Power Smart software to the EMC Power Smart Cloud with analytics, producing a Power Smart customer report and a Power Smart developer report for EMC software developers)

For further insight, please see EMC Power Smart – Sustainability Award Winner at the 2012 Innovation Showcase – Innovator: Denis Canty MIEI BEng MENC MEngSc
Virtualization Impact

A virtualization and consolidation project is often a step in the right direction toward sustainable computing. Research indicates that a server often utilizes only between 5 and 15% of its capacity to service one application. With appropriate analysis and consolidation, many of these low-utilization devices can be combined into a single physical server, consuming only a fraction of the power of the original devices and saving on costs, as well as taking a step toward a more environmentally friendly data center environment. The basic concept of virtualization is simple: encapsulate computing resources and run them on shared physical infrastructure in such a way that each appears to exist in its own separate physical environment. This is accomplished by treating storage and computing resources as an aggregate pool which networks, systems, and applications can leverage on an as-needed basis. Virtualization drives business agility by simplifying business infrastructure to create a more dynamic and flexible data center. Virtualization is a key strategy to reduce data center power. With virtualization, one physical server hosts multiple virtual servers. Virtualization enables data centers to consolidate their physical server infrastructure by hosting multiple virtual servers on a smaller number of more powerful servers, using less electricity and simplifying the data center. Besides improving hardware use, virtualization reduces data center floor space, makes better use of computing
power, and reduces the data center's energy demands. Many enterprises are using virtualization to curb the runaway energy consumption of data centers.

Sustainable Power Cost Savings

Figure 1-3: Cost influence curve

The data center industry is on the rise again, but with a different focus. In the '90s, it was construction speed; this time around it is operating efficiency. Data center owners have had to learn the hard way that operating costs can quickly outweigh construction costs. Glumac has developed a new data center design process with the efficiency issue at its core, which translates into large savings for its clients. Large corporations are moving away from piecemeal green activities and are adopting broader strategies to cope with the environmental issues that affect their business. For
the IT director this means less work on isolated green "hobbies", and more joined-up thinking with peers to create sustainable policies. Geoff Lane, partner, sustainable business solutions at accounting and consulting firm PricewaterhouseCoopers and former head of environmental policy at the Confederation of British Industry, reports that more clients are seeking advice on how to benchmark their green activity. "Companies regularly come to us and say, 'We have recycling programs or duplex printing policies in place but we want to raise our game'." A fresh focus for IT departments in all organizations is how to make data centers environmentally sustainable. Computing power per square foot may have increased with the use of multicore chips and blades, but this concentration of hardware also generates more heat and the problem of how to dissipate it. According to analyst firm Gartner, most large enterprises' IT departments spend about 5% of their total IT budgets on energy. This could rise by two to three times within the next five years. One way to decrease power consumption is to better integrate facilities teams and engineers with the IT department. "The typical engineer does not look past the power supply or the gateway to the IT piece," says Patrick Fogarty, director of consulting engineering practice, Norman Disney and Young, and a speaker at Datacenter Dynamics' Next Big Datacenter Challenge energy summit in London in February. Similarly, the IT team is predisposed to consume all available power to keep its applications running, according to several delegates at the summit. "If we could do it all over again without the legacy datacenter architecture, we would take a more holistic view. From the CPU to the actual transmission, there are a lot of inefficiencies leaked through cabling, plus the massive inefficiencies in the ways we cool IT," says Fogarty.
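The consolidation arithmetic implied by the virtualization discussion above can be sketched quickly. All figures here are assumed examples, not measured data: the server count, wattages, and the packing headroom are invented for illustration.

```python
import math

# Illustrative consolidation estimate: N lightly utilized physical servers
# (5-15% busy, per the research cited above) are packed onto fewer hosts.
# Wattages and the 70% per-host packing headroom are assumed examples.

def consolidation_savings(server_count, avg_utilization,
                          old_server_watts=400, host_watts=600,
                          host_capacity=0.70):
    """Return (hosts_needed, old_kw, new_kw) for a simple packing model."""
    total_work = server_count * avg_utilization       # in "server equivalents"
    hosts_needed = math.ceil(total_work / host_capacity)
    old_kw = server_count * old_server_watts / 1000
    new_kw = hosts_needed * host_watts / 1000
    return hosts_needed, old_kw, new_kw

hosts, old_kw, new_kw = consolidation_savings(100, 0.10)
print(f"{hosts} hosts replace 100 servers: {old_kw:.0f} kW -> {new_kw:.1f} kW")
```

Even with generous headroom, the model shows why consolidating 10%-utilized servers cuts the power draw to a fraction of the original.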
CIOs and their staff in many organizations are turning to the data center to try to plug these leaks. "One of the biggest issues around power consumption in IT is the datacenter," says Ben Booth, global chief technology officer at research firm Ipsos, which has data centers scattered around the globe, ranging from vast server farms to machines stuffed into back offices. Booth is
reviewing ways to consolidate these in order to reduce the bill, reduce numbers, and provide a round-the-clock service to customers.

Location and Design Considerations

“Site selection can mean the difference between a data center costing $3 million or $11 million a year in utility costs." – Chien Si Harriman, energy modeling expert

When a company buys land, it is important to consider the property's suitability to house a server environment. The most desirable location is one that supports the data center site selection objectives: to safeguard data server equipment and accommodate growth and change of the data center. The location of a data center can be as critical as the services offered there. Many companies need access to their servers on a 24/7/365 basis. Data center requirements have changed dramatically in the last few years. Earlier, the main concern in choosing a location for a data center revolved around network accessibility, as the fiber networks connecting data centers were concentrated in the main cities. This is still a concern; however, the growth of high-speed network infrastructure and remote access capabilities now allows data centers to be built in almost any location. Nevertheless, selecting the physical location of the data center still forms an integral part of data center strategy, keeping in mind the many other factors that have a bearing on the organization's total cost of ownership (TCO). Data center location continues to play a pivotal role in deciding operational costs and ensuring smooth functioning. An improper data center location not only results in high operating costs but may also lead to loss of revenue due to disruptions in data center operations. The other consideration is power. With power consumption on the rise, blackouts are becoming a common problem in areas with poor power infrastructure.
As data centers require enormous amounts of power—failure of which can cause unexpected downtime—availability of abundant low cost power is an important consideration. Companies today are looking at alternative sources of energy such as hydroelectric power and biofuel that significantly reduce operational costs.
Power-intelligent data centers are an obvious answer to efficient power management as environmental consciousness has become a key corporate parameter. Besides, data centers built in cooler climates can reduce costs because outside air can be used to keep them cool.

Air Management

Improving "air management"—optimizing the delivery of cool air and the collection of waste heat—can involve many design and operational practices. Air cooling improvements can often be realized by addressing:
• Short-circuiting of heated air over the top of or around server racks.
• Cooled air short-circuiting back to air conditioning units through openings in raised floors, such as cable openings or misplaced floor tiles with openings.
• Misplacement of raised-floor air discharge tiles.
• Poor location of computer room air conditioning units.
• Inadequate ceiling height or an undersized hot-air return plenum.
• Air blockages, as often happens with piping or large quantities of cabling under raised floors.
• Openings in racks allowing air bypass (“short-circuit”) from hot areas to cold areas.
• Poor airflow through racks containing IT equipment due to restrictions in the rack structure.
• IT equipment with side or top air discharge adjacent to front-to-rear discharge configurations.
• Inappropriate under-floor pressurization, either too high or too low.
The general goal for achieving better air management should be to minimize or eliminate inadvertent mixing between the cooling air supplied to the IT equipment and the hot air rejected from the equipment. Air distribution in a well-designed system can reduce operating costs, reduce investment in HVAC (heating, ventilation, and air conditioning) equipment, allow
for increased utilization, and improve reliability by reducing processing interruptions or equipment degradation due to overheating.

Figure 1-4: Data Center Design and Air Flow

Solutions to common air distribution problems include:
• Use of "hot aisle and cold aisle" arrangements, where racks of computers are stacked with the cold inlet sides facing each other and, similarly, the hot discharge sides facing each other (Figure 1-4).
• Blanking unused spaces in and between equipment racks.
• Careful placement of computer room air conditioners and floor tile openings, often through the use of computational fluid dynamics (CFD) modeling.
• Collection of heated air through high overhead plenums or ductwork to efficiently return it to the air handler(s).
• Minimizing obstructions to proper airflow.

Right-Sizing and Energy Star

The U.S. Environmental Protection Agency (EPA) recently extended its Energy Star program to include rack servers. Energy Star is a way to provide standardized information about products so that consumers can make better decisions. People might want to choose high-efficiency products, but they don't know how to compare the efficiencies of products in a standardized way. There is a lot of confusion about trying to buy high-efficiency products in the data center. [The Energy Star program] is a huge challenge because it's so difficult to compare two servers as to their function; all the benchmarks for server usage are so hotly disputed between the vendors. [The EPA] has been working on this for about 5 or 6 years. The genesis for doing this came out of a charrette (a group of experts getting together to work on a problem) done by the Rocky Mountain Institute. They brought people together from the data center industry to think about this problem of energy efficiency well in advance of the current wave of interest. They wrote a report in which one of the recommendations was to push the government to consider extending Energy Star to servers and other data center related products. I think it's a great idea. I think it can go further. Half of the power that comes in from the utility companies never even makes it to the server. Rather, it's wasted in power systems and cooling systems, the kinds of products that APC makes. We have a huge responsibility to fix that. How would that be measured?
Energy Star has other programs besides these product-level certifications, where they recognize efficient organizations. They try to recognize solutions that are exceptional in their field for performance. It's not benchmarked against others. I think that's a good way to start when we
have a difficult problem: just select solutions that we know have done a better job and, as we get better, understand why. I think they should start doing that for complete data center infrastructure designs. If each part is efficient, it doesn't necessarily make the data center efficient; I can still make a bad data center out of efficient parts. Though part-level qualifications are nice, if I take very efficient servers and throw 18 million of them in a data center when I only needed six, I'm going to waste a lot of power. It's not just the parts; we have to look at the complete system. Perhaps something like what LEED (Leadership in Energy and Environmental Design) does? I totally agree with that, because one of the things that people are going to find is: what will they do with all the junk from the data center after it is retired in 12 years? Can it be recycled? LEED thinks about those kinds of things: recyclability, disposability, etc. A lot of people aren't thinking about that today, but the day will come when they will be thinking about it for sure. While today we can throw a lot of things away in a dumpster and get away with it, that is going to be less likely in the future. All of a sudden it's going to be a big deal for people to think about the environmental friendliness of the data center overall, not just energy efficiency. If we look at the opportunity to save energy, most of the opportunity is in bad system design, not in the parts. The key to energy efficiency in the data center is right-sizing. The number one driver of waste in the data center—whether it's servers, power systems, or cooling systems—is over-sizing whatever it is we're looking at. And the amount of over-sizing that occurs within data centers at all levels is rampant. People oversize because it's so difficult to scale. Thus, people build a giant thing first and hope that it's going to fill up. But while it's filling up, it's inefficient.
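Why a half-empty facility is inefficient can be sketched with a simple loss model. The model and its numbers are assumed for illustration only: fixed losses (transformers, fans, standby UPS losses) scale with installed capacity and are paid regardless of load, so efficiency collapses at low utilization.

```python
# Illustrative fixed-plus-proportional loss model for power infrastructure.
# The 6% fixed and 5% proportional loss fractions are assumed examples.

def infrastructure_efficiency(load_kw, capacity_kw,
                              fixed_loss_frac=0.06, proportional_loss_frac=0.05):
    """Delivered power / drawn power for a simple loss model."""
    fixed_loss = fixed_loss_frac * capacity_kw          # paid even when idle
    proportional_loss = proportional_loss_frac * load_kw
    return load_kw / (load_kw + fixed_loss + proportional_loss)

for utilization in (0.1, 0.5, 0.9):
    eff = infrastructure_efficiency(utilization * 1000, 1000)
    print(f"{utilization:.0%} utilized -> {eff:.1%} efficient")
```

In this sketch the same 1 MW plant runs at roughly 61% efficiency when 10% utilized but close to 90% when nearly full, which is the right-sizing argument in miniature.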
Typically, a data center is expected to survive for about 12 years, and that number is increasing. If expected to last 12 years, it must be built for a 12-year forward-looking goal. That's the logic being used, but how does one know what they're going to need in 12 years? So what do they do? They know that they've been growing, so they over-design these things, make them huge, and start out with a small load of IT equipment that gradually grows over time. Very inefficient. Storage managers can deploy storage in more energy-efficient ways. If high performance is not needed, deploy 7,200 rpm or 10,000 rpm disks rather than 15,000 rpm models, as the slower speeds use less energy. Similarly, smaller form-factor (2.5-inch) disk drives require only 5 volts
vs. 12 volts for standard 3.5-inch form-factor drives. Small form factors, however, usually have smaller capacity (see "Energy tradeoffs").

Figure 1-5: Energy tradeoffs

National Snow and Ice Data Center (NSIDC) Project

The Green Data Center project, funded by the National Science Foundation (NSF) under its Academic Research Infrastructure Program, with additional support from NASA, shows NSIDC's commitment to reducing its impact on the environment, and demonstrates that there is significant opportunity to improve the efficiency of data centers in the U.S. The thesis "Coolerado and Modeling an Application of the Maisotsenko Cycle", written by Benjamin A. Weerts, was approved by the Department of Civil, Environmental and Architectural Engineering at the University of Colorado Boulder; it presents the conclusions of one of the Master's students (Ben Weerts) who worked on the project. The innovative data center redesign slashed energy consumption for data center cooling by more than 90 percent, demonstrating how other data centers and the technology industry can save energy and reduce carbon emissions. The heart of the design includes new cooling technology that uses 90 percent less energy than traditional air conditioning, and an extensive rooftop solar array that results in a total energy savings of 70 percent. The new evaporative cooling units not only save energy, but also offer lower maintenance costs.
This project was a successful effort by a consortium of university staff, a utility provider, and Federal research organizations to significantly reduce the energy use of an existing data center through a full retrofit of a traditional air-conditioning system. Cooling energy required to meet the constant load has been reduced by over 70% for summer months and over 90% for winter months. The National Snow and Ice Data Center recently replaced its traditional cooling system with a new air conditioning system that utilizes an economizer and Coolerado air conditioning units based on the Maisotsenko cooling cycle. A data logging system was installed that measured the data center's power consumption before and after the cooling system was replaced. These data were organized and used to demonstrate a 90% cooling energy reduction for the NSIDC. The data logging system also collected temperatures and humidity of the inlet and outlet air of a Coolerado air conditioner. After using these data to validate a theoretical model developed by researchers at the National Renewable Energy Laboratory, the model was used to simulate slightly modified heat and mass exchanger designs of the Coolerado system to improve performance. Sensitivity analysis found a few design parameters that are important to the thermodynamic performance of the Coolerado system, while others proved insignificant. Channel heights, sheet size, and ambient pressure have the most significant impact on performance. Overall, it was found that the current design performs reasonably well and with minor modifications could perform optimally, as suggested by the theoretical model. The key performance indicator used was the coefficient of performance (COP), which is defined in Equation 18. The COP is a robust performance metric that by definition incorporates cooling power, product air temperature, product air flow rate, and total air flow rate.
If any of these variables change, the COP will reflect that change. Note that only the product air mass flow rate can be counted in the cooling power. Working air is very humid, almost saturated, and would be uncomfortable in an occupied space; therefore, it is exhausted and not used for any useful cooling. Although it's not a design variable per se, the ambient pressure, or elevation, has the greatest impact on the performance, with over 11% increase at 2200 feet (from Boulder, Colorado’s 5800 feet), and over 21% increase at sea level. It should be noted that the low channel heights’ sensitivity simulation ranks between these two elevation impacts, signifying that channel height is also a major performance driver with over 14% increase in COP compared to the base case as both product and working air channel heights are decreased by 0.5mm (0.02"). This value was selected as a maximum change because Coolerado claims that
changing the channel heights more than this amount requires additional time to dial in the proper flow and speed of the machine used on the manufacturing line. It is also interesting to note that the Low Wet Wall simulation (reduced working air channel height) resulted in more than double the change that the Low Dry Wall simulation showed. This implies that it is more important for the working air channel height to be small and optimally sized for maximum evaporation rate than it is for the product channel height.

Project results in terms of energy savings, financial savings, and other environmental or resource use benefits

In the past, cooling the computer room required more than 400,000 kWh of energy per year, enough to power 50 homes. With the new system in place, cooling power levels in the summer of 2011 were reduced by 95%. In the winter, the eight patented indirect evaporative cooling technology (IECT) units operate as highly efficient computer room humidifiers. Using the air economizers enables the data center to meet or exceed all ASHRAE environmental specifications. Electricity savings are estimated to be $54,000/year, not including the solar array. If the owner had replaced the existing 30-ton computer room air conditioning (CRAC) unit with a similar unit, the estimated replacement cost would have been about $160,000, and the maintenance costs for the CRAC unit were about $15,000/year. The new cooling system cost $250,000, but maintenance costs for the new system will be about $2,000/year. Payback on the cooling system is well under three years, with costs over that time equaling $367,000 for the CRAC compared to $256,000 for the new system. In 2011, the project received an award for high-impact research from a local consortium of scientific organizations.
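The payback comparison quoted above can be reproduced directly from the figures in the text: a like-for-like CRAC replacement at $160,000 plus $15,000/year maintenance plus the $54,000/year electricity premium, versus the new system at $250,000 plus $2,000/year. A minimal sketch of that three-year comparison:

```python
# Three-year cost comparison using the NSIDC retrofit figures quoted above.

def three_year_cost(capital, annual_maintenance,
                    annual_electricity_premium=0, years=3):
    """Capital outlay plus recurring costs over the comparison window."""
    return capital + years * (annual_maintenance + annual_electricity_premium)

crac = three_year_cost(160_000, 15_000, annual_electricity_premium=54_000)
new = three_year_cost(250_000, 2_000)
print(f"CRAC over 3 years: ${crac:,}")   # matches the $367,000 in the text
print(f"New system:        ${new:,}")    # matches the $256,000 in the text
```

The arithmetic confirms the reported totals: $367,000 for the CRAC path versus $256,000 for the new system.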
The project team was recognized for its innovative data center redesign that slashed data center energy consumption for cooling by more than 90%, demonstrating how similar facilities and the technology industry at large can save energy and reduce carbon emissions. A new web-based monitoring system allows the public to monitor power use in real time. The monitoring system includes temperature, humidity, airflow, and electrical power measurements that enable characterization of the heat gain from the equipment, the heat removed by the cooling system, the energy consumption of the cooling equipment, and the uniformity of conditions within the data center. The solar phase of the project is under construction and should be complete by April 2012; when complete, the 50kW solar array will make the data center effectively a zero-carbon facility.
Total reduction in carbon emissions and kW used from the original configuration

In 2009, the total power used by the data center operations was 120kW. Today, the retrofitted data center uses about 38kW. Once the 50kW solar array is installed, the daytime carbon footprint and energy usage should drop to zero. The old system had an annual average Power Utilization Effectiveness (PUE) of 2.03 (PUE is the total data center power divided by the IT hardware power used). From August to December 2011, the new system had an average PUE of 1.28. (The PUE would have been lower but for an inefficient UPS system.) Since October 2011, the monthly PUE has been below 1.20 due to the airside economization of cool fall and winter temperatures. The PUE will continue to decrease through the spring, until outdoor temperatures rise. The new cooling system used less than 2.5 kilowatts of power on average for the month of October 2011. The average cooling power usage is almost 95% less than that used in October 2010 (during which the IT load was slightly higher and outdoor conditions were very similar). The average cooling power during the upcoming winter months of November 2011-February 2012 should be similar because the winters in the region are generally very cool.
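The PUE definition above is a one-line calculation; a short sketch using the reported NSIDC numbers makes it concrete. Note that the 30 kW IT load in the retrofitted case is an assumed round number for illustration, not a reported value.

```python
# PUE as defined in the text: total facility power divided by IT hardware
# power. The 120 kW total and PUE of 2.03 are reported figures; the 30 kW
# IT load for the retrofit example is assumed.

def pue(total_facility_kw, it_kw):
    return total_facility_kw / it_kw

old_it_kw = 120 / 2.03              # IT load implied by 120 kW at PUE 2.03
print(f"Old IT load implied: ~{old_it_kw:.0f} kW")
print(f"Retrofit PUE at 38 kW total, 30 kW IT (assumed): {pue(38, 30):.2f}")
```

Working backward like this also shows why a falling total (120 kW down to 38 kW) at a roughly constant IT load drives the PUE from 2.03 toward the reported 1.28.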
Case Study: EMC Durham Cloud Data Center

Overview

In today's world, the trend is to use new technologies and products that are green and reduce power consumption. EMC, one of the IT companies working on this initiative to help storage and IT products go green, selected Durham, North Carolina to build its new cloud data center, a 100% virtual data center. Virtualization is a technique that significantly reduces power consumption and is widely used in all green data centers.

Top View of EMC Durham Data Center
Why EMC decided to build the Durham Data Center

EMC is the latest in a series of leading technology companies that have chosen North Carolina as a major data center location, joining companies such as Google, Apple, and Facebook. This new data center replaces the existing data center in Westborough, Massachusetts. EMC migrated more than 6PB from the Westborough location to the new virtualized data center in Durham. Though a formidable challenge, all application migrations were completed in less than two years. Another huge challenge was to finish everything, from data center kick-off through site selection, construction, and application migration, in less than four years. A major reason for building the Durham Cloud Data Center was the high cost of power incurred in the Westborough Data Center.

Expense comparison between Westborough and Durham:

                        Westborough               Durham
Ownership               Rented                    EMC is the owner
Power                   $0.12/kWh                 $0.05/kWh
Space                   15,000 sq ft              20,000 sq ft
Physical or virtual     Legacy physical,          Private cloud,
                        32% virtual               100% cloud

Providing a 46% power reduction while accommodating a 97% increase in IT resource demand, the Durham Cloud Data Center represents a perfect example of high energy efficiency in delivering IT as a Service. This data center is designed to meet all requirements for improving power consumption by using multiple sustainable design innovations, such as:
• Achieving a high rate of energy reduction over the existing facilities
• Producing exceedingly energy-efficient savings in all operations
• Use of outside air (free cooling)
o A cold-aisle containment system to manage airflow within the data center, and flywheel UPS systems to save space and reduce reliance on batteries
• Reducing carbon emissions for both the operation and development of the facility
• Reducing or eliminating operational disruption

Design and Construction

Moving from a physical to a highly virtualized IT infrastructure is the key to cloud computing. The Durham data center supports the requirements for cloud with an architecture that leverages the latest information infrastructure technologies. Built with leading technology solutions from EMC, VMware, and VCE, the 100 percent virtualized Durham data center is the foundation for EMC IT's cloud vision and transition to Infrastructure as a Service (IaaS). It delivers all requirements—flexibility, agility, and scalability—to meet today's needs and those of the future. EMC concentrated on enhancing and developing a data center with sustainability as a primary design criterion. Energy conservation was a primary focus, as were efficiency ideas such as a rooftop water collection system, use of outside air (free air cooling) for approximately 57 percent of the year, a cold-aisle containment system to manage airflow within the data center, and flywheel uninterruptible power supply (UPS) technology to eliminate the need for storage batteries in the UPS systems. EMC kept the exterior of the 45,000 square foot building and is using a three-phased modular, "box within a box" approach to complete the facility 15,000 square feet at a time. Cooling and power are installed independently with each module; cooling and air-handling units are also built offsite to very tight tolerances and then placed on the premises. Cold-aisle containment is a new approach used in Durham to increase cooling control and efficiency.
Also, although the Durham data center houses many more servers and much more storage than before, EMC IT managed to save 280 watts per server through virtualization and has reduced power consumption per terabyte of storage by 82 percent.
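Those two figures compound quickly at data center scale. A minimal sketch of the arithmetic; the 280 W per server and the 82 percent per-terabyte reduction come from the text, while the server count and the legacy watts-per-terabyte baseline are hypothetical assumptions for illustration:

```python
# Sketch of the virtualization savings quoted above.
# From the text: 280 W saved per server, 82% cut in power per TB of storage.
# Hypothetical assumptions: the server count and the legacy W/TB baseline.

SAVED_PER_SERVER_W = 280    # from the Durham figures
STORAGE_REDUCTION = 0.82    # power-per-terabyte reduction, from the text

servers = 2000              # assumed number of virtualized servers
baseline_w_per_tb = 10.0    # assumed legacy watts per terabyte

server_savings_kw = servers * SAVED_PER_SERVER_W / 1000
new_w_per_tb = baseline_w_per_tb * (1 - STORAGE_REDUCTION)

print(f"Server power saved:  {server_savings_kw:.0f} kW across the fleet")
print(f"Storage power:       {baseline_w_per_tb:.1f} -> {new_w_per_tb:.1f} W/TB")
```

Under these assumed figures, the server fleet alone sheds hundreds of kilowatts of continuous draw, which is why per-server and per-terabyte metrics are the ones worth tracking.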
EMC chose VCE Vblock®, which integrates VMware software, EMC products, and Cisco UCS blade server and network technologies for unified VM provisioning and simplified workload management. Durham is the first EMC data center to host all applications and platforms on VMware vSphere as its virtual data center layer. EMC IT can modify, move, or relocate workloads easily and transparently anywhere within the data center for maximum performance, utilization, and efficiency.

Features

• The latest EMC and VMware tools are used to manage the virtual infrastructure of the data center.
• An energy-efficient HVAC cooling system combines water cooling and air cooling.
• Automatic lighting control systems are implemented to enhance energy efficiency.
• Two separate underground pathways deliver voice and data services; each terminates at one of two main communication rooms for further redundancy.

Durham Data Center Architecture

It is a fully virtualized data center on VMware vSphere, built using the VCE Vblock converged infrastructure. This virtual infrastructure provides the cloud foundation that enables workloads to be located and moved easily and transparently within the data center for maximum efficiency, performance, and utilization.
The data center includes:

• Network: four redundant 10 Gb/s WAN links, Cisco Nexus 7000 core network
• Compute: Cisco UCS x86 servers, 100% virtualized
• Storage: EMC Symmetrix® VMAX®, EMC VNX® (Vblock)
• Backup: NetWorker®, Data Domain®, and Avamar®

As the components above show, EMC uses multiple backup solutions: backups are created locally in the data center, then replicated to a remote data center for strong data protection.

Migration Process

The Durham migration needed to be a showcase for EMC's private cloud vision, and migrating all of the data without a physical move was a major challenge. Traditionally, data centers are moved the same way you move to a new house: you carefully pack and load everything into a truck, drive to your new location, and unpack and set everything up again. That kind of move is a major disruption, and since business now runs 24/7, moving a production data center should be non-disruptive. To minimize risk and disruption to the business, EMC decided to migrate and transform to a new private cloud built on VCE Vblock, with virtual machines on VMware vSphere. No hardware was relocated as part of the data center migration; all data and applications were moved across four redundant 10 Gb/s WAN links, and many physical application server architectures were rebuilt as new virtualized cloud architectures. Migrating applications and data in a virtual environment minimizes downtime. The migration process is shown in the following diagram.
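The scale of an over-the-wire migration is easy to underestimate. A quick raw-transfer-time estimate for the payload involved; the 6 PB figure and the four 10 Gb/s links come from the text, while the 60 percent sustained-utilization figure is a hypothetical assumption (real migrations also contend with replication overhead, application cutover windows, and shared production traffic):

```python
# Raw transfer-time estimate for moving 6 PB over WAN links.
# From the text: 6 PB migrated, four redundant 10 Gb/s WAN links.
# Hypothetical assumption: 60% sustained link utilization.

SECONDS_PER_DAY = 86_400


def transfer_days(data_pb, links, link_gbps, utilization):
    """Days of continuous transfer for data_pb petabytes (decimal PB)."""
    bits = data_pb * 1e15 * 8                     # petabytes -> bits
    bits_per_second = links * link_gbps * utilization * 1e9
    return bits / bits_per_second / SECONDS_PER_DAY


days = transfer_days(6, links=4, link_gbps=10, utilization=0.6)
print(f"~{days:.0f} days of continuous transfer")  # roughly three weeks
```

Even with all four links saturated at the assumed utilization, the raw copy alone takes weeks, which underlines why the migration had to be planned as a long-running, non-disruptive program rather than a weekend cutover.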
Awards

EMC's Durham facility is the latest Global Center of Excellence (COE); others are located in India, China, Egypt, Israel, Ireland, and Russia. COEs perform services for EMC business units including Technical Support, Customer Service, Research and Development, Engineering, and IT.

In March 2013, the EMC COE in Durham was awarded LEED (Leadership in Energy and Environmental Design) Gold certification. Highlights included recognition of a 34% overall energy saving and a reduced carbon footprint of nearly 100,000,000 pounds of CO2. The certification also noted:

• Use of free cooling 57 percent of the year
• A 78 percent reduction in potable water use
Case Study: Iceland

Iceland wants to be your haven

Imagine having your backup data center powered by hydroelectric and geothermal energy: water and hot springs. Sounds like something Jules Verne would have written about if he had heard of data centers, right? It would be fitting for Verne to talk about the Invest in Iceland Agency, because Iceland is where his novel Journey to the Center of the Earth began. The agency, which is part of the Icelandic government, will soon start trying to lure companies from Europe and the U.S. to build their backup data centers in Iceland. The attractions are stable temperatures, a stable economy, and, yes, cheap power, thanks to the country's hot springs and waterfalls.

"The price can vary highly depending on what kind of quantity and stability in electricity you need," said Thortur Hilmarsson, general manager of the agency. "As an example, a heavy industry with 10-to-15 megawatts can have prices of 3.5 cents per kilowatt hour. If you're a smaller consumer, then you're looking at 6, 7, 8 cents per kilowatt hour."

About 72% of Iceland's energy consumption comes from hydroelectric and geothermal sources; the only fossil fuels used are for cars and fishing vessels. Temperatures are cold but stable, between 32 and 55 degrees Fahrenheit throughout the year. In addition, the corporate tax rate in Iceland is about 18%, less than half of the 39% rate in the U.S., according to figures from the Congressional Budget Office.

Hilmarsson said the agency is working on a comprehensive study of exactly this topic, examining latency from Iceland to Europe and from Iceland to the U.S. Even so, the agency would still have to convince data center operators that locating there is worth the extra effort.
Appendix

Robert McFarlane is a principal in charge of data center design for the international consulting firm Shen Milsom and Wilke LLC. McFarlane has spent more than 35 years in communications consulting, has experience in every segment of the data center industry, and was a pioneer in developing the field of building cable design. McFarlane also teaches the data center facilities course in the Marist College Institute for Data Center Professional program, is a data center power and cooling expert, is widely published, speaks at many industry seminars, and is a corresponding member of ASHRAE TC9.9, which publishes a wide range of industry guidelines.

Clive Longbottom is the co-founder and service director at Quocirca and has been an ITC industry analyst for more than 15 years. Trained as a chemical engineer, he worked on anti-cancer drugs, car catalysts, and fuel cells before moving into IT. He has worked on many office automation projects, as well as Control of Substances Hazardous to Health, document management, and knowledge management projects.

Snow and Ice Data Center: http://nsidc.org/about/green-data-center/

Weerts, B., D. Gallaher, R. Weaver, and O. Van Geet, 2012: "Green Data Center: Energy Reduction Strategies: Airside Economization and Unique Indirect Evaporative Cooling." IEEE Green Technologies Conference Proceedings, Tulsa.

Uptime Institute: http://uptimeinstitute.com/ Uptime Institute, an independent division of The 451 Group, provides education, publications, consulting, certifications, conferences and seminars, independent research, and thought leadership for the enterprise data center industry and for data center professionals.

Data center selection, by John Rath: www.datacenterlinks.com

U.S. Environmental Protection Agency: http://www.epa.gov/greenpower/index.htm

Site Selection Online: http://www.siteselection.com/issues/2002/mar/p118/

Glumac: http://www.glumac.com
"Leveraging Renewable Energy in Data Centers: Present and Future" (keynote summary), Ricardo Bianchini, Department of Computer Science, Rutgers University, ricardob@cs.rutgers.edu

Air management and free cooling: "Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers," Steve Greenberg, Evan Mills, and Bill Tschudi, Lawrence Berkeley National Laboratory; Peter Rumsey, Rumsey Engineers; Bruce Myatt, EYP Mission Critical Facilities

Rocky Mountain Institute: http://www.rmi.org/sitepages/pid17.php

Computer Weekly: http://www.computerweekly.com/news/2240080415/Solving-storage-power-and-cooling-concerns

Iceland example: http://searchdatacenter.techtarget.com/news/1246999/Iceland-wants-your-backup-data-centers

EMC Durham Cloud Data Center: http://smma.com/project/mission-critical/emc-durham-cloud-data-center

http://www.datacenterknowledge.com/archives/2011/09/15/emc-opens-new-cloud-data-center-in-nc/

EMC Durham Data Center, energy-efficient design and construction: itblog@emc.com

EMC Durham Cloud DC, powering EMC IT's cloud vision: https://www.emc.com/about/news/press/2013/20130314-01.htm
Biographies

Mohamed Sohail
Implementation Delivery Specialist

Mohamed has over 9 years of IT experience in operations, implementation, and support, 4 of them with EMC. Mohamed previously worked as a Technical Support Engineer at Oracle Egypt and was a Technical Trainer at Microsoft. He holds a B.Sc. in Computer Science from Sapienza University of Rome, Italy, and a B.A. in Italian from Ain Shams University, Egypt. Mohamed holds EMC Proven Professional Backup Recovery certification.

Denis Canty
Principal Test Architect

Denis is a Principal Test Architect in the Data Domain BRS division of Global External Manufacturing (GEM). Having been in the IT industry for 9 years, Denis previously worked for Alps Electric, a Japanese automotive electronics company. Denis earned a degree in Electronic Engineering from Cork Institute of Technology (CIT), a Master's in Computer Science from Dublin City University (DCU), and a Master's in Microelectronic Design from University College Cork (UCC). Denis holds EMC Proven Professional Information Storage and Management certification.

Omar Aboulfotoh
Technical Support Engineer

Omar is a Technical Support Engineer in the Unified Storage Division, Global Technical Support (GTS), at the EMC Egypt COE. He has been in the IT industry for 3 years. He holds a degree in Electronics and Communications Engineering and is currently studying in a Master's program at the Faculty of Engineering, Cairo University. Omar holds EMC Proven Professional Specialist certifications in VNX and SAN and is also VCP and vCloud certified.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.