A simple setup to build a private or public cloud.
A cloud at the IaaS layer is simply a cluster of hypervisors with some added storage infrastructure and software to orchestrate everything. In this presentation we show some straightforward Dell hardware that could be purchased to build a single rack as the basis for a private or public cloud. It totals $100k and, coupled with open source software (CloudStack, Ceph, GlusterFS, NFS, etc.), forms the foundation of your cloud.
You will get an AWS-compatible cloud in no time and with limited acquisition cost.
MyCloud for $100k
1. My $100k Cloud
Sebastien Goasguen - Citrix
Michael Fenn - D. E Shaw Research
Oct 1st 2012
2. Goal
• Build a rack that can act as a private/public cloud
• IaaS implementation from hardware to software
• Entry-level system for SMEs / academic research / POCs
• Main capability: Provision/Manage virtual
machines on-demand, AWS compliant
3. Assumptions
• There is a machine room to put this rack
• We choose DELL as a vendor for no other
reason than familiarity and our hope that we
can get a 33% discount on the list price
• We are going to use CloudStack as the cloud
platform solution.
• And use other open source software for
configuration, storage and monitoring.
4. Head node
Head + storage node: Dell R720xd
(2U)
2 Intel Xeon E5-2650 2.30 GHz
8x 8 GB RDIMM (64 GB RAM)
12x 2TB NL-SAS Hot Plug (24 TB)
Quad-port Broadcom 5720 1Gb
Dual Hot-Plug Redundant Power
Supply
Per node cost w/ discount: $9,500
7. Rack and PDUs
• Standard air cooled rack. The
DELL 4220 rack would be a good
choice.
• The whole solution should draw
around 6kW, so an 8kW UPS
would be a good fit. APC has
one called the Smart-UPS RT
10000.
8. Total Budget
• Networking (1 unit) = $5,000
• Head node (1 unit) = $9,500
• Compute nodes (21 units) = $73,500
• Rack + power infrastructure = $10,000
• Total: $98,000
• Total: 264 cores, 736 GB of RAM, and 109 TB
of storage in 25U of rack space
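As a quick sanity check, the budget lines above sum as advertised. Note that the $3,500 per compute node used below is inferred from the slide's "21 units = $73,500" line, not a quoted Dell price:

```python
# Sanity check of the budget slide's arithmetic.
# The $3,500 per-compute-node figure is inferred from the slide's
# "21 units = $73,500" total, not quoted directly from Dell.
networking = 5_000           # switch(es), 1 unit
head_node = 9_500            # Dell R720xd head + storage node
compute_nodes = 21 * 3_500   # 21 compute nodes at $3,500 each
rack_power = 10_000          # rack, PDUs, and UPS
total = networking + head_node + compute_nodes + rack_power
print(total)  # 98000, matching the slide's $98,000 total
```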
9. Software setup
• OS: RHEL-like; since we used to work in High
Energy Physics we choose Scientific Linux 6.3.
Not officially supported by CloudStack, but it does work.
• Hypervisor: KVM or Xen depending on local
expertise
• Cloud Platform: Apache CloudStack
10. Software setup
• Storage: NFS for image store for ease of setup.
GlusterFS for primary storage or local mount point
depending on expertise.
• Configuration management: Puppet or Chef
• Monitoring: Zenoss core with CloudStack ZenPack
11. CloudStack History
• Original company VMOPs (2008)
• Open source (GPLv3) as CloudStack
• Acquired by Citrix (July 2011)
• Relicensed under ASL v2 April 3, 2012
• Accepted as Apache Incubating Project April
16, 2012 (http://www.cloudstack.org)
• First Apache release (ACS 4.0) coming really soon!
12. Multiple Contributors
Sungard: Seven
developers have joined
the incubating project
Schuberg Philis: Big
contribution in
building/packaging and
Nicira support
Go Daddy: Maven
building
Caringo: Support for own
object store
Basho: Support for
RiakCS
13. Terminology
Zone: Availability zone,
aka Regions. Could be
worldwide. Different data
centers
Pods: Racks or aisles in a
data center
Clusters: Group of
machines with a common
type of Hypervisor
Host: A Single server
Primary Storage: Shared
storage across a cluster
Secondary Storage:
Shared storage in a single
Zone
14. “Logical” CS deployment
• Farm of hypervisors. Primary storage available
“cluster” wide for running VMs
• Separate secondary storage to store VM images
and data volumes.
16. Economy
• We have 252 cores of hypervisors
• If we consider overprovisioning of 2 VMs per
core, full capacity is 504 VMs.
• At $0.10 per hour for small instances, we need
1M hours to get back our $100k.
• 1M/(504*24) ≈ 83
• 83 days to recover the capital investment
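The payback arithmetic above can be reproduced in a short Python sketch. Like the slide, it assumes every VM slot is sold and running around the clock, which is the best case:

```python
import math

# Payback estimate reproducing the Economy slide, assuming
# round-the-clock full utilization of every VM slot.
hypervisor_cores = 252
vms = hypervisor_cores * 2      # 2 VMs per core -> 504 small instances
price_per_vm_hour = 0.10        # dollars per small instance per hour
capital = 100_000               # dollars of capital investment
vm_hours_needed = capital / price_per_vm_hour   # 1,000,000 VM-hours
days = vm_hours_needed / (vms * 24)
print(math.ceil(days))  # 83 days to recover the investment
```

Any real deployment runs below full utilization, so the actual payback period would be proportionally longer.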
17. Optional Setup
• Dual GigE cards allow us to do NIC bonding,
or to create a separate management network
or storage network if need be.
• First deployment should use CloudStack
security groups (to avoid having to configure
VLANs on the switch). Second deployment
could try to use VLANs.
• Run an OpenFlow controller on the head node
and experiment with SDN, using Open vSwitch
on the nodes.
18. Possible Expansion with more $$
• Fill the rack with nodes to be used as
hypervisors (no change to the software setup,
just add hosts in CloudStack).
• Fill the rack with GPU nodes for HPC (add
hosts in CloudStack using the bare-metal
PXE/IPMI component).
• Fill the rack with storage nodes set up as a
Hadoop cluster on bare metal.
• Fill the rack with SSD-based storage nodes.
19. “Bare Metal” Hybrid deployment
• Hypervisor cluster plus a bare-metal cluster
with specialized hardware (e.g. GPUs) or
software (Hadoop).
20. Info
• Apache incubator project
• http://www.cloudstack.org
• #cloudstack on irc.freenode.net
• @cloudstack on Twitter
• http://www.slideshare.net/cloudstack
• http://cloudstack.org/discuss/mailing-lists.html
Contributions and feedback welcome. Join the fun!