
StorPool & OpenNebula


Presentation by Boyan Krosnov showing the latest integrations and use cases regarding StorPool and OpenNebula.



  1. StorPool & OpenNebula @ OpenNebula TechDay Sofia 2018
  2. Boyan Krosnov ● Chief of Product ● Cloud architect ● "External technology" ● Previously: service providers, networks, packet processing (SDN) ● Algorithmic and low-level optimizations :)
  3. StorPool ● Fast and efficient software-defined storage system ● Used by MSPs, cloud service providers, hosting companies, and in private clouds ● Started in 2011 to solve storage for cloud service providers ● Clean-slate design: scale-out, API-controlled, end-to-end data integrity, CoW
  4. StorPool ● Most deployments are with KVM ○ some Xen, some VMware, some Hyper-V ○ some bare-metal / dedicated servers ● Deep integrations with OpenNebula, OnApp, OpenStack, CloudStack
  5. StorPool + OpenNebula https://github.com/opennebula/addon-storpool/ Each image is a separate volume (LUN) in the storage system. Quick demo
  6. New in the last 12 months ● Import of *.VMDK (VMware) and *.VHDX (Hyper-V) images from a local path ● reserved.sh - a tool to help tune the overcommitment of the host's CPU and RAM to account for the use of cgroups ● An alternate implementation of the 'VM snapshot' interface with atomic snapshots of the VM's disks (a feature tracking StorPool's development) ● Snapshot limiting - the addon refuses to create new snapshots once a predefined number is reached (separate limits for disk snapshots and 'VM snapshots') ● Support for co-existence of different OpenNebula instances on the same StorPool cluster (use case: production and staging environments)
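The snapshot-limiting behaviour described above amounts to a simple guard before snapshot creation. A minimal sketch (the function and limit names here are illustrative assumptions, not the addon's actual identifiers):

```python
# Illustrative sketch of the snapshot-limiting guard described on the slide.
# Names (check_snapshot_limit, DISK_SNAPSHOT_LIMIT, ...) are assumptions,
# not the addon's actual code.

def check_snapshot_limit(existing_snapshots, limit):
    """Refuse to create a new snapshot once a predefined number is reached."""
    if limit is not None and len(existing_snapshots) >= limit:
        raise RuntimeError(
            f"snapshot limit reached ({len(existing_snapshots)}/{limit}), "
            "refusing to create a new snapshot"
        )

# Separate limits for disk snapshots and 'VM snapshots', as on the slide.
DISK_SNAPSHOT_LIMIT = 10
VM_SNAPSHOT_LIMIT = 5

disk_snaps = ["snap-%d" % i for i in range(10)]
try:
    check_snapshot_limit(disk_snaps, DISK_SNAPSHOT_LIMIT)
    allowed = True
except RuntimeError:
    allowed = False
print(allowed)  # False: the 11th disk snapshot is refused
```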
  7. New in the last 12 months ● Registration of a single disk from a given VM snapshot as a new image in the IMAGE datastore, by the VM's owner or the oneadmin user ● The oneadmin user can use the interface to import *any* StorPool snapshot as a new image in the IMAGE datastore ● vmTweakHypervEnlightenments.py - a tool that applies additional tweaks to the VM's domain XML file before VM deployment: Hyper-V enlightenments, ioeventfd, virtio-scsi nqueues ● Various other tweaks (such as disabling kernel caching)
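The kind of domain-XML tweak that vmTweakHypervEnlightenments.py performs can be sketched with the standard library. This is a hedged illustration, not the addon's actual code: the libvirt `<features>`/`<hyperv>` element names are real, but the exact set of enlightenments enabled here is an assumption.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of adding Hyper-V enlightenments to a libvirt domain XML,
# in the spirit of vmTweakHypervEnlightenments.py. The specific enlightenments
# chosen below are illustrative, not the addon's exact set.
DOMAIN_XML = """<domain type='kvm'>
  <name>one-42</name>
  <features><acpi/></features>
</domain>"""

def add_hyperv_enlightenments(domain_xml: str) -> str:
    root = ET.fromstring(domain_xml)
    features = root.find("features")
    if features is None:
        features = ET.SubElement(root, "features")
    hyperv = ET.SubElement(features, "hyperv")
    # Commonly enabled enlightenments for Windows guests (libvirt syntax):
    ET.SubElement(hyperv, "relaxed", state="on")
    ET.SubElement(hyperv, "vapic", state="on")
    ET.SubElement(hyperv, "spinlocks", state="on", retries="8191")
    return ET.tostring(root, encoding="unicode")

tweaked = add_hyperv_enlightenments(DOMAIN_XML)
print(tweaked)
```

In the real addon the rewrite happens between template generation and `virsh`/libvirt deployment, so the guest boots with the extra features already in place.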
  8. Public and private clouds: our experience
  9. Use-cases ● Private cloud for SaaS (production) ● Private cloud for software development (test/dev) ● Public cloud ● MSP
  10. Use-cases ● Different requirements = different deployments ● Hyper-converged or stand-alone ● HDDs, SSDs, or NVMe ● 10G or 25G networking ● Latest-generation Xeons or E5 v2 from 2013
  11. Hyper-converged: example hardware ● 900 dedicated vCPUs, 3500 GB RAM in 3 RU ● Dual 10G networking ● SATA SSD storage pool ● $1000 per VM with 32 GB RAM, 8 vCPUs
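A quick sanity check of the example pod's capacity (the VM count below is derived from the slide's numbers, not stated on it):

```python
# Example pod from the slide: 900 dedicated vCPUs, 3500 GB RAM in 3 RU.
# Per-VM size from the slide: 8 vCPUs, 32 GB RAM at $1000/VM.
pod_vcpus, pod_ram_gb = 900, 3500
vm_vcpus, vm_ram_gb = 8, 32

vms_by_cpu = pod_vcpus // vm_vcpus    # 112 VMs if CPU-bound
vms_by_ram = pod_ram_gb // vm_ram_gb  # 109 VMs if RAM-bound
vms = min(vms_by_cpu, vms_by_ram)
print(vms)  # 109: RAM is the limiting resource for this VM size
```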
  12. Hyper-converged: component selection
  13. Hyper-converged: component selection
  14. Hyper-converged: efficiency ● Modern server: 28-40 cores; 384 GB+ RAM ● StorPool takes 3 cores & 8 GB RAM ● 5-15%, i.e. < 10% of compute node resources
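The overhead figures on the slide can be checked arithmetically (the percentage shares below are derived from the slide's numbers):

```python
# Modern server from the slide: 28-40 cores, 384+ GB RAM.
# StorPool footprint from the slide: 3 cores and 8 GB RAM.
cpu_share_high = 3 / 28   # ~10.7% of a 28-core server
cpu_share_low = 3 / 40    # 7.5% of a 40-core server
ram_share = 8 / 384       # ~2.1% of 384 GB

print(f"CPU: {cpu_share_low:.1%}-{cpu_share_high:.1%}, RAM: {ram_share:.1%}")
```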
  15. When hyper-converged? Hyper-converged: ● Green-field ● KVM virtualization ● Building pods / small availability zones. Stand-alone storage system: ● Retrofit ● Other hypervisors: Xen, VMware, Hyper-V ● Requirement for independence of infrastructure (an issue in my KVM environment must not affect my VMware environment) ● Requirement for an administrative boundary - e.g. additional storage for dedicated servers via iSCSI ● Building datacenter-scale infrastructure, where the cost of a high-bandwidth leaf-spine network (3x) is justified
  16. Demo #2 ● 50 µs writes! ● $/IOPS: < $15,000 of hardware = 1M IOPS ● $/GB: > 30 TB capacity on 3x NVMe, approx. $1.90/GB; > 20 TB StorPool SSD-hybrid, approx. $1.00/GB
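The price-performance claims translate into rough unit costs (derived from the slide's figures; the capacity totals are illustrative, using the stated minimums):

```python
# From the slide: < $15,000 of hardware delivering 1M IOPS.
hardware_cost, iops = 15_000, 1_000_000
cost_per_iops = hardware_cost / iops
print(cost_per_iops)  # 0.015: at most 1.5 cents per IOPS

# From the slide: ~$1.90/GB for 3x NVMe vs ~$1.00/GB for SSD-hybrid.
nvme_cost = 30_000 * 1.90    # 30 TB (in GB) on NVMe
hybrid_cost = 20_000 * 1.00  # 20 TB (in GB) on SSD-hybrid
print(nvme_cost, hybrid_cost)  # 57000.0 20000.0
```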
  17. © 2018 StorPool. All rights reserved. Boyan Krosnov, StorPool Storage ● bk@storpool.com ● www.storpool.com ● @storpool ● Thank you
