2. CloudFounders
vRUN
Converged infrastructure that
offers the benefits of the
hyperconverged approach while
allowing independent compute
and storage scaling.
Open vStorage
Core Storage Technology
FlexCloud
A hosted private cloud based
on the vRUN technology,
available at multiple data
centers worldwide.
A product by CloudFounders
3. 2 Types of Storage
Block Storage:
• EMC, Netapp, ...
• Virtual Machines
• High performance, low latency
• Small capacity, typically fixed size
• Expensive
• Zero-copy snapshots, linked clones
• $/IOPS
Object Storage:
• Swift, Cleversafe, ...
• Unstructured data
• Low performance, high latency
• Large capacity, scalable
• Inexpensive, commodity hardware
• No high-end data management features
• $/GB
What is needed is a technology which offers Virtual Machines the
performance and high-end features of a SAN but also the benefits
of the low cost and scale-out capabilities of object storage!
4. What is Open vStorage
Open vStorage is an open-source software solution which creates a VM-centric, reliable,
scale-out and high-performance storage layer for OpenStack Virtual Machines on top of
object storage or a pool of Kinetic drives.
6. The architecture
[Diagram: several scale-out OpenStack hosts, each running VMs with Open vStorage on local SSDs, sharing a Unified Namespace on top of S3-compatible Object Storage or a pool of Kinetic drives]
Tier 1 - Location Based
• Read/Write cache on SSD
• Block based storage
• Thin provisioning
• VM Centric
• Distributed Transaction Log
Tier 2 - Time Based
• Zero Copy Snapshot
• Zero Copy Cloning
• Continuous data protection
• Redundant storage
• Scale-out
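The two tiers above can be sketched as a write-back cache in front of an append-only log. The following is a minimal, purely illustrative Python model (all class and method names are hypothetical, not Open vStorage code): writes land in the Tier 1 SSD cache and are journaled in a transaction log, which is later flushed to the time-based Tier 2 backend.

```python
BLOCK = 4096  # Open vStorage works with 4k blocks

class TwoTierVolume:
    """Location-based Tier 1 (SSD cache + transaction log) in front of
    a time-based Tier 2 (an append-only log, the shape an object store
    backend naturally holds)."""
    def __init__(self):
        self.cache = {}      # Tier 1: lba -> data (the SSD read/write cache)
        self.tlog = []       # Tier 1: journal of writes not yet on Tier 2
        self.backend = []    # Tier 2: append-only (lba, data) log

    def write(self, lba, data):
        assert len(data) == BLOCK
        self.cache[lba] = data
        self.tlog.append((lba, data))    # every write is journaled

    def flush(self):
        """Push the transaction log to the Tier 2 backend."""
        self.backend.extend(self.tlog)
        self.tlog = []

    def read(self, lba):
        if lba in self.cache:            # fast path: served from SSD
            return self.cache[lba]
        latest = None                    # slow path: replay the backend log
        for entry_lba, data in self.backend:
            if entry_lba == lba:
                latest = data
        return latest
```

Because Tier 2 only ever appends, the backend never rewrites data in place, which is what makes features like continuous data protection cheap on an object store.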
7. Optimized storage architecture
Powered by Memory & SSD:
• Deduplicated read cache: more effective use of Tier 1 storage
• Zero-copy cloning with linked clones
• Thin provisioning
• Storage maintenance tasks offloaded to Tier 2
8. Unlimited scalability
• Grow storage performance by adding more SSDs
• Grow storage capacity by adding more disks
• Asymmetric scalability of CPU and storage
• No bottlenecks
• No dual controllers
9. Hyper Reliability
• More reliable than RAID 5
• Supports Live Migration
• Zero-shared architecture
• Synchronized Distributed Transaction Log
• Unlimited snapshots, longer retention
10. Changes in Open vStorage 2 (End Q1 2015)
• Improved performance (x3) by tight integration with QEMU
– 50-70k IOPS per host
– Removes the NFS & FUSE performance loss
• Improved hardening against failure
– Seamless volume migration (no metadata rebuild)
– Limited impact of SSD failure
• Support for Seagate Kinetic drives as storage backend
– Encryption, compression, forward error correction
– Manage a pool of Kinetic drives as Tier2 storage
• Focus on OpenStack/KVM
11. Deduplicated Clustered Tier One (A pool of Flash)
Further down the road ...
• Distributed Clustered Tier One
– Uses SSDs across the env. as 1 big shared, deduplicated Tier 1 read cache.
– Speed comparable with an All-Flash array: almost all VM I/O will be from flash.
– Scale storage performance by adding more SSDs.
– Limits impact of an SSD failure. Hot cache in case of Live Migration.
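The deduplicated read cache works on content-addressed 4k blocks: a block is keyed by its hash, so identical blocks across VMs occupy the cache only once. A minimal sketch (the class and its methods are hypothetical, for illustration only):

```python
import hashlib

BLOCK = 4096

class DedupReadCache:
    """Content-addressed read cache: each unique 4k block is stored once,
    keyed by its hash; per-volume maps point block addresses at hashes."""
    def __init__(self):
        self.store = {}     # hash -> 4k block, shared across all volumes
        self.volumes = {}   # volume -> {lba: hash}

    def put(self, volume, lba, data):
        assert len(data) == BLOCK
        h = hashlib.sha1(data).digest()
        self.store[h] = data                        # stored at most once
        self.volumes.setdefault(volume, {})[lba] = h

    def get(self, volume, lba):
        h = self.volumes.get(volume, {}).get(lba)
        return self.store.get(h)

    def unique_blocks(self):
        return len(self.store)
```

When many VMs boot from linked clones of the same image, most of their blocks hash to the same entries, which is why one shared pool of flash can behave like an all-flash array.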
[Diagram: scale-out OpenStack hosts whose SSDs form one shared, deduplicated Tier 1 cache; each 4k block is stored once and addressed by its hash]
13. 2 Types of Storage (repeats slide 3)
14. OpenStack Swift: some highlights
• Designed to store unstructured data in a cost-effective way
– Use low-cost, large-capacity SATA disks
– Increase capacity by adding more disks/servers when needed
– Increase performance by adding spindles/proxies
• High reliability by distributing content across disks
– 3-way replication
– Erasure coding (on the roadmap)
• Easy to manage (no knowledge needed about RAID or volumes)
[Diagram: Swift proxies in front of multiple storage nodes]
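Swift's replica placement is driven by a consistent-hashing ring. The function below is a deliberately simplified stand-in for that mechanism (it is not Swift's actual ring code): it hashes the object name and walks the node list to pick three distinct storage nodes.

```python
import hashlib

def place_replicas(obj_name, nodes, replicas=3):
    """Pick `replicas` distinct nodes for an object by hashing its name;
    a simplified stand-in for Swift's consistent-hashing ring."""
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    start = h % len(nodes)
    # walk the node list from the hashed position, wrapping around
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]
```

The same object name always maps to the same three nodes, so any proxy can locate all replicas without a central lookup service.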
15. Cinder: some highlights
• Cinder provides an infrastructure/API for managing volumes on OpenStack.
– Volume create, delete, list, show, attach, detach, extend
– Snapshot create, delete, list, show
– Backups create, restore, delete, list, show
– Manage volume types, quotas
– Migration
• By default Cinder uses local disks, but plugins allow additional storage solutions to be used:
– External appliances: EMC, Netapp, SolidFire
– Software solutions: GlusterFS, Ceph, …
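The volume lifecycle Cinder's API exposes can be modeled as a minimal in-memory sketch. This toy class is purely illustrative (it uses none of the real Cinder API) and mirrors the operations listed above: create, delete, list, snapshot, and extend.

```python
class TinyVolumeService:
    """Toy model of the Cinder operations listed above; the names here
    are hypothetical and only mirror the API's verbs."""
    def __init__(self):
        self.volumes = {}    # name -> {"size": GB, "snapshots": [...]}

    def create(self, name, size_gb):
        self.volumes[name] = {"size": size_gb, "snapshots": []}

    def extend(self, name, new_size_gb):
        # like Cinder, only growing a volume is allowed
        if new_size_gb < self.volumes[name]["size"]:
            raise ValueError("volumes can only be extended, not shrunk")
        self.volumes[name]["size"] = new_size_gb

    def snapshot(self, name, snap_name):
        self.volumes[name]["snapshots"].append(snap_name)

    def list(self):
        return sorted(self.volumes)

    def delete(self, name):
        del self.volumes[name]
```

A storage backend plugs into OpenStack by implementing these verbs against its own data path, which is exactly what the Open vStorage Cinder plugin described later does.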
16. Cinder with local disks has some problems ...
[Diagram: Cinder exports local disks to Nova over iSCSI: a management nightmare!]
17. A traditional OpenStack setup
[Diagram: Nova provisions VMs; Cinder provides volumes for the VMs and stores backups in Swift; Glance stores images in Swift and provides images for the VMs; a SAN/NAS provides disk space to Cinder. 2 storage platforms?!]
18. “Swift under Cinder”?
• Eventual consistency (the CAP Theorem)
• Latency & performance
– VMs require low latency and high performance
– Object stores are designed to hold lots of data (large disks, low performance)
– Additional latency, as the object store sits on the local LAN instead of being attached to the host like DAS
• Different management paradigms
– Object stores understand objects, whereas hypervisors understand blocks and files
19. Open vStorage & OpenStack
[Diagram: as in slide 17, Nova provisions VMs, Cinder provides volumes, and Glance stores images in Swift; but Open vStorage now provides the disk space, converting the Swift Object Storage into Block Storage]
21. Get the software
• Open vStorage as open-source software is released under the Apache License, Version 2.0
– Free to use, even in commercial products
– Open and free community help forum: https://groups.google.com/forum/?hl=en#!forum/open-vstorage
– You can contribute: https://bitbucket.org/openvstorage/
• Actively building a community
– Port of Open vStorage to CentOS
– Bug reporting and fixing
– Provide POC for new features
– ...
22. Open vStorage Based Storage Solution
• To be released end Q1 2015
– Storage appliance + Open vStorage storage software
– Starter package: 48TB storage + license for Open vStorage (no restriction on amount of RAM, CPU and SSDs)
– Supported Open vStorage version
– Monitoring, support and maintenance included
– Low-cost pricing
– ...
23. Open vStorage Based Converged Solution
• To be released end Q1 2015
– Converged OpenStack solution based on Kinetic drives
– Starter package: 4 compute nodes, 48TB storage
– Supported Open vStorage version
– Supported OpenStack version
– Monitoring, support and maintenance included
– Low cost:
• 50% lower than EVO:RAIL
• 50% lower than Nutanix
25. Open vStorage Summary
• 50,000+ IOPS per hypervisor
• Made for OpenStack Virtual Machines
• Unified Namespace
• Ultra-reliable
• Unlimited snapshots
• Endless scalability for both capacity and storage performance
• Lowest management cost in the market
[Diagram: hypervisors each running Open vStorage on top of S3-compatible Object Based Storage or a pool of Kinetic drives]
30. Live Motion – In depth (Phase 2)
[Diagram: three KVM hosts (KVM1-3), each running a VSA with a virtual file system (VFS1-3) containing a File Driver and a Volume Driver per vDisk; Arakoon holds config params and metadata; vDisks map to an internal bucket; during Live Motion the Object Router of the source host hands the volume over to the Object Router of the target host]
31. How does Open vStorage solve the problem
• Open vStorage is a middleware layer in between the hypervisor and the object store.
(Converts object storage into block storage)
– On the host: location based storage (block storage).
– On the backend: time based storage (ideal for object stores).
– Open vStorage turns a volume into a single bucket.
• OpenStack Cinder Plugin for easy integration (snapshots, ...).
• Distributed file systems don’t work! Open vStorage is not a distributed file system!
– All hosts ‘think’ they see the same virtual file systems.
– Volume is ‘live’ on 1 host instead of all hosts.
– Only the virtual file system metadata is distributed.
• Caching inside the host fixes the impedance mismatch between the slow, high-latency backend
and the fast, low-latency requirements of Virtual Machines.
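The time-based backend is also what makes the zero-copy snapshots from earlier slides possible: since the backend is an append-only write log, a snapshot is nothing more than a marker into that log. A minimal illustrative sketch (class and method names are hypothetical):

```python
class TimeBasedVolume:
    """Append-only write log; a snapshot is just the current log length,
    so taking one copies no data (hence 'zero-copy')."""
    def __init__(self):
        self.log = []        # sequence of (lba, data) writes, in order
        self.snaps = {}      # snapshot name -> position in the log

    def write(self, lba, data):
        self.log.append((lba, data))

    def snapshot(self, name):
        self.snaps[name] = len(self.log)   # O(1), nothing is copied

    def read(self, lba, snap=None):
        """Read the newest version of a block, optionally as it was
        at a given snapshot."""
        end = self.snaps[snap] if snap else len(self.log)
        latest = None
        for entry_lba, data in self.log[:end]:
            if entry_lba == lba:
                latest = data
        return latest
```

Reading "as of" any snapshot simply ignores log entries past the marker, which is also why unlimited snapshots and long retention come essentially for free on this design.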