At Red Hat Storage Day New York on 1/19/16, Red Hat's Sayan Saha took attendees through an overview of Red Hat Gluster Storage that included future plans for the product, Red Hat's plans for container storage, and the company's plans for CephFS.
1. Red Hat Gluster Storage, Container Storage & CephFS Plans
Presenter: Sayan Saha
Date: 19-Jan-2016
2. Presentation Outline
▪RHGS recap, intro and state of the union
▪Looking forward (Core RHGS, 6 – 8 months)
▪Persistent Storage for Containers (RHxS)
▪Hyper-Converged solution with RHEV
▪Public Cloud (AWS, Azure)
▪Red Hat Storage Life Cycle Manager
▪CephFS Plans
▪Q&A
4. The Past: Quick Recap
Red Hat Gluster Storage 2.0
• VM image store, performance & stability
• 6 updates released
• End of Life June 2014
Red Hat Gluster Storage 2.1
• Quota, Geo-Replication, UI, SMB 2.0
• 6 updates released
• End of Life Oct 2015
Red Hat Gluster Storage 3+
• Snapshots, Nagios, SNMP, Rolling Upgrade, RDMA, 3-way replication, small file performance enhancements, SELinux, SSL encryption
5. The Present: RHGS 3.1 – Shipping Now
Red Hat Confidential
The big 3 (fully supported)…
● Erasure Coding
○ Data protection without using RAID & replication
● Bit Rot Detection
○ Protection against “Silent Data Corruption”
● Active-Active NFSv4
○ Secure, scalable, performant, table stakes
Robust Enterprise Class SDS Capabilities
6. RHGS - Erasure Coding
▪ Data protection without using RAID & replication
▪ Break data into smaller fragments; store and recover from a smaller number of fragments
▪ New volume types: Dispersed, Distributed-Dispersed
▪ Algorithm used is Reed-Solomon
▪ Initial tested configurations: 8+3, 8+4 & 4+2
= Lower capacity overhead than replication
= Faster rebuild times
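For a k+m dispersed volume, k/(k+m) of the raw capacity is usable and the volume survives up to m simultaneous brick failures. A minimal sketch of the arithmetic for the tested configurations (the `gluster volume create` line is illustrative only; server and brick names are hypothetical):

```shell
#!/bin/sh
# Illustrative only -- creating an 8+3 dispersed volume needs a live
# trusted pool with 11 bricks, along the lines of:
#   gluster volume create ecvol disperse-data 8 redundancy 3 \
#       server{1..11}:/bricks/ec
# Usable fraction of raw capacity for a k+m dispersed volume is k/(k+m):
for cfg in "8 3" "8 4" "4 2"; do
  set -- $cfg
  awk -v k="$1" -v m="$2" 'BEGIN {
    printf "%d+%d: %.0f%% of raw capacity usable, tolerates %d brick failures\n",
           k, m, 100 * k / (k + m), m }'
done
```

Compare with 3-way replication, where only 33% of raw capacity is usable; that gap is the capacity saving the slide refers to.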
7. RHGS - Bit Rot Detection
▪ Protection against “Silent Data Corruption”
▪ Two fundamental procedures
– Signing using SHA256
– Scanning/scrubbing for rot
▪ Lazy checksum maintenance (not inline to the data path)
▪ Checksum calculation undertaken when a file is considered “stable”
▪ Bit-rot scanning mode is admin-selectable to control performance impact
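The sign-then-scrub principle can be sketched with plain `sha256sum`: checksum once the file is stable, then recompute later and flag any mismatch. The `gluster volume bitrot` lines are illustrative and assume a hypothetical volume name:

```shell
#!/bin/sh
# Illustrative only -- on a live volume, signing and scrubbing are enabled with:
#   gluster volume bitrot myvol enable
#   gluster volume bitrot myvol scrub-throttle lazy   # admin-selectable impact
# The same sign-then-verify principle, using sha256sum:
f=$(mktemp)
printf 'stable file contents' > "$f"
sha256sum "$f" > "$f.sig"              # sign: checksum stored once file is stable
printf 'stable file contentX' > "$f"   # simulate silent on-disk corruption
sha256sum --check --status "$f.sig" || echo "bit rot detected"
rm -f "$f" "$f.sig"
```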
8. RHGS - Multi-headed NFSv4 (Active-Active)
▪ NFSv4 ACLs
▪ Security
– Kerberos authentication using RPCSEC_GSS, krb5/i/p, spkm3
▪ Active/Active cluster-on-cluster
– Up to 16 A/A NFS heads
▪ RHGS pool scales out as usual
▪ Ability to add & delete RHGS volume exports to nfs-ganesha at run-time
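The run-time export toggle mentioned above looks roughly like this, assuming the RHGS 3.1 CLI, an already-configured Ganesha HA cluster, and a placeholder volume name `myvol`:

```shell
# Sketch, assuming RHGS 3.1 syntax; "myvol" is a placeholder volume name.
gluster nfs-ganesha enable                  # bring up the NFS-Ganesha cluster once
gluster volume set myvol ganesha.enable on  # export the volume at run-time
gluster volume set myvol ganesha.enable off # unexport, no cluster restart needed
```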
9. RHGS – Tiering (tech preview)
▪ Automated data movement between hot & cold tiers
▪ Movement based on access frequency
– Hot tiers could be SSDs, cold tiers are normal disks
▪ Attach & detach a tier to and from an existing Gluster volume
– Initial I/Os forwarded to the hot tier
– I/O misses promote data to the hot tier
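Attach and detach look roughly like this in the GlusterFS CLI of that era; volume and brick names are placeholders, and the exact syntax has varied across releases:

```shell
# Sketch: attach an SSD-backed hot tier to an existing volume, then
# detach it later. Names are placeholders; syntax varies by release.
gluster volume tier coldvol attach replica 2 \
    ssd1:/bricks/hot ssd2:/bricks/hot       # hot tier in front of coldvol
gluster volume tier coldvol detach start    # begin draining the hot tier
gluster volume tier coldvol detach commit   # remove it once drained
```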
11. STORAGE TRENDS:
MODERN IT INFRASTRUCTURES
Traditional Storage → Next Generation Storage
▪ Manual provisioning of LUNs and volumes with some degree of automation → Self-service provisioning by lines of business and application developers
▪ Static selection of storage platforms based on application needs → Catalog-based storage service offerings with metering & charge-back
▪ Scale-up with some scale-out; costly migrations → Expand, shrink, and scale on demand; easier upgrades
▪ Little to no flexibility in selecting the optimum storage back-end for workloads → Policy-based storage back-end selection
12. Key Trends’ Impact on Gluster
▪Consumption Model
– API-based dynamic provisioning, healing, tuning & balancing
– Secure multi-tenancy
– Cloud scale & stability at scale
▪Performance: performant storage back-end for a wide variety of workloads
▪Advanced data services: tiering, compression, de-duplication
13. RHGS - Core Roadmap
3.1.2 (Feb ’16)
Core
- Tiering full support
- Writable Snapshots
- SMB perf enhancements
- Bit Rot Status
Mgmt
- Dynamic volume allocation (preview)
- RHGS-C offline install
- Docker image support
3.1.3 (April/May ’16)
Core
- Sharded volume support for VM image store, including hyper-convergence
- Arbiter quorum
- Refresh Swift APIs to OpenStack Kilo
Protocols
- SMB 3.0 multi-channel support
Perf
- Multi-threaded self-heal
3.2 (Q4 CY 2016)
Core
- Subdirectory-level exports with FUSE (multi-tenancy)
Protocols
- NFSv4 delegations
Mgmt
- REST APIs
- New Storage Life Cycle Manager
14. Roadmap Feature Details
3.1.2
▪ Tiering: HSM-like tiered volumes; promote/demote based on access frequency
▪ Writable Snapshots: Create a share from a snapshot
▪ Dynamic Volume Provisioning: Allocate Gluster volumes programmatically and dynamically
▪ SMB performance enhancements: Async I/O and other enhancements
▪ Bit Rot Status: Interactive CLI listing impacted files
▪ RHGS-C offline install: Provide OVA image
▪ Docker Image Support: Official Docker image in Red Hat’s Container Registry
3.1.3
▪ Sharding: Sharded-replicated volumes for hyper-converged VM storage
▪ Arbiter Quorum: Reliable quorum without the 3-way replication penalty
▪ SMB 3.0 multi-channel: Network fault tolerance & performance
16. RHGS + RHCS & Containers - Phase 1
▪ Persistent data store for containers in independent compute and storage clusters
▪ Fully support RHGS & RHCS as a storage backend for OSE & AEP
▪ Available Now!
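In OpenShift/Kubernetes terms, phase 1 amounts to pointing a PersistentVolume at an existing Gluster volume. A minimal sketch, assuming a volume named `myvol` and an endpoints object `glusterfs-cluster` listing the RHGS nodes (both names are placeholders):

```shell
# Sketch: register an existing RHGS volume as a Kubernetes PersistentVolume.
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster   # endpoints listing the Gluster servers
    path: myvol                    # name of the RHGS volume to mount
EOF
```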
17. RHGS & Containers - Phase 2
▪ Run containerized storage + containerized compute hyper-converged on the same set of hosts
▪ Use Kubernetes to provide a unified control plane for compute and storage
▪ Availability with OSE 3.2, 3.3
▪ Rinse & repeat with RHCS
20. RHGS - Hyper-Converged ROBO Solution
▪ A planned low-footprint storage/compute offering, currently under development, integrating RHEL, RHGS, & RHEV-M
▪ Simplified acquisition, deployment, and management
▪ Support planned for a wide range of workloads
▪ Currently in customer pilot programs
▪ Post RHEV 3.6 GA
(Diagram label: Remote Replication)
22. Public Cloud Availability & Plans
▪RHGS available via Cloud Access in AWS & Azure
– BYOS model. Fully supported by RHT.
– Allows customers to use RHGS in the Cloud (existing use-case or net new)
– Lift & Shift existing applications to Cloud without re-writing the app (POSIX compatible file store)
▪Exploring Cloud marketplaces and on-demand pricing for cloud consumers
24. Life Cycle Manager for Red Hat Storage
Deploy, Config, Manage, Monitor
– Consistent UX across the storage portfolio
– Streamlined workflows
– Integration with / leverage of other RHT tools & projects
– Integration with managers of managers like RHT CloudForms
26. Product Profile...
▪ An independent product with its own maintenance stream and support lifecycle (3 yr)
– Distinctly versioned from the storage products it manages
▪ Included with existing SKUs (entitlements)
– Existing SKUs of the storage providers will be modified to include it
▪ Conceptually a “layered product” on top of the storage providers
▪ OS support: RHEL 7 mgmt station, Ubuntu Ceph nodes (agents)
28. Use Case: OpenStack Manila Backend (Phase 1)
▪CephFS will be tech preview in RHCS 2.0; the FUSE client will ship then
▪Kernel client in RHEL 7.3
▪Ship native driver with RHELOSP 9 for Manila
▪Tenant VMs directly access the storage network
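For reference, the two client paths mentioned above look roughly like this; monitor address, credentials, and mount point are placeholders:

```shell
# Sketch: mounting CephFS with each client. Names and paths are placeholders.
ceph-fuse -m mon1.example.com:6789 /mnt/cephfs    # FUSE client (RHCS 2.0)
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret   # kernel client (RHEL 7.3)
```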
29. Secure Plumbing (Phase 2)
▪VMware vsockets designed for VM-to-host communication
▪Mount CephFS on the host + kNFS, or run NFS-Ganesha
▪Export to the VM’s VSOCK address
▪Tenant does not access the storage net