1
CEPH DATA SERVICES
IN A MULTI- AND HYBRID CLOUD WORLD
Sage Weil - Red Hat
OpenStack Summit - 2018.11.15
2
● Ceph
● Data services
● Block
● File
● Object
● Edge
● Future
OUTLINE
3
UNIFIED STORAGE PLATFORM
RGW
S3 and Swift
object storage
LIBRADOS
Low-level storage API
RADOS
Reliable, elastic, highly-available distributed storage layer with
replication and erasure coding
RBD
Virtual block device
with robust feature set
CEPHFS
Distributed network
file system
OBJECT BLOCK FILE
4
RELEASE SCHEDULE
12.2.z
13.2.z
Luminous
Aug 2017
Mimic
May 2018
WE ARE
HERE
● Stable, named release every 9 months
● Backports for 2 releases
● Upgrade up to 2 releases at a time
● (e.g., Luminous → Nautilus, Mimic → Octopus)
14.2.z
Nautilus
Feb 2019
15.2.z
Octopus
Nov 2019
5
FOUR CEPH PRIORITIES
Usability and management
Performance
Container platforms
Multi- and hybrid cloud
6
MOTIVATION - DATA SERVICES
7
A CLOUDY FUTURE
● IT organizations today
○ Multiple private data centers
○ Multiple public cloud services
● It’s getting cloudier
○ “On premise” → private cloud
○ Self-service IT resources, provisioned on demand by developers and business units
● Next generation of cloud-native applications will span clouds
● “Stateless microservices” are great, but real applications have state.
8
● Data placement and portability
○ Where should I store this data?
○ How can I move this data set to a new tier or new site?
○ Seamlessly, without interrupting applications?
● Introspection
○ What data am I storing? For whom? Where? For how long?
○ Search, metrics, insights
● Policy-driven data management
○ Lifecycle management
○ Conformance: constrain placement, retention, etc. (e.g., HIPAA, GDPR)
○ Optimize placement based on cost or performance
○ Automation
DATA SERVICES
9
● Data sets are tied to applications
○ When the data moves, the application often should (or must) move too
● Container platforms are key
○ Automated application (re)provisioning
○ “Operators” to manage coordinated migration of state and applications that consume it
MORE THAN JUST DATA
10
● Multi-tier
○ Different storage for different data
● Mobility
○ Move an application and its data between sites with minimal (or no) availability interruption
○ Maybe an entire site, but usually a small piece of a site
● Disaster recovery
○ Tolerate a site-wide failure; reinstantiate data and app in a new site quickly
○ Point-in-time consistency with bounded latency (bounded data loss)
● Stretch
○ Tolerate site outage without compromising data availability
○ Synchronous replication (no data loss) or async replication (different consistency model)
● Edge
○ Small (e.g., telco POP) and/or semi-connected sites (e.g., autonomous vehicle)
DATA USE SCENARIOS
11
BLOCK STORAGE
12
HOW WE USE BLOCK
● Virtual disk device
● Exclusive access by nature (with few exceptions)
● Strong consistency required
● Performance sensitive
● Basic feature set
○ Read, write, flush, maybe resize
○ Snapshots (read-only) or clones (read/write)
■ Point-in-time consistent
● Often self-service provisioning
○ via Cinder in OpenStack
○ via Persistent Volume (PV) abstraction in Kubernetes
Block device
XFS, ext4, whatever
Applications
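As a rough illustration of this basic feature set, here is a minimal sketch using the python-rados and python-rbd bindings that ship with Ceph; the pool name, image name, size, and snapshot name are placeholders, not anything prescribed by the slides.

```python
# Minimal sketch: provision an RBD image, do basic I/O, and take a
# point-in-time snapshot via the python-rados / python-rbd bindings.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')                   # RADOS pool backing the image

rbd.RBD().create(ioctx, 'app-disk', 10 * 1024**3)   # 10 GiB virtual disk

with rbd.Image(ioctx, 'app-disk') as image:
    image.write(b'hello', 0)                        # write at offset 0
    image.create_snap('before-upgrade')             # read-only, point-in-time snapshot

ioctx.close()
cluster.shutdown()
```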
13
RBD - TIERING WITH RADOS POOLS
CEPH STORAGE CLUSTER
SSD 2x POOL HDD 3x POOL SSD EC 6+3 POOL
FS  FS
KRBD librbd
✓ Multi-tier
❏ Mobility
❏ DR
❏ Stretch
❏ Edge
KVM
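A hedged sketch of how tiers like these can be carved out as separate RADOS pools by CRUSH device class, driving the ceph CLI from Python. Rule, profile, and pool names and PG counts are illustrative; note that RBD consumes an EC pool as a data pool, with overwrites enabled.

```python
# Sketch: replicated SSD (2x), replicated HDD (3x), and EC 6+3 SSD pools.
import subprocess

def ceph(*args):
    subprocess.run(['ceph', *args], check=True)

# Replicated tiers, split by device class
ceph('osd', 'crush', 'rule', 'create-replicated', 'rule-ssd', 'default', 'host', 'ssd')
ceph('osd', 'crush', 'rule', 'create-replicated', 'rule-hdd', 'default', 'host', 'hdd')
ceph('osd', 'pool', 'create', 'rbd-ssd-2x', '64', '64', 'replicated', 'rule-ssd')
ceph('osd', 'pool', 'set', 'rbd-ssd-2x', 'size', '2')
ceph('osd', 'pool', 'create', 'rbd-hdd-3x', '64', '64', 'replicated', 'rule-hdd')

# Erasure-coded 6+3 SSD tier; RBD needs overwrites enabled and uses this
# as a --data-pool for image data.
ceph('osd', 'erasure-code-profile', 'set', 'ec63', 'k=6', 'm=3', 'crush-device-class=ssd')
ceph('osd', 'pool', 'create', 'rbd-ssd-ec63', '64', '64', 'erasure', 'ec63')
ceph('osd', 'pool', 'set', 'rbd-ssd-ec63', 'allow_ec_overwrites', 'true')

for pool in ('rbd-ssd-2x', 'rbd-hdd-3x', 'rbd-ssd-ec63'):
    ceph('osd', 'pool', 'application', 'enable', pool, 'rbd')
```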
14
RBD - LIVE IMAGE MIGRATION
CEPH STORAGE CLUSTER
SSD 2x POOL HDD 3x POOL SSD EC 6+3 POOL
FS  FS
KRBD librbd
✓ Multi-tier
✓ Mobility
❏ DR
❏ Stretch
❏ Edge
KVM
● New in Nautilus
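A hedged sketch of the Nautilus live-migration workflow via the rbd CLI: prepare links a target image to the source, execute copies blocks in the background, and commit removes the source. Pool and image names are placeholders.

```python
# Sketch: move an image from the HDD pool to the SSD pool.
import subprocess

def rbd(*args):
    subprocess.run(['rbd', *args], check=True)

# The image is briefly closed for "prepare"; clients then reopen the target
# image and keep running while "execute" copies data in the background.
rbd('migration', 'prepare', 'rbd-hdd-3x/app-disk', 'rbd-ssd-2x/app-disk')
rbd('migration', 'execute', 'rbd-ssd-2x/app-disk')   # background block copy
rbd('migration', 'commit',  'rbd-ssd-2x/app-disk')   # finalize, drop source
```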
15
SITE A  SITE B
RBD - STRETCH
STRETCH CEPH STORAGE CLUSTER
STRETCH POOL
FS
KRBD
WAN link
❏ Multi-tier
❏ Mobility
✓ DR
✓ Stretch
❏ Edge
● Apps can move
● Data can’t - it’s everywhere
● Performance is compromised
○ Need fat and low latency pipes
16
SITE A  SITE B
RBD - STRETCH WITH TIERS
STRETCH CEPH STORAGE CLUSTER
STRETCH POOL
FS
KRBD
WAN link
✓ Multi-tier
❏ Mobility
✓ DR
✓ Stretch
❏ Edge
● Create site-local pools for performance
sensitive apps
A POOL B POOL
17
SITE A  SITE B
RBD - STRETCH WITH MIGRATION
STRETCH CEPH STORAGE CLUSTER
STRETCH POOL
WAN link
✓ Multi-tier
✓ Mobility
✓ DR
✓ Stretch
❏ Edge
● Live migrate images between pools
● Maybe even live migrate your app VM?
A POOL B POOL
FS
KRBD
18
● Network latency is critical
○ Low latency for performance
○ Requires nearby sites, limiting usefulness
● Bandwidth too
○ Must be able to sustain rebuild data rates
● Relatively inflexible
○ Single cluster spans all locations
○ Cannot “join” existing clusters
● High level of coupling
○ Single (software) failure domain for all
sites
STRETCH IS SKETCH
19
RBD ASYNC MIRRORING
CEPH CLUSTER A
SSD 3x POOL
PRIMARY
CEPH CLUSTER B
HDD 3x POOL
BACKUP
WAN link
Asynchronous mirroring
FS
librbd
● Asynchronously mirror writes
● Small performance overhead at primary
○ Mitigate with SSD pool for RBD journal
● Configurable time delay for backup
KVM
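A hedged sketch of enabling journal-based RBD mirroring via the rbd CLI: the pool is mirrored in per-image mode, journaling is enabled on the image, and the backup cluster registers the primary as a peer (an rbd-mirror daemon at the backup site does the replay). Cluster names, pool, and image names are illustrative and assume matching conf/keyring files per cluster.

```python
# Sketch: one-way mirroring of rbd/app-disk from cluster "site-a" to "site-b".
import subprocess

def rbd(*args, cluster='ceph'):
    subprocess.run(['rbd', '--cluster', cluster, *args], check=True)

# On cluster A (primary): mirror selected images in the pool
rbd('mirror', 'pool', 'enable', 'rbd', 'image', cluster='site-a')
rbd('feature', 'enable', 'rbd/app-disk', 'journaling', cluster='site-a')
rbd('mirror', 'image', 'enable', 'rbd/app-disk', cluster='site-a')

# On cluster B (backup): enable mirroring and point at cluster A as the peer
rbd('mirror', 'pool', 'enable', 'rbd', 'image', cluster='site-b')
rbd('mirror', 'pool', 'peer', 'add', 'rbd', 'client.admin@site-a', cluster='site-b')
```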
20
● On primary failure
○ Backup is point-in-time consistent
○ Lose only last few seconds of writes
○ VM can restart in new site
● If primary recovers,
○ Option to resync and “fail back”
RBD ASYNC MIRRORING
CEPH CLUSTER A
SSD 3x POOL
CEPH CLUSTER B
HDD 3x POOL
WAN link
Asynchronous mirroring
FS
librbd
DIVERGENT PRIMARY
❏ Multi-tier
❏ Mobility
✓ DR
❏ Stretch
❏ Edge
KVM
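A hedged sketch of the failover and fail-back steps with the rbd CLI; cluster, pool, and image names are the same illustrative ones used above.

```python
# Sketch: failover to the backup, then resync the divergent primary.
import subprocess

def rbd(*args, cluster='ceph'):
    subprocess.run(['rbd', '--cluster', cluster, *args], check=True)

# Site A is down: force-promote the backup copy so the VM can restart at B
rbd('mirror', 'image', 'promote', 'rbd/app-disk', '--force', cluster='site-b')

# Later, if site A recovers with a divergent primary: demote it and resync
# from B; optionally demote B and promote A afterwards to "fail back".
rbd('mirror', 'image', 'demote', 'rbd/app-disk', cluster='site-a')
rbd('mirror', 'image', 'resync', 'rbd/app-disk', cluster='site-a')
```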
21
● Ocata
○ Cinder RBD replication driver
● Queens
○ ceph-ansible deployment of rbd-mirror via
TripleO
● Rocky
○ Failover and fail-back operations
● Gaps
○ Deployment and configuration tooling
○ Cannot replicate multi-attach volumes
○ Nova attachments are lost on failover
RBD MIRRORING IN CINDER
22
● Hard for IaaS layer to reprovision app in new site
● Storage layer can’t solve it on its own either
● Need automated, declarative, structured specification for entire app stack...
MISSING LINK: APPLICATION ORCHESTRATION
23
FILE STORAGE
24
● Stable since Kraken
● Multi-MDS stable since Luminous
● Snapshots stable since Mimic
● Support for multiple RADOS data pools
● Provisioning via OpenStack Manila and Kubernetes
● Fully awesome
CEPHFS STATUS
✓ Multi-tier
❏ Mobility
❏ DR
❏ Stretch
❏ Edge
25
● We can stretch CephFS just like RBD pools
● It has the same limitations as RBD
○ Latency → lower performance
○ Limited by geography
○ Big (software) failure domain
● Also,
○ MDS latency is critical for file workloads
○ ceph-mds daemons must be running in one site or another
● What can we do with CephFS across multiple clusters?
CEPHFS - STRETCH?
❏ Multi-tier
❏ Mobility
✓ DR
✓ Stretch
❏ Edge
26
● CephFS snapshots provide
○ point-in-time consistency
○ granularity (any directory in the system)
● CephFS rstats provide
○ rctime to efficiently find changes
● rsync provides
○ efficient file transfer
● Time bounds on order of minutes
● Gaps and TODO
○ “rstat flush” coming in Nautilus
■ Xuehan Xu @ Qihoo 360
○ rsync support for CephFS rstats
○ scripting / tooling
CEPHFS - SNAP MIRRORING
❏ Multi-tier
❏ Mobility
✓ DR
❏ Stretch
❏ Edge
time
1. A: create snap S1
2. rsync A→B
3. B: create snap S1
4. A: create snap S2
5. rsync A→B
6. B: create S2
7. A: create snap S3
8. rsync A→B
9. B: create S3
S1
S2
S3
S1
S2
SITE A SITE B
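A minimal sketch of the snapshot-and-rsync loop above, assuming CephFS is mounted at /mnt/cephfs on site A and the mirror target is reachable over SSH; paths, hostnames, and the interval are placeholders. CephFS snapshots are taken by creating a directory under the hidden .snap directory.

```python
# Sketch: periodically snapshot a directory and rsync the snapshot to site B.
import os
import subprocess
import time

SRC = '/mnt/cephfs/projects'         # directory to mirror (illustrative)
DST = 'site-b:/mnt/cephfs/projects'  # rsync target at site B (illustrative)

def mirror_once(seq):
    snap = f'S{seq}'
    os.mkdir(f'{SRC}/.snap/{snap}')                  # site A: create snap Sn
    subprocess.run(['rsync', '-a', '--delete',
                    f'{SRC}/.snap/{snap}/', f'{DST}/'], check=True)
    # Site B would then snapshot its copy (mkdir DST/.snap/Sn) to record the
    # same point-in-time image on the remote side.

seq = 1
while True:
    mirror_once(seq)
    seq += 1
    time.sleep(300)   # bounded staleness on the order of minutes
```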
27
● Yes.
● Sometimes.
● Some geo-replication DR features are built on rsync...
○ Consistent view of individual files,
○ Lack point-in-time consistency between files
● Some (many?) applications are not picky about cross-file consistency...
○ Content stores
○ Casual usage without multi-site modification of the same files
DO WE NEED POINT-IN-TIME FOR FILE?
28
● Many humans love Dropbox / NextCloud / etc.
○ Ad hoc replication of directories to any computer
○ Archive of past revisions of every file
○ Offline access to files is extremely convenient and fast
● Disconnected operation and asynchronous replication leads to conflicts
○ Usually a pop-up in GUI
● Automated conflict resolution is usually good enough
○ e.g., newest timestamp wins
○ Humans are happy if they can roll back to archived revisions when necessary
● A possible future direction:
○ Focus less on avoiding/preventing conflicts…
○ Focus instead on the ability to roll back to past revisions…
CASE IN POINT: HUMANS
29
● Do we need point-in-time consistency for file systems?
● Where does the consistency requirement come in?
BACK TO APPLICATIONS
30
MIGRATION: STOP, MOVE, START
time
● App runs in site A
● Stop app in site A
● Copy data A→B
● Start app in site B
● App maintains exclusive access
● Long service disruption
SITE A SITE B
❏ Multi-tier
✓ Mobility
❏ DR
❏ Stretch
❏ Edge
31
MIGRATION: PRESTAGING
time
● App runs in site A
● Copy most data from A→B
● Stop app in site A
● Copy last little bit A→B
● Start app in site B
● App maintains exclusive access
● Short availability blip
SITE A SITE B
32
MIGRATION: TEMPORARY ACTIVE/ACTIVE
● App runs in site A
● Copy most data from A→B
● Enable bidirectional replication
● Start app in site B
● Stop app in site A
● Disable replication
● No loss of availability
● Concurrent access to same data
SITE A SITE B
time
33
ACTIVE/ACTIVE
● App runs in site A
● Copy most data from A→B
● Enable bidirectional replication
● Start app in site B
● Highly available across two sites
● Concurrent access to same data
SITE A SITE B
time
34
● We don’t have general-purpose bidirectional file replication
● It is hard to resolve conflicts for any POSIX operation
○ Sites A and B both modify the same file
○ Site A renames /a → /b/a while Site B renames /b → /a/b
● But applications can only go active/active if they are cooperative
○ i.e., they carefully avoid such conflicts
○ e.g., mostly-static directory structure + last writer wins
● So we could do it if we simplify the data model...
● But wait, that sounds a bit like object storage...
BIDIRECTIONAL FILE REPLICATION?
35
OBJECT STORAGE
36
WHY IS OBJECT SO GREAT?
● Based on HTTP
○ Interoperates well with web caches, proxies, CDNs, ...
● Atomic object replacement
○ PUT on a large object atomically replaces prior version
○ Trivial conflict resolution (last writer wins)
○ Lack of overwrites makes erasure coding easy
● Flat namespace
○ No multi-step traversal to find your data
○ Easy to scale horizontally
● No rename
○ Vastly simplified implementation
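An illustrative boto3 snippet against an RGW S3 endpoint showing why this model is convenient: a whole-object PUT atomically replaces any prior version, which is what makes last-writer-wins replication tractable. Endpoint, credentials, bucket, and key are placeholders.

```python
# Sketch: atomic whole-object PUT and read-back against RGW's S3 API.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='cat-pictures')
s3.put_object(Bucket='cat-pictures', Key='fluffy.jpg', Body=b'...')

# Readers never see a partially written object: they get either the old
# version or the new one.
obj = s3.get_object(Bucket='cat-pictures', Key='fluffy.jpg')
print(obj['ContentLength'])
```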
37
● File is not going away, and will be critical
○ Half a century of legacy applications
○ It’s genuinely useful
● Block is not going away, and is also critical infrastructure
○ Well suited for exclusive-access storage users (boot devices, etc)
○ Performs better than file due to local consistency management, ordering, etc.
● Most new data will land in objects
○ Cat pictures, surveillance video, telemetry, medical imaging, genome data
○ Next generation of cloud native applications will be architected around object
THE FUTURE IS… OBJECTY
38
RGW FEDERATION MODEL
● Zone
○ Collection of RADOS pools storing data
○ Set of RGW daemons serving that content
● ZoneGroup
○ Collection of Zones with a replication relationship
○ Active/Passive[/…] or Active/Active
● Namespace
○ Independent naming for users and buckets
○ All ZoneGroups and Zones replicate user and
bucket index pool
○ One Zone serves as the leader to handle User and
Bucket creations/deletions
● Failover is driven externally
○ Human (?) operators decide when to write off a
master, resynchronize
Namespace
ZoneGroup
Zone
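A hedged sketch of defining this hierarchy with radosgw-admin; what this slide calls a "Namespace" is a "realm" in the CLI. Realm, zonegroup, zone names, and endpoints are illustrative, as are the secondary-site steps summarized in the trailing comment.

```python
# Sketch: create the master side of a federation (realm -> zonegroup -> zone).
import subprocess

def rgw_admin(*args):
    subprocess.run(['radosgw-admin', *args], check=True)

rgw_admin('realm', 'create', '--rgw-realm=gold', '--default')
rgw_admin('zonegroup', 'create', '--rgw-zonegroup=zg-x',
          '--endpoints=http://rgw-xa.example.com:8080', '--master', '--default')
rgw_admin('zone', 'create', '--rgw-zonegroup=zg-x', '--rgw-zone=zone-x-a',
          '--endpoints=http://rgw-xa.example.com:8080', '--master', '--default')
rgw_admin('period', 'update', '--commit')

# The secondary cluster would then pull the realm (radosgw-admin realm pull
# --url=... --access-key=... --secret=...), create zone-x-b in zg-x, and
# commit a new period; RGW daemons replicate data and metadata between zones.
```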
39
RGW FEDERATION TODAY
CEPH CLUSTER 1 CEPH CLUSTER 2 CEPH CLUSTER 3
RGW ZONE M
ZONEGROUP Y
RGW ZONE Y-A RGW ZONE Y-B
ZONEGROUP X
RGW ZONE X-A RGW ZONE X-B RGW ZONE N
❏ Multi-tier
❏ Mobility
✓ DR
✓ Stretch
❏ Edge
● Gap: granular, per-bucket management of replication
40
ACTIVE/ACTIVE FILE ON OBJECT
CEPH CLUSTER A
RGW ZONE A
CEPH CLUSTER B
RGW ZONE B
● Data in replicated object zones
○ Eventually consistent, last writer wins
● Applications access RGW via NFSv4
NFSv4 NFSv4
❏ Multi-tier
❏ Mobility
✓ DR
✓ Stretch
❏ Edge
41
● ElasticSearch
○ Index entire zone by object or user metadata
○ Query API
● Cloud sync (Mimic)
○ Replicate buckets to external object store (e.g., S3)
○ Can remap RGW buckets into multiple S3 buckets or a single S3 bucket
○ Remaps ACLs, etc
● Archive (Nautilus)
○ Replicate all writes in one zone to another zone, preserving all versions
● Pub/sub (Nautilus)
○ Subscribe to event notifications for actions like PUT
○ Integrates with knative serverless! (See Huamin and Yehuda’s talk at Kubecon next month)
OTHER RGW REPLICATION PLUGINS
42
PUBLIC CLOUD STORAGE IN THE MESH
CEPH ON-PREM CLUSTER
CEPH BEACHHEAD CLUSTER
RGW ZONE B RGW GATEWAY ZONE
CLOUD OBJECT STORE
● Mini Ceph cluster in cloud as gateway
○ Stores federation and replication state
○ Gateway for GETs and PUTs, or
○ Clients can access cloud object storage directly
● Today: replicate to cloud
● Future: replicate from cloud
43
Today: Intra-cluster
● Many RADOS pools for a single RGW zone
● Primary RADOS pool for object “heads”
○ Single (fast) pool to find object metadata
and location of the tail of the object
● Each tail can go in a different pool
○ Specify bucket policy with PUT
○ Per-bucket policy as default when not
specified
● Policy
○ Retention (auto-expire)
RGW TIERING
Nautilus
● Tier objects to an external store
○ Initially something like S3
○ Later: tape backup, other backends…
Later
● Encrypt data in external tier
● Compression
● (Maybe) cryptographically shard across
multiple backend tiers
● Policy for moving data between tiers
✓ Multi-tier
❏ Mobility
❏ DR
❏ Stretch
❏ Edge
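An illustrative boto3 lifecycle rule for the retention (auto-expire) policy mentioned above: objects under a prefix expire after 30 days. Endpoint, credentials, bucket, and prefix are placeholders, and which lifecycle and placement features are available depends on the RGW release.

```python
# Sketch: S3 lifecycle expiration on an RGW bucket.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.put_bucket_lifecycle_configuration(
    Bucket='telemetry',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-raw-telemetry',
            'Filter': {'Prefix': 'raw/'},
            'Status': 'Enabled',
            'Expiration': {'Days': 30},   # auto-expire after 30 days
        }]
    },
)
```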
44
Today
● RGW as gateway to a RADOS cluster
○ With some nifty geo-replication features
● RGW redirects clients to the correct zone
○ via HTTP Location: redirect
○ Dynamic DNS can provide right zone IPs
● RGW replicates at zone granularity
○ Well suited for disaster recovery
RGW - THE BIG PICTURE
Future
● RGW as a gateway to a mesh of sites
○ With great on-site performance
● RGW may redirect or proxy to right zone
○ Single point of access for application
○ Proxying enables coherent local caching
● RGW may replicate at bucket granularity
○ Individual applications set durability needs
○ Enable granular application mobility
45
CEPH AT THE EDGE
46
CEPH AT THE EDGE
● A few edge examples
○ Telco POPs: ¼ - ½ rack of OpenStack
○ Autonomous vehicles: cars or drones
○ Retail
○ Backpack infrastructure
● Scale down cluster size
○ Hyper-converge storage and compute
○ Nautilus brings better memory control
● Multi-architecture support
○ aarch64 (ARM) builds upstream
○ POWER builds at OSU / OSL
● Hands-off operation
○ Ongoing usability work
○ Operator-based provisioning (Rook)
● Possibly unreliable WAN links
[Diagram: Central Site running the control plane and converged compute/storage, connected to edge Site1, Site2, and Site3, each with compute nodes]
47
● Block: async mirror edge volumes to central site
○ For DR purposes
● Data producers
○ Write generated data into objects in local RGW zone
○ Upload to central site when connectivity allows
○ Perhaps with some local pre-processing first
● Data consumers
○ Access to global data set via RGW (as a “mesh gateway”)
○ Local caching of a subset of the data
● We’re most interested in object-based edge scenarios
DATA AT THE EDGE
❏ Multi-tier
❏ Mobility
❏ DR
❏ Stretch
✓ Edge
48
KUBERNETES
49
WHY ALL THE KUBERNETES TALK?
● True mobility is a partnership between orchestrator and storage
● Kubernetes is an emerging leader in application orchestration
● Persistent Volumes
○ Basic Ceph drivers in Kubernetes, ceph-csi on the way
○ Rook for automating Ceph cluster deployment and operation, hyperconverged
● Object
○ Trivial provisioning of RGW via Rook
○ Coming soon: on-demand, dynamic provisioning of Object Buckets and Users (via Rook)
○ Consistent developer experience across different object backends (RGW, S3, minio, etc.)
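As a sketch of what developer self-service looks like through the Persistent Volume abstraction, here is a PVC created with the official kubernetes Python client; the storage class name depends entirely on how the Ceph CSI driver or Rook was deployed, and the claim name, namespace, and size are placeholders.

```python
# Sketch: request an RBD-backed volume via a PersistentVolumeClaim.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name='app-data'),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=['ReadWriteOnce'],            # block volumes: exclusive access
        storage_class_name='rook-ceph-block',      # assumption: Rook-provisioned class
        resources=client.V1ResourceRequirements(requests={'storage': '10Gi'}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace='default', body=pvc)
```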
50
BRINGING IT ALL TOGETHER...
51
SUMMARY
● Data services: mobility, introspection, policy
● These are a partnership between storage layer and application orchestrator
● Ceph already has several key multi-cluster capabilities
○ Block mirroring
○ Object federation, replication, cloud sync; cloud tiering, archiving and pub/sub coming
○ Cover elements of Tiering, Disaster Recovery, Mobility, Stretch, Edge scenarios
● ...and introspection (elasticsearch) and policy for object
● Future investment is primarily focused on object
○ RGW as a gateway to a federated network of storage sites
○ Policy driving placement, migration, etc.
● Kubernetes will play an important role
○ both for infrastructure operators and applications developers
52
THANK YOU
https://ceph.io/
sage@redhat.com
@liewegas