Building Tomorrow's Ceph
Sage Weil
Research beginnings

UCSC research grant
● “Petascale object storage”
  ● DOE: LANL, LLNL, Sandia
● Scalability
● Reliability
● Performance
  ● Raw IO bandwidth, metadata ops/sec
● HPC file system workloads
  ● Thousands of clients writing to same file, directory

Distributed metadata management
● Innovative design
  ● Subtree-based partitioning for locality, efficiency
  ● Dynamically adapt to current workload
  ● Embedded inodes
● Prototype simulator in Java (2004)
● First line of Ceph code
  ● Summer internship at LLNL
  ● High security national lab environment
  ● Could write anything, as long as it was OSS

The rest of Ceph
● RADOS – distributed object storage cluster (2005)
● EBOFS – local object storage (2004/2006)
● CRUSH – hashing for the real world (2005) (see the toy placement sketch below)
● Paxos monitors – cluster consensus (2006)
→ emphasis on consistent, reliable storage
→ scale by pushing intelligence to the edges
→ a different but compelling architecture

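To illustrate the "push intelligence to the edges" idea behind CRUSH, here is a toy Python sketch (not the real CRUSH algorithm, and not from the talk): any client can compute where an object lives from a hash of its name plus the cluster's OSD list, so no central metadata lookup is needed. The OSD names and replica count are made up for the example.

    # Toy illustration only: NOT the real CRUSH algorithm. It shows that
    # placement can be computed deterministically by every client from a hash,
    # with no central lookup service.
    import hashlib

    def toy_place(obj_name, osds, replicas=3):
        """Pick `replicas` distinct OSDs for an object, deterministically."""
        chosen = []
        attempt = 0
        while len(chosen) < replicas and attempt < 10 * replicas:
            h = hashlib.sha1(f"{obj_name}/{attempt}".encode()).hexdigest()
            osd = osds[int(h, 16) % len(osds)]
            if osd not in chosen:      # skip duplicates, retry like CRUSH does
                chosen.append(osd)
            attempt += 1
        return chosen

    # Every client computes the same answer without asking a server:
    print(toy_place("rbd_data.1234", [f"osd.{i}" for i in range(12)]))
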
Industry black hole
● Many large storage vendors
  ● Proprietary solutions that don't scale well
● Few open source alternatives (2006)
  ● Limited community and architecture (Lustre)
  ● Very limited scale, or
  ● No enterprise feature sets (snapshots, quotas)
● PhD grads all built interesting systems...
  ● ...and then went to work for Netapp, DDN, EMC, Veritas.
  ● They want you, not your project

A different path
● Change the world with open source
  ● Do what Linux did to Solaris, Irix, Ultrix, etc.
  ● What could go wrong?
● License
  ● GPL, BSD...
  ● LGPL: share changes, okay to link to proprietary code
● Avoid unsavory practices
  ● Dual licensing
  ● Copyright assignment

Incubation

DreamHost!
● Move back to LA, continue hacking
● Hired a few developers
● Pure development
● No deliverables

Ambitious feature set
● Native Linux kernel client (2007-)
● Per-directory snapshots (2008)
● Recursive accounting (2008)
● Object classes (2009)
● librados (2009) (see the sketch after this list)
● radosgw (2009)
● strong authentication (2009)
● RBD: rados block device (2010)

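librados exposes the RADOS object store directly to applications. As a rough illustration (not taken from the talk), here is a minimal sketch using the Python rados bindings; the pool name "data" and the config path are assumptions for the example.

    # Minimal librados sketch: assumes python-rados is installed, a running
    # cluster, and an existing pool called "data" (all assumptions here).
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("data")           # talk to one pool
        ioctx.write_full("greeting", b"hello ceph")  # store an object
        print(ioctx.read("greeting"))                # read it back
        ioctx.close()
    finally:
        cluster.shutdown()
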
The kernel client
● ceph-fuse was limited, not very fast
● Build native Linux kernel implementation
● Began attending Linux file system developer events (LSF)
  ● Early words of encouragement from ex-Lustre devs
  ● Engage Linux fs developer community as peer
● Initial merge attempts rejected by Linus
  ● Not sufficient evidence of user demand
  ● A few fans and would-be users chimed in...
● Eventually merged for v2.6.34 (early 2010)

Part of a larger ecosystem
● Ceph need not solve all problems as monolithic stack
● Replaced ebofs object file system with btrfs
  ● Avoid reinventing the wheel
  ● Robust, well-supported, well optimized
  ● Kernel-level cache management
  ● Same design goals
  ● Copy-on-write, checksumming, other goodness
● Contributed some early functionality
  ● Cloning files
  ● Async snapshots

Budding community
● #ceph on irc.oftc.net, ceph-devel@vger.kernel.org
● Many interested users
● A few developers
● Many fans
● Too unstable for any real deployments
● Still mostly focused on right architecture and technical solutions

Road to product
● DreamHost decides to build an S3-compatible object storage service with Ceph
● Stability
  ● Focus on core RADOS, RBD, radosgw
● Paying back some technical debt
  ● Build testing automation
  ● Code review!
  ● Expand engineering team

The reality
● Growing incoming commercial interest
  ● Early attempts from organizations large and small
  ● Difficult to engage with a web hosting company
  ● No means to support commercial deployments
● Project needed a company to back it
  ● Build and test a product
  ● Fund the engineering effort
  ● Support users
● Bryan built a framework to spin out of DreamHost

Launch

Do it right
● How do we build a strong open source company?
● How do we build a strong open source community?
● Models?
  ● RedHat, Cloudera, MySQL, Canonical, …
● Initial funding from DreamHost, Mark Shuttleworth

Goals
● A stable Ceph release for production deployment
  ● DreamObjects
● Lay foundation for widespread adoption
  ● Platform support (Ubuntu, Redhat, SuSE)
  ● Documentation
  ● Build and test infrastructure
● Build a sales and support organization
● Expand engineering organization

Branding
● Early decision to engage professional agency
  ● MetaDesign
● Terms like
  ● “Brand core”
  ● “Design system”
● Project vs Company
  ● Shared / Separate / Shared core
  ● Inktank != Ceph
● Aspirational messaging: The Future of Storage

Slick graphics
● broken powerpoint template

Today: adoption

Traction
● Too many production deployments to count
  ● We don't know about most of them!
● Too many customers (for me) to count
● Growing partner list
  ● Lots of inbound
● Lots of press and buzz

Quality
● Increased adoption means increased demands on robust testing
  ● Across multiple platforms
  ● Include platforms we don't like
● Upgrades
  ● Rolling upgrades
  ● Inter-version compatibility
● Expanding user community + less noise about bugs = a good sign

Developer community
● Significant external contributors
● First-class feature contributions from contributors
● Non-Inktank participants in daily Inktank stand-ups
● External access to build/test lab infrastructure
● Common toolset
  ● Email (kernel.org)
  ● Github
  ● IRC (oftc.net)
  ● Linux distros

CDS: Ceph Developer Summit
● Community process for building project roadmap
● 100% online
  ● Google hangouts
  ● Wikis
  ● Etherpad
● First was this Spring, second is next week
● Great feedback, growing participation
● Indoctrinating our own developers to an open development model

The Future

Governance
How do we strengthen the project community?
● 2014 is the year
● Might formally acknowledge my role as BDL
● Recognized project leads
  ● (RBD, RGW, RADOS, CephFS)
● Formalize processes around CDS, community roadmap
● External foundation?

Technical roadmap
● How do we reach new use-cases and users?
● How do we better satisfy existing users?
● How do we ensure Ceph can succeed in enough markets for Inktank to thrive?
● Enough breadth to expand and grow the community
● Enough focus to do well

Tiering
● Client side caches are great, but only buy so much.
● Can we separate hot and cold data onto different storage devices?
  ● Cache pools: promote hot objects from an existing pool into a fast (e.g., FusionIO) pool
  ● Cold pools: demote cold data to a slow, archival pool (e.g., erasure coding)
  ● How do you identify what is hot and cold? (toy sketch below)
  ● Common in enterprise solutions; not found in open source scale-out systems
→ key topic at CDS next week
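
On the hot/cold question, a purely hypothetical sketch (not Ceph code, and not from the talk): the simplest classifier is recency of last access, demoting anything untouched for longer than some threshold. The threshold and object names below are assumptions for the example.

    # Toy illustration only (NOT Ceph code): classify objects as hot or cold by
    # how recently they were accessed; cold objects become demotion candidates.
    import time

    ACCESS_LOG = {}                      # object name -> last access timestamp
    COLD_AFTER_SECONDS = 24 * 3600       # assumed threshold for the example

    def record_access(obj):
        ACCESS_LOG[obj] = time.time()

    def classify(obj):
        last = ACCESS_LOG.get(obj, 0)
        return "hot" if time.time() - last < COLD_AFTER_SECONDS else "cold"

    record_access("img_0001")
    print(classify("img_0001"))   # "hot"
    print(classify("img_9999"))   # "cold" -- never accessed
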
Erasure coding
● Replication for redundancy is flexible and fast
● For larger clusters, it can be expensive

                    Storage overhead   Repair traffic   MTTDL (days)
  3x replication    3x                 1x               2.3 E10
  RS (10, 4)        1.4x               10x              3.3 E13
  LRC (10, 6, 5)    1.6x               5x               1.2 E15

● Erasure coded data is hard to modify, but ideal for cold or read-only objects
  ● Cold storage tiering
  ● Will be used directly by radosgw
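
The storage overhead column follows directly from the code parameters. As a quick sanity check (my arithmetic, not from the slide), reading RS(k, m) as k data chunks plus m coding chunks, and LRC (10, 6, 5) as 10 data chunks plus 6 additional coding chunks:

\[
\text{overhead} = \frac{k+m}{k}:\qquad
\mathrm{RS}(10,4) \Rightarrow \tfrac{14}{10} = 1.4\times,\qquad
\mathrm{LRC}(10,6,5) \Rightarrow \tfrac{16}{10} = 1.6\times,
\]

while 3x replication simply stores three full copies of every object.
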
Multi-datacenter, geo-replication
● Ceph was originally designed for single DC clusters
  ● Synchronous replication
  ● Strong consistency
● Growing demand
  ● Enterprise: disaster recovery
  ● ISPs: replicating data across sites for locality
● Two strategies:
  ● use-case specific: radosgw, RBD
  ● low-level capability in RADOS

RGW: Multi-site and async replication
● Multi-site, multi-cluster
  ● Zones: radosgw sub-cluster(s) within a region
  ● Regions: east coast, west coast, etc.
  ● Can federate across same or multiple Ceph clusters
● Sync user and bucket metadata across regions
  ● Global bucket/user namespace, like S3
● Synchronize objects across zones
  ● Within the same region
  ● Across regions
  ● Admin control over which zones are master/slave

RBD: simple DR via snapshots
● Simple backup capability (see the sketch below)
  ● Based on block device snapshots
  ● Efficiently mirror changes between consecutive snapshots across clusters
● Now supported/orchestrated by OpenStack
● Good for coarse synchronization (e.g., hours)
  ● Not real-time
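
As a rough sketch of the incremental-mirroring idea (assumptions: the rbd CLI's export-diff/import-diff subcommands are available on both sides, and the pool, image, snapshot names, and remote config path are made up for the example), the delta between two consecutive snapshots can be streamed to a second cluster:

    # Hedged sketch, not a definitive DR implementation: ship only the diff
    # between two consecutive RBD snapshots to a remote cluster.
    import subprocess

    IMAGE = "rbd/vm-disk-1"                      # hypothetical pool/image
    PREV, CURR = "backup-0100", "backup-0200"    # hypothetical snapshots

    def mirror_incremental():
        export = subprocess.Popen(
            ["rbd", "export-diff", "--from-snap", PREV, f"{IMAGE}@{CURR}", "-"],
            stdout=subprocess.PIPE)
        # Apply the delta on the DR cluster, selected via a separate ceph.conf.
        subprocess.run(
            ["rbd", "--conf", "/etc/ceph/remote.conf", "import-diff", "-", IMAGE],
            stdin=export.stdout, check=True)
        export.stdout.close()
        export.wait()

    mirror_incremental()
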
Async replication in RADOS
● One implementation to capture multiple use-cases
  ● RBD, CephFS, RGW, … RADOS
● A harder problem
  ● Scalable: 1000s OSDs → 1000s of OSDs
  ● Point-in-time consistency
● Three challenges (toy sketch of the third below)
  ● Infer a partial ordering of events in the cluster
  ● Maintain a stable timeline to stream from
    – either checkpoints or event stream
  ● Coordinated roll-forward at destination
    – do not apply any update until we know we have everything that happened before it
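
A toy illustration of that roll-forward rule (my own sketch, not Ceph code): buffer incoming updates and apply one only once everything that happened before it has arrived. For simplicity a single sequence number stands in for the partial ordering the slide actually calls for.

    # Toy illustration of coordinated roll-forward at the destination.
    applied_upto = 0
    pending = {}            # seq -> update payload

    def apply(update):
        print("applying", update)

    def receive(seq, update):
        global applied_upto
        pending[seq] = update
        # Roll forward as far as the contiguous prefix of updates allows.
        while applied_upto + 1 in pending:
            applied_upto += 1
            apply(pending.pop(applied_upto))

    receive(2, "write B")   # buffered: update 1 has not arrived yet
    receive(1, "write A")   # applies A, then rolls forward to B
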
CephFS
→ This is where it all started – let's get there
● Today
  ● QA coverage and bug squashing continues
  ● NFS and CIFS now largely complete and robust
● Need
  ● Directory fragmentation
  ● Snapshots
  ● Multi-MDS
  ● QA investment
● Amazing community effort

The larger ecosystem
Big data
When will we stop talking about MapReduce?
Why is “big data” built on such a lame storage model?
● Move computation to the data
● Evangelize RADOS classes
● librados case studies and proof points
● Build a general purpose compute and storage platform

The enterprise
How do we pay for all our toys?
● Support legacy and transitional interfaces
  ● iSCSI, NFS, pNFS, CIFS
  ● VMware, Hyper-V
● Identify the beachhead use-cases
  ● Only takes one use-case to get in the door
  ● Earn others later
● Single platform – shared storage resource
● Bottom-up: earn respect of engineers and admins
● Top-down: strong brand and compelling product

Why we can beat the old guard
● It is hard to compete with free and open source software
  ● Unbeatable value proposition
  ● Ultimately a more efficient development model
● It is hard to manufacture community
● Strong foundational architecture
● Native protocols, Linux kernel support
  ● Unencumbered by legacy protocols like NFS
  ● Move beyond traditional client/server model
● Ongoing paradigm shift
  ● Software defined infrastructure, data center

Thank you, and Welcome!
