Talk from the 05 June 2014 NYLUG meeting at Bloomberg NYC: a short history of where Ceph came from, an architectural overview, and the current state of the community.
15. RADOS COMPONENTS
OSDs:
10s to 10000s in a cluster
One per disk (or one per SSD, RAID group…)
Serve stored objects to clients
Intelligently peer for replication & recovery
Monitors:
Maintain cluster membership and state
Provide consensus for distributed decision-making
Small, odd number
These do not serve stored objects to clients
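The intelligent peering above works because placement is computed, not looked up: a client hashes an object name to a placement group, and CRUSH maps that PG to a set of OSDs. A minimal self-contained sketch of the idea (this is not the real CRUSH algorithm; the pool size, `pg_num`, and OSD list are made-up illustration values):

```python
import hashlib

def place_object(name: str, pg_num: int, osds: list, replicas: int = 3):
    """Toy stand-in for Ceph placement: object name -> PG -> OSD set.

    Real Ceph uses rjenkins hashing and the CRUSH hierarchy; this sketch
    only shows that any client can compute the same mapping on its own.
    """
    # Object name -> placement group: stable hash, then modulo pg_num.
    h = int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "little")
    pg = h % pg_num
    # PG -> ordered OSD set (real Ceph walks the CRUSH map here instead).
    start = pg % len(osds)
    return pg, [osds[(start + i) % len(osds)] for i in range(replicas)]

pg, acting = place_object("my-object", pg_num=128, osds=list(range(10)))
```

Because every client and OSD computes the same answer, there is no central lookup table to bottleneck on, which is what lets OSDs peer directly for replication and recovery.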
24. LIBRADOS: RADOS ACCESS FOR APPS
LIBRADOS:
Direct access to RADOS for applications
C, C++, Python, PHP, Java, Erlang
Direct access to storage nodes
No HTTP overhead
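The direct path looks like this with the Python binding (python-rados). The API names are real, but running it requires a reachable Ceph cluster and a `ceph.conf`; the pool and object names here are placeholders:

```python
# Sketch of direct RADOS access via the python-rados binding.
try:
    import rados
except ImportError:   # binding not installed: keep the sketch importable
    rados = None

def put_and_get(pool: str, key: str, data: bytes,
                conffile: str = "/etc/ceph/ceph.conf") -> bytes:
    """Write an object straight to RADOS and read it back -- no HTTP hop."""
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()                      # contacts the monitors, then OSDs
    try:
        ioctx = cluster.open_ioctx(pool)   # I/O context for one pool
        try:
            ioctx.write_full(key, data)    # replace the object's contents
            return ioctx.read(key, len(data))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
```

The client library talks to the storage nodes itself, which is exactly the "no HTTP overhead" point on the slide.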
27. RADOSGW MAKES RADOS WEBBY
RADOSGW:
REST-based object storage proxy
Uses RADOS to store objects
API supports buckets, accounts
Usage accounting for billing
Compatible with S3 and Swift applications
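Because RADOSGW speaks the S3 API, an ordinary S3 client works once it is pointed at the gateway instead of AWS. A sketch using boto3 (the endpoint URL and credentials below are placeholders, not real values):

```python
# Talking to RADOSGW through its S3-compatible API with boto3.
try:
    import boto3
except ImportError:   # boto3 not installed: keep the sketch importable
    boto3 = None

def rgw_s3_client(endpoint: str, access_key: str, secret_key: str):
    """Build a boto3 S3 client aimed at a RADOS Gateway instead of AWS."""
    return boto3.client(
        "s3",
        endpoint_url=endpoint,            # e.g. "http://rgw.example.com:7480"
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )

# Usage (requires a running gateway):
#   s3 = rgw_s3_client("http://rgw.example.com:7480", "KEY", "SECRET")
#   s3.create_bucket(Bucket="demo")
#   s3.put_object(Bucket="demo", Key="hello", Body=b"world")
```

Existing S3 (and, via its Swift endpoint, Swift) applications can therefore move to Ceph without code changes beyond the endpoint and credentials.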
35. SCALABLE METADATA SERVERS
METADATA SERVER
Manages metadata for a POSIX-compliant shared filesystem
Directory hierarchy
File metadata (owner, timestamps, mode, etc.)
Stores metadata in RADOS
Does not serve file data to clients
Only required for shared filesystem
38. Ceph Developer Summit
• Recent: “Giant”
• March 04-05
• wiki.ceph.com
• Virtual (IRC, hangout, pad, blueprint, YouTube)
• 2 days (soon to be 3?)
• Discuss all work
• Recruit for your projects!
41. Google Summer of Code 2014
Accepted as a mentoring organization
8 mentors from Inktank & community
http://ceph.com/gsoc2014/
2 student proposals accepted
Hope to turn this into academic outreach
42. Ceph Days
• inktank.com/cephdays
• Recently: London, Frankfurt, NYC, Santa Clara
• Aggressive program
• Upcoming: Sunnyvale, Austin, Boston, Kuala Lumpur