Facebook Messages & HBase
Nicolas Spiegelberg
Software Engineer, Facebook




April 8, 2011
Talk Outline

▪   About Facebook Messages
▪   Intro to HBase
▪   Why HBase
▪   HBase Contributions
▪   MySQL -> HBase Migration
▪   Future Plans
▪   Q&A
Facebook Messages
The New Facebook Messages

 Messages   Chats    Emails   SMS
Monthly data volume prior to launch


           Messages:       15B x 1,024 bytes = 14TB


           Chat messages:  120B x 100 bytes  = 11TB
Messaging Data
▪   Small/medium sized data  -->  HBase
    ▪   Message metadata & indices
    ▪   Search index
    ▪   Small message bodies


▪   Attachments and large messages  -->  Haystack
    ▪   Used for our existing photo/video store
Open Source Stack
▪   Memcached  -->  App Server Cache
▪   ZooKeeper  -->  Small Data Coordination Service
▪   HBase      -->  Database Storage Engine
▪   HDFS       -->  Distributed FileSystem
▪   Hadoop     -->  Asynchronous Map-Reduce Jobs
Our architecture
▪   Clients (Front End, MTA, etc.) ask the User Directory Service:
    "What’s the cell for this user?"
▪   Users are partitioned across multiple cells (Cell 1, Cell 2, Cell 3, ...)
▪   Each cell runs Application Servers on top of HBase/HDFS/ZK,
    which hold messages, metadata, and the search index
▪   Attachments are stored in Haystack
About HBase
HBase in a nutshell

• distributed, large-scale data store

• efficient at random reads/writes

• initially modeled after Google’s BigTable

• open source project (Apache)
When to use HBase?

▪ storing large amounts of data

▪ need high write throughput

▪ need efficient random access within large data sets

▪ need to scale gracefully with data

▪ for structured and semi-structured data

▪ don’t need full RDBMS capabilities (cross-table transactions, joins, etc.)
HBase Data Model
• An HBase table is:
 •   a sparse, three-dimensional array of cells, indexed by:
      RowKey, ColumnKey, Timestamp/Version
 •   sharded into regions along an ordered RowKey space

• Within each region:
 •   Data is grouped into column families
 ▪   Sort order within each column family:

      Row Key (asc), Column Key (asc), Timestamp (desc)
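To make the cell model concrete, here is a minimal sketch against the HBase Java client of this era; the table name "demo", family "cf", and values are illustrative, not from the talk.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class CellModelSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "demo"); // hypothetical table

        // A cell is addressed by (RowKey, ColumnKey, Timestamp/Version)
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("col1"),
                42L /* explicit version */, Bytes.toBytes("value"));
        table.put(put);

        // A read can request several versions of one (row, column);
        // versions come back newest first (timestamp descending)
        Get get = new Get(Bytes.toBytes("row1"));
        get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col1"));
        get.setMaxVersions(3);
        Result result = table.get(get);
        System.out.println(result);
        table.close();
    }
}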
Example: Inbox Search
• Schema
 •   Key: RowKey: userid, Column: word, Version: MessageID
 •   Value: Auxiliary info (like offset of word in message)

• Data is stored sorted by <userid, word, messageID>:

      User1:hi:17    -> offset1
      User1:hi:16    -> offset2
      User1:hello:16 -> offset3
      User1:hello:2  -> offset4
      ...
      User2:...
      ...

• Can efficiently handle queries like:
  - Get top N messageIDs for a specific user & word
  - Typeahead query: for a given user, get words that match a prefix
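A hedged sketch of both queries against this schema, using the standard HBase client; the table name "inbox_search" and family "idx" are hypothetical stand-ins. Because versions sort descending, "top N messageIDs" is just a bounded version read, with no client-side sorting.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class InboxSearchQueries {
    static final byte[] IDX = Bytes.toBytes("idx"); // hypothetical family

    // Top N messageIDs for a user & word: versions are MessageIDs and
    // sort descending, so the first N versions are the most recent
    static Result topN(HTable t, String user, String word, int n) throws Exception {
        Get get = new Get(Bytes.toBytes(user));
        get.addColumn(IDX, Bytes.toBytes(word));
        get.setMaxVersions(n);
        return t.get(get);
    }

    // Typeahead: words are sorted columns within the user's row, so a
    // column-prefix filter returns every word matching the typed prefix
    static Result typeahead(HTable t, String user, String prefix) throws Exception {
        Get get = new Get(Bytes.toBytes(user));
        get.addFamily(IDX);
        get.setFilter(new ColumnPrefixFilter(Bytes.toBytes(prefix)));
        return t.get(get);
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "inbox_search");
        System.out.println(topN(table, "User1", "hi", 10));
        System.out.println(typeahead(table, "User1", "he"));
        table.close();
    }
}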
HBase System Overview
▪   Database Layer (HBASE):
    ▪   Master, plus a Backup Master
    ▪   Region Servers (Region Server x N, ...)
▪   Coordination Service: ZooKeeper Quorum (ZK Peer x N, ...)
▪   Storage Layer (HDFS):
    ▪   Namenode, plus a Secondary Namenode
    ▪   Datanodes (Datanode x N, ...)
HBase Overview
▪   An HBASE Region Server hosts many Regions (Region #1, Region #2, ...)
▪   Each Region holds one or more ColumnFamilies
    (ColumnFamily #1, ColumnFamily #2, ...)
▪   Each ColumnFamily has a Memstore (in-memory data structure),
    flushed to HFiles (in HDFS)
▪   Writes are also recorded in a Write Ahead Log (in HDFS)
HBase Overview
•       Very good at random reads/writes

•       Write path
    •    Sequential write/sync to commit log
    •    Update memstore

•       Read path
    •    Lookup memstore & persistent HFiles
    •    HFile data is sorted and has a block index for efficient retrieval

•       Background chores
    •    Flushes (memstore -> HFile)
    •    Compactions (group of HFiles merged into one)
Why HBase?
Performance is great, but what else…
Horizontal scalability
▪   HBase & HDFS are elastic by design
▪   Multiple table shards (regions) per physical server
▪   On node additions
    ▪   Load balancer automatically reassigns shards from overloaded
        nodes to new nodes
    ▪   Because the filesystem underneath is itself distributed, data for
        reassigned regions is instantly servable from the new nodes.
▪   Regions can be dynamically split into smaller regions.
    ▪   Pre-sharding is not necessary
    ▪   Splits are near instantaneous!
Automatic Failover
▪   Node failures automatically detected by HBase Master
▪   Regions on failed node are distributed evenly among surviving nodes.
        ▪   Multiple regions/server model avoids need for substantial
            overprovisioning
▪   HBase Master failover
    ▪   1 active, rest standby
    ▪   When active master fails, a standby automatically takes over
HBase uses HDFS
We get the benefits of HDFS as a storage system for free
▪   Fault tolerance (block level replication for redundancy)
▪   Scalability
▪   End-to-end checksums to detect and recover from corruptions
▪   Map Reduce for large scale data processing
▪   HDFS already battle tested inside Facebook
    ▪   running petabyte scale clusters
    ▪   a lot of in-house development and operational experience
Simpler Consistency Model
▪   HBase’s strong consistency model
     ▪   simpler for a wide variety of applications to deal with
     ▪   client gets same answer no matter which replica data is read from


▪   Eventual consistency: tricky for applications fronted by a cache
     ▪   replicas may heal eventually during failures
     ▪   but stale data could remain stuck in cache
Typical Cluster Layout
 ▪   Multiple clusters/cells for messaging
     ▪   20 servers/rack; 5 or more racks per cluster
 ▪   Controllers (master/ZooKeeper) spread across racks, one per rack:
     ▪   Rack #1: ZooKeeper Peer + HDFS Namenode
     ▪   Rack #2: ZooKeeper Peer + Backup Namenode
     ▪   Rack #3: ZooKeeper Peer + Job Tracker
     ▪   Rack #4: ZooKeeper Peer + HBase Master
     ▪   Rack #5: ZooKeeper Peer + Backup Master
 ▪   The remaining 19 servers in each rack each run a Region Server,
     a Data Node, and a Task Tracker
HBase Enhancements
Goal: Zero Data Loss
Goal of Zero Data Loss/Correctness
▪   sync support added to hadoop-20 branch
    ▪   for keeping transaction log (WAL) in HDFS
    ▪   to guarantee durability of transactions
▪   Row-level ACID compliance
▪   Enhanced HDFS’s Block Placement Policy:
    ▪   Original: rack aware, but minimally constrained
    ▪   Now: Placement of replicas constrained to configurable node groups
    ▪   Result: Data loss probability reduced by orders of magnitude
Availability/Stability improvements
▪   HBase master rewrite: region assignments using ZK
▪   Rolling Restarts – doing software upgrades without downtime
▪   Interrupt Compactions – prioritize availability over minor perf gains
▪   Timeouts on client-server RPCs
▪   Staggered major compaction to avoid compaction storms
Performance Improvements
▪   Compactions
    ▪   critical for read performance
    ▪   Improved compaction algorithm
    ▪   delete/TTL/overwrite processing in minor compactions
▪   Read optimizations:
    ▪   Seek optimizations for rows with a large number of cells
    ▪   Bloom filters to minimize HFile lookups (sketch below)
    ▪   Timerange hints on HFiles (great for temporal data)
    ▪   Improved handling of compressed HFiles
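Bloom filters are a per-column-family setting; here is a minimal sketch of enabling a row-level bloom at table-creation time, assuming an HBase release that includes the bloom filter work. Table and family names are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.regionserver.StoreFile;

public class BloomFilterSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor table = new HTableDescriptor("messages"); // hypothetical
        HColumnDescriptor cf = new HColumnDescriptor("body");
        // ROW blooms let a read skip HFiles that cannot contain the
        // requested row, cutting disk seeks on point lookups
        cf.setBloomFilterType(StoreFile.BloomType.ROW);
        table.addFamily(cf);
        admin.createTable(table);
        admin.close();
    }
}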
Operational Experiences
▪   Darklaunch:
    ▪   shadow traffic on test clusters for continuous, at scale testing
    ▪   experiment/tweak knobs
    ▪   simulate failures, test rolling upgrades
▪   Constant (pre-sharding) region count & controlled rolling splits
▪   Administrative tools and monitoring
    ▪   Alerts (HBCK, memory alerts, perf alerts, health alerts)
    ▪   auto detecting/decommissioning misbehaving machines
    ▪   Dashboards
▪   Application level backup/recovery pipeline
Working within the Apache community
▪   Growing with the community
    ▪   Started with a stable, healthy project
    ▪   In-house expertise in both HDFS and HBase
    ▪   Increasing community involvement
▪   Undertook massive feature improvements with community help
    ▪   HDFS 0.20-append branch
    ▪   HBase Master rewrite
▪   Continually interacting with the community to identify and fix issues
    ▪   e.g., large responses (2GB RPC)
Data migration
Another place we used HBase heavily…
Move messaging data from MySQL to HBase
Move messaging data from MySQL to HBase
▪   In MySQL, inbox data was kept normalized
    ▪   user’s messages are stored across many different machines
▪   Migrating a user is basically one big join across tables spread over
    many different machines
▪   Multiple terabytes of data (for over 500M users)
▪   Cannot pound 1000s of production UDBs to migrate users
How we migrated
▪   Periodically, get a full export of all the users’ inbox data in MySQL
▪   And, use the bulk loader to import the above into a migration HBase
    cluster (sketch below)
▪   To migrate users:
    ▪   Since users may continue to receive messages during migration:
        ▪   double-write (to old and new system) during the migration period
    ▪   Get a list of all recent messages (since last MySQL export) for the
        user
        ▪   Load new messages into the migration HBase cluster
        ▪   Perform the join operations to generate the new data
        ▪   Export it and upload into the final cluster
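A hedged sketch of that bulk-import step, using HBase's stock MapReduce bulk loader (HFileOutputFormat + LoadIncrementalHFiles); the table name, paths, and the omitted mapper are placeholders, not the actual migration pipeline.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "migration_inbox"); // hypothetical

        // 1. A MapReduce job writes HFiles partitioned to match the
        //    table's current region boundaries
        Job job = new Job(conf, "mysql-export-to-hfiles");
        // ... input path and a mapper emitting (ImmutableBytesWritable,
        //     Put) pairs omitted from this sketch ...
        FileOutputFormat.setOutputPath(job, new Path("/tmp/hfiles-out"));
        HFileOutputFormat.configureIncrementalLoad(job, table);
        job.waitForCompletion(true);

        // 2. Move the finished HFiles into the live table's regions
        new LoadIncrementalHFiles(conf).doBulkLoad(
            new Path("/tmp/hfiles-out"), table);
    }
}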
Future Plans
HBase Expands
Facebook Insights Goes Real-Time
▪   Recently launched real-time analytics for social plugins on top of
    HBase
▪   Publishers get real-time distribution/engagement metrics:
        ▪   # of impressions, likes
        ▪   analytics by
            ▪   Domain, URL, demographics
            ▪   Over various time periods (the last hour, day, all-time)
▪   Makes use of HBase capabilities like:
    ▪   Efficient counters (read-modify-write increment operations)
    ▪   TTL for purging old data
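Both capabilities are ordinary client/schema features; a minimal sketch, with hypothetical table, family, and metric names.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class InsightsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // TTL is a per-column-family schema setting: cells older than the
        // TTL are purged as compactions rewrite the data
        HTableDescriptor desc = new HTableDescriptor("insights"); // hypothetical
        HColumnDescriptor hourly = new HColumnDescriptor("hourly");
        hourly.setTimeToLive(7 * 24 * 3600); // keep hourly stats one week
        desc.addFamily(hourly);
        new HBaseAdmin(conf).createTable(desc);

        // Counters: a single atomic read-modify-write on the server,
        // no client-side get/put race
        HTable table = new HTable(conf, "insights");
        long likes = table.incrementColumnValue(
            Bytes.toBytes("example.com"),  // row: domain
            Bytes.toBytes("hourly"),       // column family
            Bytes.toBytes("likes"),        // column: metric
            1L);                           // increment amount
        System.out.println("likes so far: " + likes);
        table.close();
    }
}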
Future Work
It is still early days…!
▪   Namenode HA (AvatarNode)
▪   Fast hot-backups (Export/Import)
▪   Online schema & config changes
▪   Running HBase as a service (multi-tenancy)
▪   Features (like secondary indices, batching hybrid mutations)
▪   Cross-DC replication
▪   Lot more performance/availability improvements
Thanks! Questions?
facebook.com/engineering
QCon Hangzhou · October 20–22, 2011
 www.qconhangzhou.com (launching in June)




QCon Beijing official website and materials download
     www.qconbeijing.com
