Facebook Messages & HBase
                Nicolas Spiegelberg
                Software Engineer, Facebook




April 8, 2011
Talk Outline

▪   About Facebook Messages
▪   Intro to HBase
▪   Why HBase
▪   HBase Contributions
▪   MySQL -> HBase Migration
▪   Future Plans
▪   Q&A
Facebook Messages
The New Facebook Messages

 Messages   Chats    Emails   SMS
Monthly data volume prior to launch

▪   Messages: 15B x 1,024 bytes = 14TB
▪   Chat messages: 120B x 100 bytes = 11TB
Messaging Data
▪   Small/medium sized data --> HBase
    ▪   Message metadata & indices
    ▪   Search index
    ▪   Small message bodies
▪   Attachments and large messages --> Haystack
    ▪   Used for our existing photo/video store
Open Source Stack
▪   Memcached  -->  App Server Cache
▪   ZooKeeper  -->  Small Data Coordination Service
▪   HBase      -->  Database Storage Engine
▪   HDFS       -->  Distributed FileSystem
▪   Hadoop     -->  Asynchronous Map-Reduce Jobs
Our architecture
▪   Clients (Front End, MTA, etc.) ask the User Directory Service: "What's the cell for this user?"
▪   Users are partitioned across multiple cells (Cell 1, Cell 2, Cell 3, ...)
▪   Each cell: Application Servers on top of an HBase/HDFS/ZooKeeper cluster holding messages, metadata, and the search index
▪   Attachments are stored in Haystack
About HBase
HBase in a nutshell

• distributed, large-scale data store

• efficient at random reads/writes

• initially modeled after Google’s BigTable

• open source project (Apache)
When to use HBase?

▪ storing large amounts of data

▪ need high write throughput

▪ need efficient random access within large data sets

▪ need to scale gracefully with data

▪ for structured and semi-structured data

▪ don’t need full RDBMS capabilities (cross-table transactions, joins, etc.)
HBase Data Model
• An HBase table is:
 •   a sparse, three-dimensional array of cells, indexed by:
      RowKey, ColumnKey, Timestamp/Version
 •   sharded into regions along an ordered RowKey space

• Within each region:
 •   Data is grouped into column families
 ▪   Sort order within each column family:

      Row Key (asc), Column Key (asc), Timestamp (desc)
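
To make the addressing model concrete, here is a minimal sketch of writing a single cell with the HBase Java client. It uses the modern client API (the 2011-era API differed slightly), and the "messages" table and "meta" family names are purely illustrative:

```java
// Minimal sketch: every HBase cell is addressed by
// (RowKey, ColumnFamily:Qualifier, Timestamp/Version).
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DataModelSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("messages"))) { // illustrative table
            Put put = new Put(Bytes.toBytes("user123"));   // RowKey (determines the region)
            put.addColumn(Bytes.toBytes("meta"),           // column family (illustrative)
                          Bytes.toBytes("subject"),        // column qualifier
                          1302220800000L,                  // explicit timestamp/version
                          Bytes.toBytes("Hello"));         // cell value
            table.put(put);
        }
    }
}
```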
Example: Inbox Search
• Schema
 •   Key: RowKey: userid, Column: word, Version: MessageID
 •   Value: Auxiliary info (like offset of word in message)

• Data is stored sorted by <userid, word, messageID>:

      User1:hi:17 -> offset1
      User1:hi:16 -> offset2
      User1:hello:16 -> offset3
      User1:hello:2 -> offset4
      ...
      User2:...
      ...

• Can efficiently handle queries like:
      - Get top N messageIDs for a specific user & word
      - Typeahead query: for a given user, get words that match a prefix
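
Both queries map naturally onto this layout. Below is a hedged sketch using the modern HBase client; the column family name "s" is hypothetical, and empty-result handling is omitted for brevity:

```java
// Sketch of the inbox-search queries: row = userid, column = word,
// version = messageID (so the newest versions are the latest messageIDs).
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class InboxSearchSketch {
    static final byte[] CF = Bytes.toBytes("s"); // hypothetical column family

    // Top N messageIDs for a user & word: read the N newest versions of one cell.
    static void topN(Table t, String userid, String word, int n) throws Exception {
        Get get = new Get(Bytes.toBytes(userid));
        get.addColumn(CF, Bytes.toBytes(word));
        get.setMaxVersions(n); // versions come back newest-first
        for (Cell c : t.get(get).listCells())
            System.out.println("messageID=" + c.getTimestamp());
    }

    // Typeahead: for a given user, return stored words matching a prefix.
    static void typeahead(Table t, String userid, String prefix) throws Exception {
        Get get = new Get(Bytes.toBytes(userid));
        get.setFilter(new ColumnPrefixFilter(Bytes.toBytes(prefix)));
        for (Cell c : t.get(get).listCells())
            System.out.println(Bytes.toString(CellUtil.cloneQualifier(c)));
    }
}
```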
HBase System Overview
▪   Database layer (HBASE): Master + Backup Master, and many Region Servers
▪   Storage layer (HDFS): Namenode + Secondary Namenode, and many Datanodes
▪   Coordination service: ZooKeeper quorum of ZK peers
HBase Overview
▪   A Region Server hosts many regions (Region #1, Region #2, ...)
▪   Each region holds one or more column families (ColumnFamily #1, #2, ...)
▪   Each column family has a Memstore (in-memory data structure) that is flushed to HFiles (in HDFS)
▪   Every write is also recorded in a Write Ahead Log (in HDFS)
HBase Overview
•   Very good at random reads/writes

•   Write path
    •   Sequential write/sync to commit log
    •   Update memstore

•   Read path
    •   Lookup memstore & persistent HFiles
    •   HFile data is sorted and has a block index for efficient retrieval

•   Background chores
    •   Flushes (memstore -> HFile)
    •   Compactions (group of HFiles merged into one)
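
The ordering matters: the commit log is synced before the memstore is updated, so an acknowledged write survives a server crash. A conceptual sketch of that ordering (not HBase's actual code; both helper interfaces are hypothetical):

```java
// Conceptual sketch of the write/read paths described above.
import java.io.IOException;
import java.util.concurrent.ConcurrentSkipListMap;

class RegionSketch {
    interface CommitLog { void appendAndSync(String row, byte[] val) throws IOException; }
    interface HFileStore { byte[] lookup(String row); } // sorted, block-indexed files

    private final CommitLog wal;     // write-ahead log in HDFS
    private final HFileStore hfiles; // flushed, immutable HFiles
    private final ConcurrentSkipListMap<String, byte[]> memstore =
            new ConcurrentSkipListMap<>(); // sorted in-memory buffer

    RegionSketch(CommitLog wal, HFileStore hfiles) { this.wal = wal; this.hfiles = hfiles; }

    void put(String row, byte[] val) throws IOException {
        wal.appendAndSync(row, val); // 1. sequential, durable write to the commit log
        memstore.put(row, val);      // 2. apply to the memstore (now visible to reads)
    }

    byte[] get(String row) {
        byte[] v = memstore.get(row);                // newest data lives in the memstore
        return (v != null) ? v : hfiles.lookup(row); // otherwise consult sorted HFiles
    }
}
```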
Why HBase?
Performance is great, but what else…
Horizontal scalability
▪   HBase & HDFS are elastic by design
▪   Multiple table shards (regions) per physical server
▪   On node additions
    ▪   Load balancer automatically reassigns shards from overloaded
        nodes to new nodes
    ▪   Because the filesystem underneath is itself distributed, data for
        reassigned regions is instantly servable from the new nodes.
▪   Regions can be dynamically split into smaller regions.
    ▪   Pre-sharding is not necessary
    ▪   Splits are near instantaneous!
Automatic Failover
▪   Node failures automatically detected by HBase Master
▪   Regions on failed node are distributed evenly among surviving nodes.
        ▪   Multiple regions/server model avoids need for substantial
            overprovisioning
▪   HBase Master failover
    ▪   1 active, rest standby
    ▪   When active master fails, a standby automatically takes over
HBase uses HDFS
We get the benefits of HDFS as a storage system for free
▪   Fault tolerance (block level replication for redundancy)
▪   Scalability
▪   End-to-end checksums to detect and recover from corruptions
▪   Map Reduce for large scale data processing
▪   HDFS already battle tested inside Facebook
    ▪   running petabyte scale clusters
    ▪   lots of in-house development and operational experience
Simpler Consistency Model
▪   HBase’s strong consistency model
     ▪   simpler for a wide variety of applications to deal with
     ▪   client gets same answer no matter which replica data is read from


▪   Eventual consistency: tricky for applications fronted by a cache
     ▪   replicas may heal eventually during failures
     ▪   but stale data could remain stuck in cache
Typical Cluster Layout
 ▪   Multiple clusters/cells for messaging
     ▪   20 servers/rack; 5 or more racks per cluster
 ▪   Controllers (master/ZooKeeper) spread across racks:
     ▪   Rack #1: ZooKeeper Peer + HDFS Namenode
     ▪   Rack #2: ZooKeeper Peer + Backup Namenode
     ▪   Rack #3: ZooKeeper Peer + Job Tracker
     ▪   Rack #4: ZooKeeper Peer + HBase Master
     ▪   Rack #5: ZooKeeper Peer + Backup Master
 ▪   The remaining 19 servers in each rack each run a Region Server, Data Node, and Task Tracker
HBase Enhancements
Goal: Zero Data Loss
Goal of Zero Data Loss/Correctness
▪   sync support added to hadoop-20 branch
    ▪   for keeping transaction log (WAL) in HDFS
    ▪   to guarantee durability of transactions
▪   Row-level ACID compliance
▪   Enhanced HDFS’s Block Placement Policy:
    ▪   Original: rack aware, but minimally constrained
    ▪   Now: Placement of replicas constrained to configurable node groups
    ▪   Result: Data loss probability reduced by orders of magnitude
Availability/Stability improvements
▪   HBase Master rewrite: region assignments using ZK
▪   Rolling Restarts – doing software upgrades without downtime
▪   Interrupt Compactions – prioritize availability over minor perf gains
▪   Timeouts on client-server RPCs
▪   Staggered major compaction to avoid compaction storms
Performance Improvements
▪   Compactions
    ▪   critical for read performance
    ▪   Improved compaction algo
    ▪   delete/TTL/overwrite processing in minor compactions
▪   Read optimizations:
    ▪   Seek optimizations for rows with a large number of cells
    ▪   Bloom filters to minimize HFile lookups
    ▪   Timerange hints on HFiles (great for temporal data)
    ▪   Improved handling of compressed HFiles
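
Bloom filters (and how many versions a family keeps) are per-column-family settings. A hedged sketch using the long-lived HColumnDescriptor admin API, with illustrative table and family names:

```java
// Sketch: enabling a row-level bloom filter on a column family so point
// reads can skip HFiles that cannot contain the requested row.
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.regionserver.BloomType;

public class FamilyTuningSketch {
    static void createTunedTable(Admin admin) throws Exception {
        HColumnDescriptor cf = new HColumnDescriptor("m"); // illustrative family
        cf.setBloomFilterType(BloomType.ROW); // minimize HFile lookups on reads
        cf.setMaxVersions(Integer.MAX_VALUE); // e.g. keep every messageID version
        HTableDescriptor table = new HTableDescriptor(TableName.valueOf("messages"));
        table.addFamily(cf);
        admin.createTable(table);
    }
}
```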
Operational Experiences
▪   Darklaunch:
    ▪   shadow traffic on test clusters for continuous, at scale testing
    ▪   experiment/tweak knobs
    ▪   simulate failures, test rolling upgrades
▪   Constant (pre-sharding) region count & controlled rolling splits
▪   Administrative tools and monitoring
    ▪   Alerts (HBCK, memory alerts, perf alerts, health alerts)
    ▪   auto detecting/decommissioning misbehaving machines
    ▪   Dashboards
▪   Application level backup/recovery pipeline
Working within the Apache community
▪   Growing with the community
    ▪   Started with a stable, healthy project
    ▪   In house expertise in both HDFS and HBase
    ▪   Increasing community involvement
▪   Undertook massive feature improvements with community help
    ▪   HDFS 0.20-append branch
    ▪   HBase Master rewrite
▪   Continually interacting with the community to identify and fix issues
    ▪   e.g., large responses (2GB RPC)
Data migration
Another place we used HBase heavily…
Move messaging data from MySQL to HBase
Move messaging data from MySQL to HBase
▪   In MySQL, inbox data was kept normalized
    ▪   user’s messages are stored across many different machines
▪   Migrating a user is basically one big join across tables spread over
    many different machines
▪   Multiple terabytes of data (for over 500M users)
▪   Cannot pound 1000s of production UDBs to migrate users
How we migrated
▪   Periodically, get a full export of all the users’ inbox data in MySQL
▪   And, use bulk loader to import the above into a migration HBase
    cluster
▪   To migrate users:
    ▪   Since users may continue to receive messages during migration:
        ▪   double-write (to old and new system) during the migration period
    ▪   Get a list of all recent messages (since last MySQL export) for the
        user
        ▪   Load new messages into the migration HBase cluster
        ▪   Perform the join operations to generate the new data
        ▪   Export it and upload into the final cluster
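
The double-write step above reduces to a simple rule: until cutover, every new message goes to both stores. A hedged sketch (all types here are hypothetical, not Facebook's actual code):

```java
// Sketch of double-writing during migration: MySQL stays the system of
// record while the HBase migration cluster is kept current.
public class DoubleWriterSketch {
    interface InboxStore { void write(String userId, byte[] message) throws Exception; }

    private final InboxStore mysql; // legacy store (authoritative until cutover)
    private final InboxStore hbase; // migration cluster

    DoubleWriterSketch(InboxStore mysql, InboxStore hbase) {
        this.mysql = mysql;
        this.hbase = hbase;
    }

    void deliver(String userId, byte[] message) throws Exception {
        mysql.write(userId, message); // old system first: still serves reads
        hbase.write(userId, message); // new system too, so no message is missed
    }
}
```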
Future Plans
HBase Expands
Facebook Insights Goes Real-Time
▪   Recently launched real-time analytics for social plugins on top of
    HBase
▪   Publishers get real-time distribution/engagement metrics:
        ▪   # of impressions, likes
        ▪   analytics by
            ▪   Domain, URL, demographics
            ▪   Over various time periods (the last hour, day, all-time)
▪   Makes use of HBase capabilities like:
    ▪   Efficient counters (read-modify-write increment operations)
    ▪   TTL for purging old data
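
Both capabilities are visible in the client API. A hedged sketch of a counter increment and a TTL'd column family (table and family names are illustrative):

```java
// Sketch: atomic server-side counter increment plus a column-family TTL.
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class InsightsSketch {
    // Atomic read-modify-write increment, applied on the region server,
    // so concurrent clients never clobber each other's counts.
    static long countImpression(Table t, String url) throws Exception {
        return t.incrementColumnValue(Bytes.toBytes(url),
                Bytes.toBytes("stats"), Bytes.toBytes("impressions"), 1L);
    }

    // TTL: cells older than 30 days are purged during compactions,
    // with no explicit delete traffic from the application.
    static HColumnDescriptor statsFamilyWithTtl() {
        HColumnDescriptor cf = new HColumnDescriptor("stats"); // illustrative
        cf.setTimeToLive(30 * 24 * 3600); // seconds
        return cf;
    }
}
```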
Future Work
It is still early days…!
▪   Namenode HA (AvatarNode)
▪   Fast hot-backups (Export/Import)
▪   Online schema & config changes
▪   Running HBase as a service (multi-tenancy)
▪   Features (like secondary indices, batching hybrid mutations)
▪   Cross-DC replication
▪   Lot more performance/availability improvements
Thanks! Questions?
facebook.com/engineering
QCon Hangzhou · October 20–22, 2011
www.qconhangzhou.com (launching in June)

QCon Beijing official website and materials download
www.qconbeijing.com
