
Hadoop: Distributed Data Processing


This is an updated version of Amr's Hadoop presentation. Amr recently gave this talk at the NASA CIDU event, the TDWI LA Chapter, and Netflix HQ. Watch the PowerPoint version if you can, as it includes animations; the slides also include handout notes with additional information.

Published in: Technology

Hadoop: Distributed Data Processing

  1. Hadoop: Distributed Data Processing (title slide)
  2. Outline
     - Scaling for Large Data Processing
     - What is Hadoop?
     - HDFS and MapReduce
     - Hadoop Ecosystem
     - Hadoop vs. RDBMSes
     - Conclusion
  3. Current Storage Systems Can't Compute
     (diagram: instrumentation and collection feed a mostly-append storage farm for
     unstructured data at 20TB/day; an ETL grid loads an RDBMS at 200GB/day, which
     serves interactive apps plus ad hoc queries & data mining; filer heads are a
     bottleneck, leaving most of the data in non-consumption)
  4. The Solution: A Store-Compute Grid
     (diagram: instrumentation and collection feed a mostly-append storage +
     computation grid that runs "batch" apps, ETL and aggregations, and ad hoc
     queries & data mining, with an RDBMS alongside it serving interactive apps)
  5. What is Hadoop?
     - A scalable, fault-tolerant grid operating system for data storage and processing
     - Its scalability comes from the marriage of:
       - HDFS: self-healing, high-bandwidth clustered storage
       - MapReduce: fault-tolerant distributed processing
     - Operates on unstructured and structured data
     - A large and active ecosystem (many developers and additions like HBase, Hive, Pig, ...)
     - Open source under the friendly Apache License
     - http://wiki.apache.org/hadoop/
  6. Hadoop History
     - 2002-2004: Doug Cutting and Mike Cafarella start working on Nutch
     - 2003-2004: Google publishes the GFS and MapReduce papers
     - 2004: Cutting adds DFS & MapReduce support to Nutch
     - 2006: Yahoo! hires Cutting; Hadoop spins out of Nutch
     - 2007: The NY Times converts 4TB of archives using 100 EC2 instances
     - 2008: Web-scale deployments at Yahoo!, Facebook, Last.fm
     - April 2008: Yahoo! does the fastest sort of a TB: 3.5 minutes over 910 nodes
     - May 2009: Yahoo! does the fastest sort of a TB, 62 seconds over 1,460 nodes, and sorts a PB in 16.25 hours over 3,658 nodes
     - June 2009, Oct 2009: Hadoop Summit (750), Hadoop World (500)
     - September 2009: Doug Cutting joins Cloudera
  7. Hadoop Design Axioms
     - The system shall manage and heal itself
     - Performance shall scale linearly
     - Compute should move to data
     - Simple core, modular and extensible
  8. HDFS: Hadoop Distributed File System
     - Block size = 64MB
     - Replication factor = 3
     - Cost/GB is a few ¢/month vs. $/month
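The two numbers on this slide can be sketched in a few lines. The following is a toy model, not Hadoop's implementation: it splits a file into 64MB blocks and assigns each block to three nodes round-robin, whereas real HDFS uses rack-aware placement decided by the Name Node. The node names are illustrative.

```python
# Toy sketch of HDFS block splitting and 3-way replication (NOT real HDFS:
# actual placement is rack-aware and chosen by the Name Node).
import itertools

BLOCK_SIZE = 64 * 1024 * 1024   # default block size on this slide: 64MB
REPLICATION = 3                 # default replication factor: 3

def place_blocks(file_size, nodes):
    """Return {block_index: [replica nodes]} for a file of file_size bytes."""
    n_blocks = -(-file_size // BLOCK_SIZE)     # ceiling division
    ring = itertools.cycle(nodes)              # naive round-robin placement
    return {b: [next(ring) for _ in range(REPLICATION)] for b in range(n_blocks)}

plan = place_blocks(200 * 1024 * 1024, ["node1", "node2", "node3", "node4"])
print(len(plan))   # 4 -- a 200MB file needs four 64MB blocks
print(plan[0])     # ['node1', 'node2', 'node3'] -- each block lives on 3 nodes
```

Losing any single node leaves at least two replicas of every block, which is where the "self-healing" property on slide 5 starts: the Name Node re-replicates under-replicated blocks automatically.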
  9. MapReduce: Distributed Processing
  10. MapReduce Example for Word Count
      - SQL equivalent: SELECT word, COUNT(1) FROM docs GROUP BY word;
      - Unix pipeline equivalent: cat *.txt | mapper.pl | sort | reducer.pl > out.txt
      (diagram: N input splits of (docid, text) flow through M map tasks, each
      emitting (word, count) pairs; the shuffle sorts pairs by word; R reduce
      tasks each sum the counts and write one output file of (sorted word, sum
      of counts) -- e.g. "To Be Or Not To Be?" contributes partial counts like
      Be, 5 and Be, 7 that a reducer sums)
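The pipeline on this slide can be mimicked in a single process, replacing the hypothetical mapper.pl and reducer.pl scripts with two small Python functions: map emits (word, 1) pairs, sorting plays the role of the shuffle, and reduce sums each word's counts. This is a sketch of the programming model only, not of Hadoop's distributed execution.

```python
# Single-process sketch of the word-count MapReduce on this slide,
# mirroring: cat *.txt | mapper | sort | reducer
from itertools import groupby
from operator import itemgetter

def mapper(docs):
    # Map: (docid, text) -> stream of (word, 1) pairs
    for _docid, text in docs:
        for word in text.lower().replace("?", "").split():
            yield (word, 1)

def reducer(pairs):
    # Shuffle = sort by word; Reduce = sum the counts per word
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (word, sum(count for _word, count in group))

docs = [(1, "To Be Or Not To Be?")]
print(dict(reducer(mapper(docs))))
# {'be': 2, 'not': 1, 'or': 1, 'to': 2}
```

In real Hadoop, many map tasks run this logic in parallel over input splits, and the framework's shuffle routes all pairs for a given word to the same reduce task, so the per-word sums are exact even at scale.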
  11. Hadoop High-Level Architecture
      - Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
      - Name Node: maintains the mapping of file blocks to data node slaves
      - Job Tracker: schedules jobs across task tracker slaves
      - Data Node: stores and serves blocks of data
      - Task Tracker: runs tasks (work units) within a job
      - Data Nodes and Task Trackers share the same physical nodes
  12. Apache Hadoop Ecosystem
      - HDFS: Hadoop Distributed File System, at the base of the stack
      - MapReduce: job scheduling/execution system (with Streaming/Pipes APIs)
      - Hive (SQL) and Pig (data flow) run on top of MapReduce
      - HBase: key-value store
      - Avro: serialization
      - ZooKeeper: coordination
      - Sqoop: moves data between Hadoop and RDBMSes
      - BI reporting and ETL tools connect at the top of the stack
  13. Use the Right Tool for the Right Job
      - Hadoop, when to use:
        - Affordable storage/compute
        - Structured or not (agility)
        - Resilient auto scalability
      - Relational databases, when to use:
        - Interactive reporting (<1 sec)
        - Multistep transactions
        - Interoperability
  17. Economics of Hadoop
      - Typical hardware: two quad-core Nehalems, 24GB RAM, 12 x 1TB SATA disks (JBOD mode, no need for RAID), 1 Gigabit Ethernet card
      - Cost/node: $5K/node
      - Effective HDFS space: 1/4 reserved for temp shuffle space, which leaves 9TB/node; 3-way replication leads to 3TB effective HDFS space/node; but assuming 7x compression, that becomes ~20TB/node
      - Effective cost per user TB: $250/TB
      - Other solutions cost in the range of $5K to $100K per user TB
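The per-TB figure on the Economics slide follows directly from the stated assumptions, and is easy to reproduce. The arithmetic below uses only the slide's numbers; note that 3TB x 7 is 21TB, which the slide rounds to ~20TB, and $5K / 21TB is ~$238/TB, which the slide rounds to ~$250/TB.

```python
# Reproducing the cost-per-user-TB arithmetic from the Economics slide.
raw_tb = 12               # 12 x 1TB SATA disks per node
node_cost = 5000          # ~$5K per node

usable = raw_tb * 0.75            # 1/4 reserved for temp shuffle space -> 9TB
effective = usable / 3            # 3-way replication -> 3TB effective HDFS/node
with_compression = effective * 7  # assuming ~7x compression -> 21TB (~20TB) user data

print(usable, effective, with_compression)   # 9.0 3.0 21.0
print(round(node_cost / with_compression))   # 238 -- roughly the $250/TB on the slide
```

The compression ratio is the soft assumption here: at 1x compression the same node costs ~$1,667 per user TB, still far below the $5K-$100K/TB range quoted for other solutions.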
  18. Sample Talks from Hadoop World '09
      - VISA: Large Scale Transaction Analysis
      - JP Morgan Chase: Data Processing for Financial Services
      - China Mobile: Data Mining Platform for Telecom Industry
      - Rackspace: Cross Data Center Log Processing
      - Booz Allen Hamilton: Protein Alignment using Hadoop
      - eHarmony: Matchmaking in the Hadoop Cloud
      - General Sentiment: Understanding Natural Language
      - Yahoo!: Social Graph Analysis
      - Visible Technologies: Real-Time Business Intelligence
      - Facebook: Rethinking the Data Warehouse with Hadoop and Hive
      - Slides and videos at http://www.cloudera.com/hadoop-world-nyc
  19. Cloudera Desktop
  20. Conclusion
      - Hadoop is a data-grid operating system that provides an economically scalable solution for storing and processing large amounts of unstructured or structured data over long periods of time.
  21. Contact Information
      - Amr Awadallah, CTO, Cloudera Inc.
      - aaa@cloudera.com
      - http://twitter.com/awadallah
      - Online training videos and info:
        - http://cloudera.com/hadoop-training
        - http://cloudera.com/blog
        - http://twitter.com/cloudera
