Big Data and Cloud Computing


  1. Cloud and Big Data. Farzad Nozarian (fnozarian@aut.ac.ir), Amirkabir University of Technology. With the help of Dr. Amir H. Payberah (amir@sics.se)
  2. Big Data Analytics Stack
  3. Hadoop Big Data Analytics Stack
  4. Spark Big Data Analytics Stack
  5. Big Data - File systems • Traditional file systems are not well designed for large-scale data processing systems. • Efficiency has a higher priority than other features, e.g., directory services. • The massive size of the data means it has to be stored across multiple machines in a distributed way. • HDFS/GFS, Amazon S3, ...
  6. Big Data - Database • Relational Database Management Systems (RDBMS) were not designed to be distributed. • NoSQL databases relax one or more of the ACID properties: BASE. • Different data models: key/value, column-family, graph, document. • HBase/BigTable, Dynamo, Scalaris, Cassandra, MongoDB, Voldemort, Riak, Neo4J, ...
  7. Big Data - Resource Management • Different frameworks require different computing resources. • Large organizations need the ability to share data and resources between multiple frameworks. • A resource manager shares the resources of a cluster between multiple frameworks while providing resource isolation. • Mesos, YARN, Quincy, ...
  8. Big Data - Execution Engine • Scalable and fault-tolerant parallel data processing on clusters of unreliable machines. • A data-parallel programming model for clusters of commodity machines. • MapReduce, Spark, Stratosphere, Dryad, Hyracks, ...
  9. Big Data - Query/Scripting Language • Low-level programming of execution engines, e.g., MapReduce, is not easy for end users. • A high-level language is needed to improve the query capabilities of execution engines. • It translates user-defined functions into the low-level API of the execution engines. • Pig, Hive, Shark, Meteor, DryadLINQ, SCOPE, ...
  10. Big Data - Stream Processing • Provides users with fresh, low-latency results. • Database Management Systems (DBMS) vs. Data Stream Management Systems (DSMS). • Storm, S4, SEEP, D-Stream, Naiad, ...
  11. Big Data - Graph Processing • Many problems are expressed using graphs: they have sparse computational dependencies and need multiple iterations to converge. • Data-parallel frameworks, such as MapReduce, are not ideal for these problems: they are slow. • Graph processing frameworks are optimized for graph-based problems. • Pregel, Giraph, GraphX, GraphLab, PowerGraph, GraphChi, ...
  12. Big Data - Machine Learning • Implementing and consuming machine learning techniques at scale is a difficult task for developers and end users. • Several platforms address this by providing scalable machine learning and data mining libraries. • Mahout, MLBase, SystemML, Ricardo, Presto, ...
  13. Big Data - Configuration and Synchronization Service • A means to synchronize distributed applications' accesses to shared resources. • It allows distributed processes to coordinate with each other. • Zookeeper, Chubby, ...
  14. Hadoop Ecosystem
  15. Hadoop Ecosystem - HDFS • A foundational component of the Hadoop ecosystem is the Hadoop Distributed File System (HDFS). • HDFS is the mechanism by which a large amount of data can be distributed over a cluster of computers; data is written once, but read many times for analytics. • It provides the foundation for other tools, such as HBase.
  16. Hadoop Ecosystem - MapReduce • Hadoop's main execution framework is MapReduce, a programming model for distributed, parallel data processing that breaks jobs into map and reduce phases. • Developers write MapReduce jobs for Hadoop, using data stored in HDFS for fast data access. • Because of the way MapReduce works, Hadoop brings the processing to the data in a parallel fashion, resulting in fast processing.
  17. Hadoop Ecosystem - HBase • A column-oriented NoSQL database built on top of HDFS, HBase is used for fast read/write access to large amounts of data. • HBase uses Zookeeper for its management, to ensure that all of its components are up and running.
  18. Hadoop Ecosystem - Zookeeper • Zookeeper is Hadoop's distributed coordination service. • Designed to run over a cluster of machines, it is a highly available service used for the management of Hadoop operations, and many components of Hadoop depend on it.
  19. Hadoop Ecosystem - Oozie • A scalable workflow system, Oozie is integrated into the Hadoop stack and is used to coordinate the execution of multiple MapReduce jobs. • It is capable of managing a significant amount of complexity, basing execution on external events that include timing and the presence of required data.
  20. Hadoop Ecosystem - Pig • An abstraction over the complexity of MapReduce programming, the Pig platform includes an execution environment and a scripting language (Pig Latin) used to analyze Hadoop data sets. • Its compiler translates Pig Latin into sequences of MapReduce programs.
  21. Hadoop Ecosystem - Hive • An SQL-like, high-level language used to run queries on data stored in Hadoop. • Hive enables developers not familiar with MapReduce to write data queries that are translated into MapReduce jobs in Hadoop.
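As an illustration, the sketch below runs a HiveQL query from Java over JDBC against HiveServer2; the connection URL, database, and table name ("words") are assumptions for the example, not from the slides.

```java
// A minimal sketch of submitting a HiveQL query over JDBC (HiveServer2).
// Host, port, database, and table name are illustrative assumptions.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryExample {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");   // HiveServer2 JDBC driver
    try (Connection conn =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement();
         // Hive translates this SQL-like query into MapReduce jobs behind the scenes
         ResultSet rs = stmt.executeQuery(
             "SELECT word, COUNT(*) FROM words GROUP BY word")) {
      while (rs.next()) {
        System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
      }
    }
  }
}
```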
  22. Hadoop Ecosystem - Sqoop • A connectivity tool for moving data between relational databases or data warehouses and Hadoop. • Sqoop leverages the database to describe the schema of the imported/exported data, and MapReduce for parallelization and fault tolerance.
  23. Hadoop Ecosystem - Flume • A distributed, reliable, and highly available service for efficiently collecting, aggregating, and moving large amounts of data from individual machines to HDFS. • It provides streaming data flows, allowing data from multiple machines within an enterprise to be moved into Hadoop.
  24. Beyond the core components • Whirr — This is a set of libraries that allows users to easily spin up Hadoop clusters on top of Amazon EC2, Rackspace, or any virtual infrastructure. • Mahout — This is a machine-learning and data-mining library that provides MapReduce implementations for popular algorithms used for clustering, regression testing, and statistical modeling. • BigTop — This is a formal process and framework for packaging and interoperability testing of Hadoop's sub-projects and related components. • Ambari — This is a project aimed at simplifying Hadoop management by providing support for provisioning, managing, and monitoring Hadoop clusters.
  25. Storing Data in Hadoop: HDFS and HBase
  26. HDFS - Architecture • The HDFS design is based on the design of the Google File System (GFS). • To be able to store a very large amount of data (terabytes or petabytes), HDFS is designed to spread the data across a large number of machines and to support much larger file sizes than distributed file systems such as NFS. • HDFS uses data replication. • To better integrate with Hadoop's MapReduce, HDFS allows data to be read and processed locally.
  27. HDFS - Architecture • HDFS is implemented as a block-structured file system.
  28. HDFS - Using HDFS Files • User applications access the HDFS file system using an HDFS client. • Accessing HDFS: • FileSystem (FS) shell • HDFS Java APIs
  29. HDFS - Using HDFS Files
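As a minimal illustration of the HDFS Java API named above, the sketch below reads a file line by line; the file path is an assumed example (the FS shell equivalent would be hdfs dfs -cat).

```java
// A minimal sketch of reading a file through the HDFS Java API.
// The path /user/demo/input.txt is an illustrative assumption.
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();      // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);          // connects to the configured file system
    Path file = new Path("/user/demo/input.txt");  // hypothetical path

    try (BufferedReader reader =
             new BufferedReader(new InputStreamReader(fs.open(file)))) {
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);                  // print each line of the HDFS file
      }
    }
  }
}
```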
  30. HBase - Architecture • HBase is a distributed, versioned, column-oriented, multidimensional storage system, designed for high performance and high availability. • HBase is an open source implementation of Google's BigTable architecture. • As in traditional relational database management systems (RDBMSs), data in HBase is organized in tables. • Unlike RDBMSs, however, HBase supports a very loose schema definition and does not provide any joins, a query language, or SQL. • The main focus of HBase is on Create, Read, Update, and Delete (CRUD) operations on wide, sparse tables. • HBase leverages HDFS for its persistent data storage.
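To make the CRUD focus concrete, the sketch below writes and reads one cell with the HBase Java client; the table name "users" and column family "info" are illustrative assumptions.

```java
// A minimal sketch of HBase create/read with the Java client API (HBase 1.x+ style).
// Table name "users" and column family "info" are illustrative assumptions.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseCrudExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml (ZooKeeper quorum, etc.)
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("users"))) {

      // Create/Update: write one cell into the "info" column family
      Put put = new Put(Bytes.toBytes("row-1"));
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
      table.put(put);

      // Read: fetch the row back and print the stored value
      Result result = table.get(new Get(Bytes.toBytes("row-1")));
      byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
      System.out.println(Bytes.toString(name));
    }
  }
}
```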
  31. Processing Data with MapReduce
  32. MapReduce - Roadmap • Understanding MapReduce fundamentals • Getting to know MapReduce application execution • Understanding MapReduce application design
  33. MapReduce - Getting to Know • MapReduce is a framework for executing highly parallelizable and distributable algorithms across huge data sets using a large number of commodity computers. • Inspired by the map and reduce primitives of functional programming, it was introduced by Google in 2004. • MapReduce was introduced to solve large-data computational problems, and is specifically designed to run on commodity hardware. • It is based on divide-and-conquer principles: the input data sets are split into independent chunks, which are processed by the mappers in parallel.
  34. MapReduce - Getting to Know
  35. MapReduce - Execution Pipeline
  36. MapReduce - Runtime Coordination and Task Management
  37. Word count implementation - Map Phase
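As a minimal sketch of this phase, a standard Hadoop word-count mapper tokenizes each input line and emits a (word, 1) pair for every token; class names here are illustrative.

```java
// Standard word-count map phase: emit (word, 1) for every token of the input line.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer tokenizer = new StringTokenizer(value.toString());
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      context.write(word, ONE);   // one count per occurrence of the word
    }
  }
}
```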
  38. Word count implementation - Reduce Phase
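A matching reducer then sums the counts emitted for each word; again this is the standard word-count implementation, sketched for reference.

```java
// Standard word-count reduce phase: sum all the 1s emitted for each word.
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable count : values) {
      sum += count.get();                        // add up the per-occurrence counts
    }
    context.write(key, new IntWritable(sum));    // emit (word, total count)
  }
}
```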
  39. Word count implementation - Driver
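The driver wires the mapper and reducer into a job and submits it; in this sketch the input and output paths come from the command line.

```java
// Driver: configure and submit the word-count job.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCountDriver.class);

    job.setMapperClass(WordCountMapper.class);
    job.setCombinerClass(WordCountReducer.class);   // the reducer doubles as a combiner here
    job.setReducerClass(WordCountReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));     // input directory in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));   // output directory in HDFS

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```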
  40. Designing MapReduce Implementations
  41. Necessary questions for reformulating the initial problem in terms of MapReduce • How do you break up a large problem into smaller tasks? More specifically, how do you decompose the problem so that the smaller tasks can be executed in parallel? • Which key/value pairs can you use as the inputs/outputs of every task? • How do you bring together all the data required for the calculation? More specifically, how do you organize the processing in such a way that all the data necessary for the calculation is in memory at the same time?
  42. Simple Data Processing with MapReduce
  43. Inverted Indexes Example
  44. Building Joins with MapReduce • Two "standard" implementations exist for joining data in MapReduce: • Reduce-side join • Map-side join • The most common implementation of a join is the reduce-side join (sketched below). • A map-side join works very well for one-to-one joins, where at most one record from every data set has the same key.
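A reduce-side join can be sketched as follows: each mapper tags every record with its source, and the reducer pairs up the records that share a join key. The tab-separated record format, the use of the input file name as the tag, and the "left"/"right" naming convention are illustrative assumptions; the driver, which would add both data sets as inputs, is omitted.

```java
// A minimal sketch of a reduce-side join over two tab-separated data sets
// keyed by a join id. Formats and naming conventions are illustrative.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class ReduceSideJoin {

  // Mapper: emit (joinKey, "sourceTag<TAB>rest-of-record")
  public static class TaggingMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // Use the input file name as the source tag for this record
      String tag = ((FileSplit) context.getInputSplit()).getPath().getName();
      String[] fields = value.toString().split("\t", 2);
      if (fields.length < 2) return;   // skip malformed lines
      context.write(new Text(fields[0]), new Text(tag + "\t" + fields[1]));
    }
  }

  // Reducer: all records with the same join key meet here; pair them across sources
  public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      List<String> left = new ArrayList<>();
      List<String> right = new ArrayList<>();
      for (Text value : values) {
        String[] parts = value.toString().split("\t", 2);
        if (parts[0].startsWith("left")) {   // illustrative file-name convention
          left.add(parts[1]);
        } else {
          right.add(parts[1]);
        }
      }
      // Emit the cross product of matching records from the two sides
      for (String l : left) {
        for (String r : right) {
          context.write(key, new Text(l + "\t" + r));
        }
      }
    }
  }
}
```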
  45. Road Enrichment Example
  46. A simplified road enrichment algorithm: 1. Find all links connected to a given node. For example, as shown in the figure, node N1 has links L1, L2, L3, and L4, while node N2 has links L4, L5, and L6. 2. Based on the number of lanes for every link at the node, calculate the road width at the intersection. 3. Based on the road width, calculate the intersection geometry. 4. Based on the intersection geometry, move the road's end point to tie it to the intersection geometry.
  47. Algorithm assumptions • A node is described with an object N with a key NN1 … NNm. For example, node N1 can be described as NN1 and N2 as NN2. All nodes are stored in the nodes input file. • A link is described with an object L with a key LL1 … LLm. For example, link L1 can be described as LL1, L2 as LL2, and so on. All the links are stored in the links source file. • We also introduce an object of the type link or node (LN), which can have any key. • Finally, it is necessary to define two more types: intersection (S) and road (R).
  48. Phase 1: Calculation of Intersection Geometry and Moving the Road's End Points Job
  49. Phase 2: Merge Roads Job
  50. Links Elevation Example • This problem can be defined as follows: given a links graph and a terrain model, convert two-dimensional (x, y) links into three-dimensional (x, y, z) links. This process is called link elevation.
  51. Simplified link elevation algorithm: 1. Split every link into fixed-length fragments (for example, 10 meters). 2. For every piece, calculate the heights (from the terrain model) of both its start and end points. 3. Combine the pieces back into the original links.
  52. Phase 1: Split Links into Pieces and Elevate Each Piece Job
  53. Phase 2: Combine Link Pieces into Original Links Job
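The two jobs above can be sketched roughly as follows, assuming a hypothetical TerrainModel.heightAt(x, y) lookup, a simple text encoding for links ("linkId&lt;TAB&gt;x1,y1 x2,y2 ..."), and per-segment splitting instead of true fixed-length fragmentation; all of these helper names and formats are illustrative assumptions, not taken from the slides.

```java
// A rough sketch of the two link-elevation jobs. TerrainModel, the point encoding,
// and the fragment splitting are illustrative simplifications.
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class LinkElevation {

  // Phase 1 mapper: split a link into pieces and elevate each piece.
  // Input value: "linkId<TAB>x1,y1 x2,y2 ..." (a 2-D polyline, illustrative format).
  public static class SplitAndElevateMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] parts = value.toString().split("\t", 2);
      if (parts.length < 2) return;                    // skip malformed lines
      String linkId = parts[0];
      String[] points = parts[1].split(" ");
      // Treat each consecutive point pair as one piece (fixed-length splitting omitted)
      for (int i = 0; i < points.length - 1; i++) {
        String start = elevate(points[i]);
        String end = elevate(points[i + 1]);
        // Key by link id; the piece index lets phase 2 restore the order
        context.write(new Text(linkId), new Text(i + "\t" + start + " " + end));
      }
    }

    private String elevate(String point) {
      String[] xy = point.split(",");
      double x = Double.parseDouble(xy[0]);
      double y = Double.parseDouble(xy[1]);
      double z = TerrainModel.heightAt(x, y);          // hypothetical terrain lookup
      return x + "," + y + "," + z;
    }
  }

  // Phase 2 reducer: combine the elevated pieces of one link back into a single link.
  public static class CombinePiecesReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      java.util.TreeMap<Integer, String> ordered = new java.util.TreeMap<>();
      for (Text value : values) {
        String[] parts = value.toString().split("\t", 2);
        ordered.put(Integer.parseInt(parts[0]), parts[1]);   // sort pieces by index
      }
      context.write(key, new Text(String.join(" ", ordered.values())));
    }
  }

  // Hypothetical stand-in for the terrain model described on the slide.
  static class TerrainModel {
    static double heightAt(double x, double y) {
      return 0.0;   // a real implementation would query the terrain data set
    }
  }
}
```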
