
C-MR: Continuously Executing MapReduce Workflows on Multi-Core Processors

  1. C-MR: Continuously Executing MapReduce Workflows on Multi-Core Processors. Speaker: LIN Qian, http://www.comp.nus.edu.sg/~linqian
  2. Problem • Stream applications are often time-critical • Enabling stream support for MapReduce jobs – Simple for the Map operations – Hard for the Reduce operations • Continuously executing MapReduce workflows requires a great deal of coordination
  3. C-MR Workflow • Windows: temporal subdivisions of a stream, described by – size (the amount of the stream each window spans) – slide (the interval between successive windows)
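The size/slide definition above can be sketched in a few lines. This is an illustrative model, not C-MR's actual implementation; the `windows` helper and its argument names are assumptions.

```python
# Hypothetical sketch of size/slide windowing: split a timestamped stream
# into (possibly overlapping) windows. `size` is the span of time each
# window covers; `slide` is the gap between consecutive window starts.

def windows(stream, size, slide):
    """Yield (start, items) for each window over (timestamp, value) pairs."""
    if not stream:
        return
    first_ts = min(ts for ts, _ in stream)
    last_ts = max(ts for ts, _ in stream)
    start = first_ts
    while start <= last_ts:
        yield start, [v for ts, v in stream if start <= ts < start + size]
        start += slide

stream = [(t, t * 10) for t in range(6)]          # timestamps 0..5
result = list(windows(stream, size=3, slide=2))   # windows start at t = 0, 2, 4
```

With `size > slide`, as here, consecutive windows overlap — the case that motivates the incremental-computation slide later in the deck.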
  4. C-MR Programming Interface • Map/Reduce operations
  5. C-MR Programming Interface (cont. 1) • Input/Output streams
  6. C-MR Programming Interface (cont. 2) • Create workflows of continuous MapReduce jobs
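The slides above list the interface pieces without code (the original figures are not in this transcript). As a hedged sketch of what a continuous MapReduce job over one window might look like — the `run_window` helper and its signature are illustrative, not the real C-MR API:

```python
# Illustrative sketch: a MapReduce job applied to one window of a stream.
# map_fn emits (key, value) pairs per item; values are grouped by key and
# each group is folded by reduce_fn. Names here are assumptions.
from collections import defaultdict

def run_window(window, map_fn, reduce_fn):
    """Apply map_fn to each item, group by key, reduce each group."""
    groups = defaultdict(list)
    for item in window:
        for key, value in map_fn(item):
            groups[key].append(value)
    return {k: reduce_fn(k, vs) for k, vs in groups.items()}

# Word count over one window of a text stream
window = ["a b", "b c", "a a"]
counts = run_window(
    window,
    map_fn=lambda line: [(w, 1) for w in line.split()],
    reduce_fn=lambda k, vs: sum(vs),
)
```

A continuous workflow would chain such jobs, feeding each job's per-window output stream into the next job's input stream.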
  7. C-MR vs. MapReduce • MapReduce computing nodes receive a set of Map or Reduce tasks, and each node must wait for all other nodes to complete their tasks before being allocated additional tasks. • C-MR uses pull-based data acquisition, allowing computing nodes to execute any Map or Reduce workload as they are able. Thus, straggling nodes will not hinder the progress of the other nodes if there is data available to process elsewhere in the workflow.
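The pull-based acquisition described above can be sketched with a shared task queue; this is a simplified single-threaded model (the `worker` loop and task tuples are assumptions, not C-MR code):

```python
# Sketch of pull-based data acquisition: instead of waiting at a barrier,
# each worker repeatedly pulls whatever Map or Reduce task is available
# anywhere in the workflow and runs it. Illustrative only.
from queue import Empty, Queue

def worker(tasks: Queue, results: list):
    """Drain the shared queue, executing any available (op, data) task."""
    while True:
        try:
            op, data = tasks.get_nowait()   # pull any available work
        except Empty:
            return                          # no work anywhere: stop
        results.append(op(data))

tasks = Queue()
for i in range(4):
    tasks.put((lambda x: x * x, i))         # mixed workload items
results = []
worker(tasks, results)
```

Because workers pull rather than being assigned a fixed partition, a slow node simply pulls fewer tasks while the others keep the workflow moving.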
  8. C-MR Architecture
  9. Stream and Window Management • The merged output streams are not guaranteed to retain their original orderings. • Solution: replicating window-bounding punctuations
  10. Stream and Window Management (cont. 1) • A node consumes the punctuation from the sorted input stream buffer
  11. Stream and Window Management (cont. 2) • Replicate that punctuation to the other nodes
  12. Stream and Window Management (cont. 3) • After all replicas are received at the intermediate buffer, collect the data whose timestamps fall into the applicable interval and materialize them as a window
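The three steps above can be condensed into a small sketch of the final materialization check. This is a simplified model under stated assumptions (punctuations identified by their interval, replica counts tracked per interval); the names are hypothetical, not C-MR's internals.

```python
# Simplified sketch of punctuation-driven window materialization. A
# punctuation bounds a window interval; only after every worker's replica
# of the same punctuation has arrived are the buffered tuples whose
# timestamps fall in that interval collected into a window.

def materialize(buffer, punct_counts, num_workers, interval):
    """Return the window for `interval` once all replicas arrived, else None."""
    start, end = interval
    if punct_counts.get(interval, 0) < num_workers:
        return None  # still waiting on straggler replicas
    return [v for ts, v in buffer if start <= ts < end]

buffer = [(0, "a"), (1, "b"), (2, "c"), (3, "d")]
pending = materialize(buffer, {(0, 2): 2}, 3, (0, 2))  # one replica missing
ready = materialize(buffer, {(0, 2): 3}, 3, (0, 2))    # all 3 replicas in
```

Waiting for all replicas is what restores a consistent window boundary even though the merged streams arrive out of order.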
  13. Operator Scheduling • Scheduling framework – Execute multiple policies simultaneously – Transition between policies based on resource availability • Scheduling policies
  14. Incremental Computation Output1 = d1 + d2 + d3 + ... + dn Output2 = d2 + d3 + d4 + ... + dn+1 Output3 = d3 + d4 + d5 + ... + dn+2 Output4 = d4 + d5 + d6 + ... + dn+3 Share the common data subset across the window computations
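The overlap in the equations above means each new output can reuse the previous one instead of re-summing n terms. A minimal sketch of that idea (the helper name is illustrative):

```python
# Incremental computation over overlapping sliding windows:
# Output_i = d_i + ... + d_{i+n-1}, so
# Output_{i+1} = Output_i - d_i + d_{i+n}  (drop the leaving element,
# add the entering one) -- O(1) per window instead of O(n).

def incremental_sums(data, n):
    """Sliding-window sums of width n, each derived from the previous one."""
    out = [sum(data[:n])]                       # Output1 computed in full once
    for i in range(1, len(data) - n + 1):
        out.append(out[-1] - data[i - 1] + data[i + n - 1])
    return out

sums = incremental_sums([1, 2, 3, 4, 5], 3)     # windows [1,2,3], [2,3,4], [3,4,5]
```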
  15. Evaluation • Continuously executing a MapReduce job – Compare with Phoenix++
  16. Evaluation (cont. 1) • Operator scheduling – Oldest data first (ODF) – Best memory trade-off (MEM) – Hybrid utilization of both policies
  17. Evaluation (cont. 2) • Workflow optimization
  18. Evaluation (cont. 3) • Workflow optimization – Latency and throughput
  19. Thank you
  20. Two Properties of Streams • Unbounded • Accessed sequentially – Hard to handle with a traditional DBMS
  21. Query Operators • Unbounded stateful operators – maintain state with no upper bound in size → run out of memory • Blocking operators – read an entire input before emitting a single output → might never produce a result • Either never use them, or use them after refactoring
  22. Punctuations • Mark the end of substreams – allowing us to view an infinite stream as a mixture of finite streams
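The point of the slide above — punctuations turn one infinite stream into many finite ones, making blocking operators safe again — can be sketched directly (the `PUNCT` marker and helper are illustrative assumptions):

```python
# Sketch: a punctuation marks the end of a finite substream, so a blocking
# operator (here, sum, which must see all its input) can run per substream
# even though the overall stream is unbounded. Names are hypothetical.

PUNCT = object()  # end-of-substream marker

def per_substream_sums(stream):
    """Apply a blocking aggregate (sum) to each punctuation-bounded chunk."""
    results, chunk = [], []
    for item in stream:
        if item is PUNCT:
            results.append(sum(chunk))  # safe: this substream is finite
            chunk = []
        else:
            chunk.append(item)
    return results

totals = per_substream_sums([1, 2, PUNCT, 3, 4, 5, PUNCT])
```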

Editor's notes

  • Repeatedly invoking a Phoenix++ MapReduce job over a stream results in many redundant computations (at both the Map and Reduce operations). C-MR processes each datum only once at the Map operator, and the inclusion of the Combine operator significantly decreases the redundant work performed at the Reduce operator.
  • 1. Data is often generated from a source that can potentially produce an unbounded stream. 2. A stream's contents can only be accessed sequentially. Traditional queries are composed of relational operators that assume a finite data source that can be accessed randomly.
