2. Big Data
• Big Data
– More than a single machine can process
– 100s of GBs, TBs, even PBs!
• Why the buzz now?
– Low Storage Costs
– Increased Computing power
– Potential for Tremendous Insights
– Data for Competitive Advantage
– Velocity, Variety, Volume
• Structured
• Unstructured – text, images, video, voice, etc.
6. Traditional Large-Scale Computation
• Computation has been Processor and Memory bound
– Relatively small amounts of data
– Faster processors, more RAM
• Distributed Systems
– Multiple machines for a single job
– Data exchange requires synchronization
– Finite bandwidth is available
– Temporal dependencies are complicated
– Difficult to deal with partial failures
• Data is stored on a SAN
• Data is copied to compute nodes
7. Data Becomes the Bottleneck
• Moore’s Law has held firm for over 40 years
– Processing power doubles every two years
– Processing speed is no longer the problem
• Getting the data to the processors becomes the
bottleneck
• Quick math
– Typical disk data transfer rate: ~75 MB/s
– Time taken to transfer 100 GB of data to the processor? About 22 minutes!
• Assuming sustained reads
• Actual time will be worse, since most servers have less than 100 GB of RAM
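The arithmetic above can be checked with a few lines, using the slide's assumed sustained rate of 75 MB/s:

```python
# Time to stream 100 GB off a single disk at a sustained 75 MB/s.
data_gb = 100
rate_mb_per_s = 75          # assumed sustained disk transfer rate

seconds = data_gb * 1024 / rate_mb_per_s
minutes = seconds / 60
print(f"{minutes:.1f} minutes")  # about 22.8 minutes
```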
8. Need for New Approach
• System must support partial failure
– Failure of a component should result in graceful degradation of
application performance
• If a component of the system fails, its workload should be
picked up by other components
– Failure should not result in the loss of any data
• Components should be able to rejoin upon recovery
• Component failures during execution of a job should not affect
the outcome of the job
• Scalability
– Increasing resources should support a proportional increase in load
capacity
9. Traditional vs Hadoop Approach
• Traditional
– Application Server: Application resides
– Database Server: Data resides
– Data is retrieved from the database and sent to the application for
processing
– Can become network- and I/O-intensive for GBs of data
• Hadoop
– Distribute data onto multiple commodity machines (data
nodes)
– Send the application to the data nodes instead and process the data in
parallel
10. What is Hadoop
• Hadoop is a framework for distributed data processing built on a
distributed filesystem
• Consists of two core components
– The Hadoop Distributed File System (HDFS)
– MapReduce
• Map and then Reduce
• A set of machines running HDFS and MapReduce is known as a
Hadoop Cluster.
• Individual machines are known as Nodes
11. Core Hadoop Concepts
• Distribute data as it is stored into the system
• Nodes talk to each other as little as possible
– Shared-nothing architecture
• Computation happens where the data is stored.
• Data is replicated multiple times on the system
for increased availability and reliability.
• Fault Tolerance
12. HDFS -1
• HDFS is a file system written in Java
– Sits on top of a native filesystem
• Such as ext3, ext4 or xfs
• Designed for storing very large files with streaming data access patterns
• Write-once, read-many-times pattern
• No random writes to files are allowed
– High latency, high throughput
– Provides redundant storage through replication
– Not recommended for lots of small files or low-latency requirements
13. HDFS - 2
• Files are split into blocks of 64 MB or 128 MB
• Data is distributed across machines at load time
• Blocks are replicated across multiple machines, known as DataNodes
– Default replication is 3 times
• Two types of nodes in an HDFS cluster (master–worker pattern)
– NameNode (Master)
• Keeps track of blocks that make up a file and where those blocks are located
– DataNodes
• Hold the actual blocks
• Stored as standard files in a set of directories specified in the Configuration
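The block and replication bookkeeping can be sketched as a quick calculation (the file size below is illustrative):

```python
import math

def hdfs_blocks(file_size_mb, block_size_mb=128, replication=3):
    """Return (number of blocks a file splits into,
    total block copies stored across DataNodes)."""
    blocks = math.ceil(file_size_mb / block_size_mb)
    return blocks, blocks * replication

# A 1 GB file with 128 MB blocks and the default replication factor of 3:
print(hdfs_blocks(1024))  # (8, 24)
```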
15. HDFS - 3
• HDFS Daemons
– NameNode
• Keeps track of which blocks make up a file and where those blocks are
located
• The cluster is not accessible without a NameNode
– Secondary NameNode
• Takes care of housekeeping activities for the NameNode (not a backup)
– DataNode
• Regularly reports its status to the NameNode in a heartbeat
• Sends a block report to the NameNode every hour and at startup
16. HDFS - 4
• Blocks are replicated across multiple machines, known as
DataNodes
• Read and Write HDFS files via the Java API
• Access to HDFS from the command line is achieved with the
hadoop fs command
– File system commands: ls, cat, put, get, rm
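A few illustrative `hadoop fs` invocations (the paths and file names are hypothetical):

```shell
# List a directory in HDFS
hadoop fs -ls /user/alice

# Copy a local file into HDFS (put), and back out again (get)
hadoop fs -put data.txt /user/alice/data.txt
hadoop fs -get /user/alice/data.txt copy.txt

# Print a file's contents, then remove it
hadoop fs -cat /user/alice/data.txt
hadoop fs -rm /user/alice/data.txt
```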
21. MapReduce
• OK, you have data in HDFS. What next?
• Processing via Map Reduce
• Parallel processing for large amounts of data
• Map function, Reduce function
• Data Locality
• Code is shipped to the node closest to data
• Java, Python, Ruby, Scala…
22. MapReduce Features
• Automatic parallelization and distribution
• Fault-tolerance (Speculative Execution)
• Status and monitoring tools
• Clean abstraction for programmers
– MapReduce programs are usually written in Java (other languages via Hadoop Streaming)
• Abstracts all the housekeeping activities from developers
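As a sketch of the programming model, here is a word count in the Hadoop Streaming style, with the shuffle-and-sort phase simulated locally. On a real cluster the mapper and reducer would be separate scripts reading stdin, with the shuffle done by the framework:

```python
# Word count, MapReduce-style: the mapper emits (word, 1) pairs and the
# reducer sums the counts for each key.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    for word in line.split():
        yield word.lower(), 1

def reducer(key, values):
    return key, sum(values)

lines = ["the quick brown fox", "the lazy dog"]
# Sorting the emitted pairs by key simulates Hadoop's shuffle-and-sort.
pairs = sorted(kv for line in lines for kv in mapper(line))
counts = dict(reducer(k, (v for _, v in grp))
              for k, grp in groupby(pairs, key=itemgetter(0)))
print(counts["the"])  # 2
```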
27. Hadoop Cluster
Master node:
– JobTracker (MapReduce)
– NameNode (HDFS)
Worker nodes 1, 2, 3 … n (worker processes):
– TaskTracker (MapReduce)
– DataNode (HDFS)
28. MapReduce
• MR1
• only MapReduce
• aka - batch mode
• MR2
• YARN based
• MR is one implementation on top of YARN
• batch, interactive, online, streaming….
33. How do you write MapReduce?
• MapReduce is a basic building block: a
programming approach for solving larger problems
• In most real-world applications, multiple MR jobs
are chained together
• Processing pipeline: the output of one job is the input to
another
39. Hive and Pig
• MapReduce code is typically written in Java
• Requires:
– A good Java programmer
– Who understands MapReduce
– Who understands the problem
– Who will be available to maintain and update the
code in the future as requirements change
40. Hive and Pig
• Many organizations have only a few developers who
can write good MapReduce code
• Many other people want to analyze data
– Business analysts
– Data scientists
– Statisticians
– Data analysts
• Need a higher level abstraction on top of MapReduce
– Ability to query the data without needing to know
MapReduce intimately
– Hive and Pig address these needs
42. Hive
• Abstraction on top of MapReduce
• Allows users to query data in the Hadoop cluster
without knowing Java or MapReduce
• Uses HiveQL Language
– Very Similar to SQL
• Hive Interpreter runs on a client machine
– Turns HiveQL queries into MapReduce jobs
– Submits those jobs to the cluster
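For example, a short HiveQL query like the following would be compiled into one or more MapReduce jobs (the `shakespeare` table of (word, freq) pairs is hypothetical):

```sql
-- Hypothetical table of (word, freq) pairs; the Hive interpreter
-- compiles this query into MapReduce jobs and submits them.
SELECT word, SUM(freq) AS total
FROM shakespeare
GROUP BY word
ORDER BY total DESC
LIMIT 10;
```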
43. Hive Data Model
• Hive ‘layers’ table definitions on top of data in
HDFS
• Tables
– Typed columns
• TINYINT, SMALLINT, INT, BIGINT
• FLOAT, DOUBLE
• STRING
• BINARY
• BOOLEAN, TIMESTAMP
• ARRAY, MAP, STRUCT
44. Hive Metastore
• Hive’s Metastore is a database containing table
definitions and other metadata
– By default, stored locally on the client machine in a
Derby database
– Usually MySQL if the metastore needs to be shared
across multiple users
– Holds the table definitions layered on top of the data in HDFS
46. Hive Examples
• Data is loaded into Hive with the LOAD DATA INPATH statement
– Assumes that the data is already in HDFS
• If the data is on the local filesystem, use LOAD DATA LOCAL INPATH
– Automatically loads it into HDFS in the correct directory
• Loading data into Hive:
LOAD DATA INPATH "shakespeare_freq" INTO TABLE shakespeare;
48. Hive Limitations
• Not all ‘standard’ SQL is supported
– Subqueries are only supported in the FROM clause
• No correlated subqueries
– No support for UPDATE or DELETE
– No support for INSERTing single rows
51. Pig
• Originally created at Yahoo!
• High-level platform for data analysis and processing on Hadoop
• Alternative to writing low-level MapReduce code
• Relatively simple syntax
• Features
– Joining datasets
– Grouping data
– Loading non-delimited data
– Creation of user-defined functions written in Java
– Relational operations
– Support for custom functions and data formats
– Complex data structures
52. Pig
• No shared metadata like Hive
• Installation requires no modification to the cluster
• Components of Pig
– Data flow language – Pig Latin
• Scripting Language for writing data flows
– Interactive Shell – Grunt Shell
• Type and execute Pig Latin Statements
• Use pig interactively
– Pig Interpreter
• Interprets and executes the Pig Latin
• Runs on the client machine
54. Pig Interpreter
1. Preprocess and parse Pig Latin
2. Check data types
3. Make optimizations
4. Plan execution
5. Generate MapReduce jobs
6. Submit jobs to Hadoop
7. Monitor progress
55. Pig Concepts
• A single element of data is an atom
• A collection of atoms – such as a row or a
partial row – is a tuple
• Tuples are collected together into bags
• A Pig Latin script starts by loading one or more
datasets into bags, and then creates new bags
by modifying those it already has
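These concepts fit together in a small Pig Latin word count (the file name and aliases are hypothetical):

```pig
-- Each input line becomes a tuple; TOKENIZE yields a bag of words,
-- and GROUP collects the tuples for each word into a bag.
lines   = LOAD 'shakespeare.txt' AS (line:chararray);
words   = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grouped = GROUP words BY word;
counts  = FOREACH grouped GENERATE group AS word, COUNT(words) AS n;
STORE counts INTO 'word_counts';
```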
59. References
• Hadoop: The Definitive Guide, Tom White – a must read!
• http://highlyscalable.wordpress.com/2012/02/01/mapreduce-patterns/
• http://hortonworks.com/hadoop/yarn/
When your data sets become so large that you have to start innovating around how to collect, store, organize, analyze, and share them.
Facebook, Flickr – images
LinkedIn, Twitter, Facebook, Pinterest, Open Data
Write once, read many times pattern: a dataset is typically generated or copied from source, and then various analyses are performed on that dataset over time. Each analysis involves a large proportion, if not all, of the dataset, so the time to read the whole dataset is more important than the latency in reading the first record. Redundancy is not the same as availability. High availability is a bit more patchy, but getting better (NameNode availability).
Splits and blocks are important to understand in HDFS – they drive how MR works.
NameNode – just another machine on the cluster where a NameNode daemon is running.
How is a loaded file accessed?
The client application communicates with the NameNode to determine which blocks make up the file and which DataNodes those blocks reside on; it then communicates directly with the DataNodes to read the data, so the NameNode does not become a bottleneck.
The NameNode is a single point of failure.
What data is persistently stored? What data is gathered at startup?
The namenode manages the filesystem namespace. It maintains the filesystem tree and the metadata for all the files and directories in the tree. This information is stored persistently on the local disk in the form of two files: the namespace image and the edit log. The namenode also knows the datanodes on which all the blocks for a given file are located; however, it does not store block locations persistently, because this information is reconstructed from datanodes when the system starts.
YARN – a more complex architecture that gives more flexibility for building distributed apps on the cluster.
How YARN Works
The fundamental idea of YARN is to split the two major responsibilities of the JobTracker/TaskTracker into separate entities:
– a global ResourceManager
– a per-application ApplicationMaster
– a per-node slave NodeManager
– per-application Containers running on NodeManagers
What makes a data scientist different from a data engineer? Most data engineers can write machine learning services perfectly well or do complicated data transformation in code. It’s not the skill that makes them different, it’s the focus: data scientists focus on the statistical model or the data mining task at hand, data engineers focus on coding, cleaning up data and implementing the models fine-tuned by the data scientists.
What is the difference between a data scientist and a business/insight/data analyst? Data scientists can code and understand the tools! Why is that important? With the emergence of the new tool sets around data, SQL and point & click skills can only get you so far. If you can do the same in Spark or Cascading your data deep dive will be faster and more accurate than it will ever be in Hive. Understanding your way around R libraries gives you statistical abilities most analysts only dream of. On the other hand, business analysts know their subject area very well and will easily come up with many different subject angles to approach the data.
Data scientists use their advanced statistical skills to help improve the models the data engineers implement and to put proper statistical rigour on the data discovery and analysis the customer is asking for. Essentially, the business analyst is just one of many customers.
https://medium.com/@KevinSchmidtBiz/data-engineer-vs-data-scientist-vs-business-analyst-b68d201364bc
Data Analysis:
– Finding relevant records in a massive data set
– Querying multiple data sets
– Calculating values from input data
Data Processing:
– Re-organizing an existing data set
– Joining data from multiple sources to produce a new data set
Pig can help extract valuable information from Web server log files:
logs → process logs (Pig) → clickstream data for user sessions.
Data sampling: Pig helps explore a representative portion of a large dataset:
100 TB → random sampling (Pig) → 50 MB.
ETL processing: Pig is also widely used for Extract, Transform, and Load (ETL) processing:
data sources → validate data → fix errors → remove duplicates → encode values → data warehouse.