So you want to get started with Hadoop, but how? This session will show you how to get started with Hadoop development using Pig. Prior Hadoop experience is not needed.
Thursday, May 8th, 02:00pm-02:50pm
13. The heart of the System.
Maintains a virtual File Directory.
Tracks all the nodes.
Listens for “heartbeats” and “Block Reports” (more on this later).
If the NameNode is down, the cluster is offline.
21. Add a Data Node:
The Data Node says “Hello” to the Name Node.
The Name Node offers the Data Node a handshake with version requirements.
The Data Node replies “Okay” to the Name Node, or shuts down.
The Name Node hands the Data Node a NodeId that it remembers.
The Data Node is now part of the cluster, and it checks in with the Name Node every 3 seconds.
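As a minimal Java sketch of that registration flow, assuming made-up types (NameNodeStub, the version constant) rather than the real HDFS classes:

// Hypothetical sketch of the Data Node registration handshake above.
public class DataNodeRegistration {

    static final int MY_VERSION = 42; // assumed layout version

    interface NameNodeStub {
        int handshake();                   // "Hello": returns required version
        String register(String hostname);  // returns the NodeId it will remember
    }

    public static String joinCluster(NameNodeStub nameNode, String hostname) {
        // 1. Say "Hello" and receive the version requirements.
        int required = nameNode.handshake();

        // 2. Reply "Okay", or shut down on a version mismatch.
        if (required != MY_VERSION) {
            throw new IllegalStateException("Version mismatch, shutting down");
        }

        // 3. The Name Node hands back a NodeId; check-ins start from here.
        return nameNode.register(hostname);
    }
}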
28. Data Node Heartbeat:
The “check-in” is a simple HTTP Request/Response.
This “check-in” is a very important communication protocol that guarantees the health of the cluster.
Block Reports – “what data do I have, and is it okay?”
The Name Node controls the Data Nodes by issuing orders when they check in and report their status:
Replicate Data, Delete Data, Verify Data.
The same process applies to all nodes within a cluster.
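A rough Java sketch of that loop follows; the NameNodeStub interface and its methods are illustrative assumptions, not the actual Hadoop protocol classes:

import java.util.List;

// Hypothetical sketch of the 3-second check-in / block report loop.
public class HeartbeatLoop {

    interface NameNodeStub {
        List<String> heartbeat(String nodeId);                 // check-in; reply carries orders
        void blockReport(String nodeId, List<String> blocks);  // "what data I have and is it okay"
    }

    public static void run(NameNodeStub nameNode, String nodeId,
                           List<String> localBlocks) throws InterruptedException {
        while (true) {
            // Check in; the Name Node answers with orders
            // (replicate data, delete data, verify data).
            for (String order : nameNode.heartbeat(nodeId)) {
                System.out.println("executing order: " + order);
            }
            nameNode.blockReport(nodeId, localBlocks);
            Thread.sleep(3_000); // every 3 seconds
        }
    }
}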
35. The client “tells” the NameNode the virtual directory location for the file.
The client breaks the file into 64MB “blocks”.
The client “asks” the NameNode where the blocks go.
The client “streams” the blocks, in parallel, to the DataNodes.
The DataNode(s) tell the NameNode they have the data via the block report.
The NameNode tells the DataNode where to replicate the block.
[Diagram: a file split into blocks A64, B64, C28; block A64 copied to replica DataNodes]
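To make the block-splitting step concrete, here is a small Java sketch that walks a local file in 64MB chunks the way the client carves up a file on write; the actual streaming to DataNodes is stubbed out with a print:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: split a file into 64MB "blocks" as the client does before
// streaming them, in parallel, to the assigned DataNodes.
public class BlockSplitter {
    static final int BLOCK_SIZE = 64 * 1024 * 1024; // 64MB block size

    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream(args[0])) {
            byte[] block = new byte[BLOCK_SIZE];
            int blockNum = 0;
            int read;
            while ((read = in.readNBytes(block, 0, BLOCK_SIZE)) > 0) {
                // Here the real client would stream this block to the
                // DataNodes the NameNode assigned for it.
                System.out.printf("block %d: %d bytes%n", blockNum++, read);
            }
        }
    }
}

A 156MB file would print three blocks of 64MB, 64MB, and 28MB – the A64/B64/C28 split pictured in the slides.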
41. The client tells the NameNode it would like to read a file.
The NameNode replies with the list of blocks and the nodes the blocks are on.
The client requests the first block from a DataNode.
The client compares the checksum of the block against the manifest from the NameNode.
The client moves on to the next block in the sequence until the file has been read.
[Diagram: blocks A64, B64, C28 read in sequence from their DataNodes]
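The per-block checksum comparison can be sketched in Java as follows; CRC32 and the Map-based “manifest” are stand-ins for illustration, not the real HDFS metadata:

import java.util.Map;
import java.util.zip.CRC32;

// Sketch: verify each fetched block against the read manifest, in sequence.
public class BlockVerifier {

    static long checksum(byte[] block) {
        CRC32 crc = new CRC32();
        crc.update(block);
        return crc.getValue();
    }

    public static void readFile(Map<String, Long> manifest,
                                Map<String, byte[]> fetchedBlocks) {
        for (Map.Entry<String, Long> expected : manifest.entrySet()) {
            byte[] block = fetchedBlocks.get(expected.getKey());
            if (checksum(block) != expected.getValue()) {
                // A real client would re-request the block from another replica.
                throw new IllegalStateException("corrupt block: " + expected.getKey());
            }
        }
    }
}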
46. A Data Node fails to “check-in”.
After 10 minutes the Name Node gives up on that Data Node.
When another node that has blocks originally assigned to the lost node checks in, the Name Node sends a block replication command.
The Data Node replicates that block of data (just like a write).
[Diagram: block A64 re-replicated so copies remain on healthy DataNodes]
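On the Name Node side, the failure detection amounts to a timestamp sweep; the ten-minute threshold comes from the slides, while the data structures below are invented for illustration:

import java.util.HashMap;
import java.util.Map;

// Sketch: Name Node giving up on silent Data Nodes after 10 minutes.
public class DeadNodeDetector {
    static final long TIMEOUT_MS = 10 * 60 * 1000;

    final Map<String, Long> lastSeen = new HashMap<>(); // nodeId -> last check-in

    void onHeartbeat(String nodeId) {
        lastSeen.put(nodeId, System.currentTimeMillis());
    }

    // Run periodically: a node silent for 10 minutes is presumed lost, and
    // replication commands for its blocks go out with later check-ins.
    void sweep(long now) {
        lastSeen.entrySet().removeIf(e -> {
            if (now - e.getValue() > TIMEOUT_MS) {
                System.out.println("lost node " + e.getKey()
                        + ": queue re-replication of its blocks");
                return true;
            }
            return false;
        });
    }
}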
50. HDFS Shell Commands.
> hadoop fs -copyFromLocal <localsrc> URI
Copy a file from your client to HDFS. Similar to the put command, except that the source is restricted to a local file reference.
52. HDFS Shell Commands.
> hadoop fs -copyToLocal URI <localdst>
Copy a file from HDFS to your client. Similar to the get command, except that the destination is restricted to a local file reference.
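The same copies can also be done programmatically through Hadoop's org.apache.hadoop.fs.FileSystem API; the paths below are made-up examples:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Programmatic equivalents of -copyFromLocal and -copyToLocal.
public class HdfsCopy {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // like: hadoop fs -copyFromLocal /tmp/input.tsv /user/demo/input.tsv
        fs.copyFromLocalFile(new Path("/tmp/input.tsv"),
                             new Path("/user/demo/input.tsv"));

        // like: hadoop fs -copyToLocal /user/demo/input.tsv /tmp/roundtrip.tsv
        fs.copyToLocalFile(new Path("/user/demo/input.tsv"),
                           new Path("/tmp/roundtrip.tsv"));
        fs.close();
    }
}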
55. Basic Data Types:
Strings, Integers, Doubles, Longs, Bytes, Booleans, etc.
Advanced Data Types:
Tuples and Bags
56. Tuples are JSON-like and simple.
raw_data: {
  date_time: bytearray,
  seconds: bytearray
}
57. Bags hold Tuples and Bags
element: {
  date_time: bytearray,
  seconds: bytearray,
  group: chararray,
  ordered_list: {
    date: chararray,
    hour: chararray,
    score: long
  }
}
58. Expert Advice:
Always know your data structures.
They are the foundation for all MapReduce operations.
Complex (deep) data structures will kill -9 performance.
Keep them simple!
60. GRUNT
Grunt is a command-line interface used to debug Pig jobs, similar to Ruby's IRB or the Groovy CLI.
Grunt is your best weapon against bad pigs.
pig -x local
Grunt> |
61. GRUNT
Grunt> describe Element
Describe will display the data structure of an
Element
Grunt> dump Element
Dump will display the data represented by an
Element
62. GRUNT
> describe raw_data
Produces the output:
> raw_data: { date_time: bytearray, items: bytearray }
Or in a more human-readable form:
raw_data: {
  date_time: bytearray,
  items: bytearray
}
63. GRUNT
> dump raw_data
You can dump terabytes of data to your screen,
so be careful.
(05/10/2011 20:30:00.0,0)
(05/10/2011 20:45:00.0,0)
(05/10/2011 21:00:00.0,0)
(05/10/2011 21:15:00.0,0)
...
65. Most Pig commands are assignments.
Element = Operation;
• The element names the collection of records that exist out in
the cluster.
• It’s not a traditional programming variable.
• It describes the data produced by the operation.
• It does not change.
66. The SET command
Used to set a Hadoop job variable, like the name of your Pig job.
SET job.name 'Day over Day - [$input]';
67. The REGISTER and DEFINE commands
-- Setup UDF jars
REGISTER $jar_prefix/sidekick-hadoop-0.0.1.jar;
DEFINE BUCKET_FORMAT_DATE com.sidekick.hadoop.udf.UnixTimeFormatter('MM/dd/yyyy HH:mm', 'HH');
68. The LOAD USING command
-- load in the data from HDFS
raw_data = LOAD '$input' USING PigStorage('\t') AS (date_time, items);
69. The FILTER BY command
Selects tuples from a relation based on some condition.
-- filter to the week we want
broadcast_week = FILTER bucket_list BY (date >= '03-Oct-2011') AND (date <= '10-Oct-2011');
70. The GROUP BY command
Groups the data in one or multiple relations.
daily_stats = GROUP broadcast_week BY (date, hour);
71. The FOREACH command
Generates data transformations based on columns of data.
bucket_list = FOREACH raw_data GENERATE
  FLATTEN(DATE_FORMAT_DATE(date_time)) AS date,
  MINUTE_BUCKET(date_time) AS hour,
  MAX_ITEMS(items) AS items;
*DATE_FORMAT_DATE is a user-defined function, an advanced topic we’ll come to in a minute.
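Since the slides lean on UDFs like DATE_FORMAT_DATE, here is what a minimal Pig eval UDF looks like in Java: extend EvalFunc and implement exec. The class below is a made-up illustration of the shape (the date formats are guessed from the dump output earlier), not the deck's actual sidekick-hadoop code:

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// Hypothetical UDF: formats "05/10/2011 20:30:00.0" as "10-May-2011".
public class DateFormatDate extends EvalFunc<String> {
    private final SimpleDateFormat in = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss.S");
    private final SimpleDateFormat out = new SimpleDateFormat("dd-MMM-yyyy");

    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null; // pass nulls through, as Pig UDFs conventionally do
        }
        try {
            return out.format(in.parse(input.get(0).toString()));
        } catch (ParseException e) {
            throw new IOException("unparseable date: " + input.get(0), e);
        }
    }
}

Once built into a jar, it is wired up exactly like the REGISTER/DEFINE slide above.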
72. The GENERATE command
Use the FOREACH GENERATE operation to work with columns of data.
bucket_list = FOREACH raw_data GENERATE
  FLATTEN(DATE_FORMAT_DATE(date_time)) AS date,
  MINUTE_BUCKET(date_time) AS hour,
  MAX_ITEMS(items) AS items;
73. The FLATTEN command
FLATTEN substitutes the fields of a tuple in place of the tuple.
traffic_stats = FOREACH daily_stats GENERATE
  FLATTEN(group),
  COUNT(broadcast_week) AS cnt,
  SUM(broadcast_week.items) AS total;
74. The STORE INTO USING command
A store function determines how data is stored after a Pig job.
-- All done, now store it
STORE final_results INTO '$output' USING PigStorage();