2. Q. WHAT IS BIG DATA????
• Big data is a term used to describe the large volumes
of unstructured and semi-structured data a
company creates.
• Data that would take too much time and cost too
much money to load into a relational database for
analysis.
• Big data doesn't refer to any specific quantity; the
term is often used when speaking about petabytes
and exabytes of data.
3. SO WHAT IS THE PROBLEM??
• The problem is that while the storage capacities of
hard drives have increased massively over the years,
access speeds (the rate at which data can be read from
drives) have not kept up.
• One typical drive from 1990 could store 1370 MB of
data and had a transfer speed of 4.4 MB/s, so we could
read all the data from a full drive in around 300 seconds.
• In 2010, 1 TB drives are the standard hard disk size,
but the transfer speed is around 100 MB/s, so it takes
more than two and a half hours (1,000,000 MB / 100 MB/s
= 10,000 seconds) to read all the data off the disk.
4. Possible solutions!!!!
• Parallelization: multiple processors or CPUs in a
single machine (see the sketch after this list)
• Distributed computing: multiple computers
connected via a network
The key issues involved in these solutions:
• Hardware failure
• Combining the data after analysis
• Network-related problems
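To make the parallelization idea concrete, here is a minimal Java sketch (the data array and chunking logic are made up for illustration): work is split across the machine's CPU cores and the partial results are combined at the end. Note that hardware failure and network problems are exactly what this single-machine sketch does not handle, which is where Hadoop comes in.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            long[] data = new long[10_000_000];   // stand-in for a large dataset
            Arrays.fill(data, 1L);

            int workers = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            List<Future<Long>> partials = new ArrayList<>();

            // Split the data into one chunk per worker.
            int chunk = data.length / workers;
            for (int w = 0; w < workers; w++) {
                final int start = w * chunk;
                final int end = (w == workers - 1) ? data.length : start + chunk;
                partials.add(pool.submit(() -> {
                    long sum = 0;
                    for (int i = start; i < end; i++) sum += data[i];
                    return sum;
                }));
            }

            // The "combine the data after analysis" step.
            long total = 0;
            for (Future<Long> f : partials) total += f.get();
            pool.shutdown();
            System.out.println("total = " + total);
        }
    }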
5. TO THE RESCUE!!!!...HADOOP!!
• A framework for storing and processing big data on lots of
commodity machines.
• Open-source Apache project
• High reliability achieved in software
• Implemented in Java
• A common way of avoiding data loss is through replication (see the sketch below)
• Hadoop is the popular open source implementation of
MapReduce, a powerful tool designed for deep analysis and
transformation of very large data sets.
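The replication point can be illustrated with the HDFS Java API. This is a minimal sketch, assuming a running HDFS cluster; the file path is hypothetical, and the cluster-wide default comes from the dfs.replication property (3 by default):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplication {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();  // reads core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/user/demo/data.log");  // hypothetical file
            fs.setReplication(file, (short) 3);           // keep 3 copies of each block

            System.out.println("replication factor: "
                + fs.getFileStatus(file).getReplication());
            fs.close();
        }
    }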
6. INTRODUCTION
Hadoop has two main layers:
• Computation layer: the computation is handled by a framework called
MapReduce.
• Distributed storage layer: a distributed filesystem called HDFS provides
the storage.
7. WHY HADOOP???
• Building bigger and bigger servers is no longer necessarily the
best solution to large-scale problems. Nowadays the popular
approach is to tie many low-end machines together
as a single functional distributed system. For example:
• A high-end machine with four I/O channels, each with a
throughput of 100 MB/s, needs about three hours to read a 4
TB data set (4,000,000 MB / 400 MB/s = 10,000 seconds). With
Hadoop, this same data set is divided into smaller (typically 64 MB)
blocks that are spread among many machines in the cluster via
the Hadoop Distributed File System (HDFS).
• With a modest degree of replication, the cluster machines can
read the data set in parallel and provide a much higher
throughput. Moreover, it's cheaper than one high-end server!
8. WHY HADOOP???
For computationally intensive work,
• Most distributed systems take the approach of moving the data
to the place where the computation will take place, and moving
the resulting data back for storage after the computation. This
approach works fine for computationally intensive work.
For data-intensive work,
• We need a better approach. Hadoop has a better philosophy here,
because Hadoop focuses on moving the code/algorithm to the data
instead of the data to the code/algorithm.
• The move-code-to-data philosophy applies within the Hadoop
cluster itself: data is broken up and distributed across the
cluster, and computation on a piece of data takes place on the
same machine where that piece of data resides.
The move-code-to-data philosophy makes sense because, as we know,
the code/algorithm is always smaller than the data, and hence
easier to move around.
12. Namenodes and Datanodes!!!
• An HDFS cluster has a namenode (the master) and a
number of datanodes (workers).
• The namenode manages the
filesystem namespace. It maintains
the filesystem tree and the
metadata for all the files and
directories in the tree.
• Datanodes are the workhorses of
the filesystem. They store and
retrieve blocks when they are told
to (by clients or the namenode),
and they report back to the
namenode periodically with lists
of the blocks that they are storing (see the sketch below).
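How a client learns where blocks live can be sketched with the HDFS Java API: the client asks the namenode (through the FileSystem abstraction) for the block locations of a file, then reads the blocks directly from the datanodes. A minimal sketch, assuming a running cluster; the path is hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListBlocks {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus status = fs.getFileStatus(new Path("/user/demo/data.log"));

            // One BlockLocation per block; getHosts() names the datanodes holding it.
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (int i = 0; i < blocks.length; i++) {
                System.out.println("block " + i + " on "
                    + String.join(", ", blocks[i].getHosts()));
            }
            fs.close();
        }
    }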
14. MAPREDUCE!!!
Jobtracker:
• Receives map-reduce job execution requests from clients.
• Does sanity checks to see if the job is configured properly.
• Computes the input splits.
• Loads the resources required for the job into HDFS.
• Assigns splits to tasktrackers for the map and reduce phases.
• Map split assignment is data-locality-aware.
• Is a single point of failure.
Tasktracker:
• Creates a new process for each task and executes it.
• Sends periodic heartbeats to the jobtracker, along with
other information about the task (see the WordCount sketch below).
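To make the map and reduce tasks concrete, here is the classic WordCount example, essentially as it appears in the Hadoop tutorials (it is not from these slides): each map task tokenizes its input split and emits (word, 1) pairs, and the reduce tasks sum the counts per word.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map: called once per input record; emits (word, 1) for every token.
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        // Reduce: receives all counts for one word and sums them.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values,
                    Context context) throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.addOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

A hypothetical run, with made-up paths: hadoop jar wordcount.jar WordCount /user/demo/input /user/demo/output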
17. ADVANTAGES
• Hadoop is a platform that provides both distributed storage and
computational capabilities.
• Hadoop is extremely scalable and optimized for high throughput.
• HDFS uses large block sizes, so it works best when
manipulating large files.
• Scalability and availability are distinguishing features of
HDFS, achieved through data replication and fault tolerance.
• HDFS can replicate files a specified number of times, making it
tolerant of software and hardware failure.
• Hadoop uses the MapReduce framework, a batch-based, distributed
computing framework that allows parallelized work over a large
amount of data.
• MapReduce lets developers focus on addressing business needs,
rather than getting involved in distributed system complications.
• MapReduce decomposes the job into map and reduce tasks and schedules
them for remote execution.
18. DISADVANTAGES
• As you know, Hadoop uses HDFS and MapReduce, and both of
their master processes are single points of failure, although
there is active work going on for High Availability versions.
• Until the Hadoop 2.x release, HDFS and MapReduce use
single-master models, which can result in single points of
failure.
• Security is also a major concern: Hadoop does offer a security
model, but by default it is disabled because of its high complexity.
• Hadoop does not offer storage- or network-level encryption.
• HDFS is inefficient for handling small files, and it
lacks transparent compression.
• MapReduce is a batch-based architecture, which means it does
not lend itself to use cases that need real-time data access.
• MapReduce is a shared-nothing architecture, so tasks that
require global synchronization or sharing of mutable data are
not a good fit.