Deploy Hadoop Cluster
Install Hadoop in Distributed Mode
This document explains how to set up Hadoop on a real cluster of three machines: one node acts as the master and the other two as slaves. A multi-node cluster is what production deployments use, because that is where Hadoop's real power lies.
Contents
1. Recommended Platform
2. Prerequisites
3. Install Java 7 (Oracle Java recommended)
3.1 Update the source list
3.2 Install Java
4. Add entry of master and slaves in hosts file
5. Configure SSH
5.1 Install OpenSSH server and client
5.2 Generate key pairs
5.3 Configure password-less SSH
5.4 Check by SSH to slaves
5. Download Hadoop
5.1 Download Hadoop
6. Install Hadoop
6.1 Untar Tar ball
6.2 Go to HADOOP_HOME_DIR
7. Setup Configuration
7.1 Edit configuration file conf/hadoop-env.sh and set JAVA_HOME
7.2 Edit configuration file conf/core-site.xml and add following entries
7.3 Edit configuration file conf/hdfs-site.xml and add following entries
7.4 Edit configuration file conf/mapred-site.xml and add following entries
7.5 Edit configuration file conf/masters and add entry of secondary-master
7.6 Edit configuration file conf/slaves and add entry of slaves
7.7 Set environment variables
8. Setup Hadoop on slaves
8.1 Repeat step 3 and step 4 on all the slaves
8.2 Create tar ball of configured Hadoop setup and copy to all the slaves
8.3 Untar configured Hadoop setup on all the slaves
9. Start The Cluster
9.1 Format the name node
9.2 Now start Hadoop services
9.2.1 Start HDFS services
9.2.2 Start Map-Reduce services
9.3 Check daemon status by running the jps command
9.3.1 On master
9.3.2 On slave-01
9.3.3 On slave-02
10. Stop the cluster
10.1 Stop Map-Reduce services
10.2 Stop HDFS services
1. Recommended Platform
• OS: Ubuntu 12.04 or later (other distributions such as CentOS or Red Hat also work)
• Hadoop: Cloudera distribution for Apache Hadoop, CDH3U6 (Apache Hadoop 0.20.X / 1.X can also be used)
2. Prerequisites
• Java (Oracle Java is recommended for production)
• Password-less SSH (Hadoop needs password-less SSH from the master to all the slaves; it is required for remote script invocation)
Run the following commands on the master node of the Hadoop cluster.
3. Install Java 7 (Oracle Java recommended)
3.1 Update the source list
sudo apt-get update
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
3.2 Install Java:
sudo apt-get install oracle-java7-installer
4. Add entry of master and slaves in hosts file
Edit the hosts file and add the following entries:
sudo nano /etc/hosts
MASTER-IP master
SLAVE01-IP slave-01
SLAVE02-IP slave-02
(Replace MASTER-IP, SLAVE01-IP, and SLAVE02-IP with the corresponding IP addresses.)
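For example, with made-up private addresses (substitute your own), the finished file entries could look like:

```
192.168.1.10 master
192.168.1.11 slave-01
192.168.1.12 slave-02
```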
5. Configure SSH
5.1 Install OpenSSH server and client
sudo apt-get install openssh-server openssh-client
5.2 Generate key pairs
ssh-keygen -t rsa -P ""
5.3 Configure password-less SSH
Copy the contents of “$HOME/.ssh/id_rsa.pub” on the master into “$HOME/.ssh/authorized_keys” on all the slaves, for example:
ssh-copy-id slave-01
ssh-copy-id slave-02
5.4 Check by SSH to slaves
ssh slave-01
ssh slave-02
5. Download Hadoop
5.1 Download Hadoop
wget http://archive.cloudera.com/cdh/3/hadoop-0.20.2-cdh3u6.tar.gz
6. Install Hadoop
6.1 Untar Tar ball
tar xzf hadoop-0.20.2-cdh3u6.tar.gz
6.2 Go to HADOOP_HOME_DIR
cd hadoop-0.20.2-cdh3u6/
7. Setup Configuration:
7.1 Edit configuration file conf/hadoop-env.sh and set JAVA_HOME
Set JAVA_HOME to the root of your Java installation, for example:
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_65
7.2 Edit configuration file conf/core-site.xml and add following entries:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop_admin/hdata/hadoop-${user.name}</value>
</property>
</configuration>
7.3 Edit configuration file conf/hdfs-site.xml and add following entries:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
7.4 Edit configuration file conf/mapred-site.xml and add following entries:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
</property>
</configuration>
7.5 Edit configuration file conf/masters and add entry of secondary-master
slave-01
(This is the IP/alias of the node where the secondary master, the SecondaryNameNode, will run.)
7.6 Edit configuration file conf/slaves and add entry of slaves
slave-01
slave-02
7.7 Set environment variables
Update ~/.bashrc and set or update the HADOOP_HOME and PATH shell variables as follows:
nano ~/.bashrc
export HADOOP_HOME=/home/hadoop/hadoop-0.20.2-cdh3u6
export PATH=$PATH:$HADOOP_HOME/bin
Hadoop is now set up on the master.
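Assuming the same paths as above, a quick sanity check is to set the variables in the current shell and confirm that Hadoop's bin directory actually landed on the PATH:

```shell
# Set the variables as in ~/.bashrc (same values as above), then verify
# that the Hadoop bin directory is part of PATH.
export HADOOP_HOME=/home/hadoop/hadoop-0.20.2-cdh3u6
export PATH=$PATH:$HADOOP_HOME/bin
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH ok" ;;
  *)                      echo "PATH missing $HADOOP_HOME/bin" ;;
esac
```

If the check prints "PATH ok", commands such as hadoop and start-dfs.sh can be run without the bin/ prefix.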
8. Setup Hadoop on slaves
8.1 Repeat step 3 and step 4 on all the slaves
Step 3: “Install Java”
Step 4: “Add entry of master and slaves in hosts file”
8.2 Create tar ball of configured Hadoop setup and copy to all the slaves:
tar czf hadoop.tar.gz hadoop-0.20.2-cdh3u6
scp hadoop.tar.gz slave-01:~
scp hadoop.tar.gz slave-02:~
8.3 Untar configured Hadoop setup on all the slaves
tar xzf hadoop.tar.gz
Run this command on all the slaves.
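The pack, copy, and unpack cycle of 8.2 and 8.3 can be sketched end to end on a single machine. The directory names below are illustrative stand-ins; on the real cluster, the copy in the middle is the scp step above:

```shell
# Stand-in for the configured Hadoop directory: a small tree with one file.
mkdir -p /tmp/hadoop-demo/hadoop-0.20.2-cdh3u6/conf
echo "master:9000" > /tmp/hadoop-demo/hadoop-0.20.2-cdh3u6/conf/marker
cd /tmp/hadoop-demo
tar czf hadoop.tar.gz hadoop-0.20.2-cdh3u6        # 8.2: pack on the master
mkdir -p slave-home                               # stands in for a slave's home dir
tar xzf hadoop.tar.gz -C slave-home               # 8.3: unpack on each slave
cat slave-home/hadoop-0.20.2-cdh3u6/conf/marker   # the configuration arrived intact
```

The final cat prints master:9000, showing that the configuration survives the round trip unchanged, which is the whole point of distributing one pre-configured tarball instead of editing files on every slave.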
9. Start The Cluster
9.1 Format the name node:
$ bin/hadoop namenode -format
Do this only once, when you first install Hadoop; formatting again will delete all your data from HDFS.
9.2 Now start Hadoop services
9.2.1 Start HDFS services
$ bin/start-dfs.sh
Run this command on the master.
9.2.2 Start Map-Reduce services
$ bin/start-mapred.sh
Run this command on the master.
9.3 Check daemon status by running the jps command:
9.3.1 On master
$ jps
NameNode
JobTracker
9.3.2 On slave-01:
$ jps
TaskTracker
DataNode
SecondaryNameNode
9.3.3 On slave-02:
$ jps
TaskTracker
DataNode
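A small hypothetical helper can compare the daemon list expected on a node against what jps reports. Here the jps output is stubbed with printf so the logic can be followed without a running cluster; on a real node, replace the stub as shown in the comment:

```shell
# Daemons expected on slave-01, per the lists above.
expected="TaskTracker DataNode SecondaryNameNode"
# On a real node use: running=$(jps | awk '{print $2}')
# (jps prints "<pid> <name>" per line; awk keeps just the name column).
running=$(printf 'TaskTracker\nDataNode\nSecondaryNameNode\n')
for d in $expected; do
  if echo "$running" | grep -qw "$d"; then
    echo "$d up"
  else
    echo "$d MISSING"
  fi
done
```

Any "MISSING" line points at a daemon that failed to start; its log under $HADOOP_HOME/logs is the place to look next.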
10. Stop the cluster
10.1 Stop Map-Reduce services
$ bin/stop-mapred.sh
Run this command on the master.
10.2 Stop HDFS services
$ bin/stop-dfs.sh
Run this command on the master.