Apache Hadoop Install Example
● Using Ubuntu 12.04
● Java 1.6
● Hadoop 1.2.0
● Static DNS
● 3 Machine Cluster
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 1
● Install Ubuntu Linux 12.04 on each machine
● Assign a hostname and static IP address to each machine
● Names used here
– hc1nn ( hadoop cluster 1 name node )
– hc1r1m1 ( hadoop cluster 1 rack 1 machine 1 )
– hc1r1m2 ( hadoop cluster 1 rack 1 machine 2 )
● Install ssh daemon on each server
● Install the vsftpd ( ftp ) daemon on each server
● Update /etc/hosts with all hostnames on each server, for example:
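A minimal sketch of the /etc/hosts entries – the IP addresses below are illustrative assumptions, use your own static addresses:
192.168.1.10   hc1nn
192.168.1.11   hc1r1m1
192.168.1.12   hc1r1m2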
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 2
● Generate ssh keys on each server under the hadoop user
● Copy the keys to the hadoop account on every server ( a command sketch follows this list )
● Install java 1.6 ( we used openjdk )
● Obtain the Hadoop software from
– hadoop.apache.org
● Unpack the Hadoop software to /usr/local
● Now consider cluster architecture
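A minimal sketch of the Java install, key generation and unpacking steps above, run as the hadoop user on each server – the tarball path and package name are assumptions for Ubuntu 12.04 / OpenJDK 6:
sudo apt-get install openjdk-6-jdk                  # Java 1.6 ( OpenJDK )
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa            # passwordless key pair
ssh-copy-id hadoop@hc1nn                            # repeat for hc1r1m1 and hc1r1m2
sudo tar -xzf /tmp/hadoop-1.2.0.tar.gz -C /usr/local
sudo chown -R hadoop:hadoop /usr/local/hadoop-1.2.0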
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 3
● Start with three single-machine Hadoop installs
● Then cluster the Hadoop machines
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 4
● Ensure passwordless ( automatic ) ssh works
– From name node (hc1nn) to both data nodes
– From each machine to itself
● Create symbolic link
– Named hadoop
– Pointing to /usr/local/hadoop-1.2.0
● Set the following in the hadoop user's .bashrc on each machine ( see the sketch after this list )
– HADOOP_HOME
– JAVA_HOME
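A minimal sketch of the link and the .bashrc additions – the JAVA_HOME path is an assumption for 64-bit OpenJDK 6 on Ubuntu 12.04, adjust for your architecture:
sudo ln -s /usr/local/hadoop-1.2.0 /usr/local/hadoop
# add to ~/.bashrc of the hadoop user on every machine
export HADOOP_HOME=/usr/local/hadoop
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64
export PATH=$PATH:$HADOOP_HOME/bin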
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 5
● Create Hadoop tmp dir on all servers
sudo mkdir -p /app/hadoop/tmp
sudo chown hadoop:hadoop /app/hadoop/tmp
sudo chmod 750 /app/hadoop/tmp
● Set Up conf/core-site.xml
– ( on all servers )
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 5
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 6
● Set Up conf/mapred-site.xml
– ( on all servers )
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 7
● Set Up conf/hdfs-site.xml
– ( on all servers )
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 8
● Format the Hadoop file system ( on all servers )
– hadoop namenode -format
– Don't do this on a running HDFS – you will lose all data!
● Now start Hadoop ( on all servers )
– $HADOOP_HOME/bin/start-all.sh
● Check Hadoop is running with
– sudo netstat -plten | grep java
– You should see ports such as 54310 and 54311 in use ( a jps check is sketched below )
● All Good ? Stop Hadoop on all servers
– $HADOOP_HOME/bin/stop-all.sh
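Another quick way to confirm the daemons are up before stopping them is the JDK's jps tool – a sketch, the exact process list may vary:
jps
# expect to see NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker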
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 9
● Now set up the cluster – do this on all servers
● Set $HADOOP_HOME/conf/masters file to contain
– hc1nn
● Set $HADOOP_HOME/conf/slaves file to contain
– hc1r1m1
– hc1r1m2
– hc1nn
● We will be using the name node as a data node as well
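A minimal sketch of writing these two files, assuming $HADOOP_HOME is set as above:
echo "hc1nn" > $HADOOP_HOME/conf/masters
printf "hc1r1m1\nhc1r1m2\nhc1nn\n" > $HADOOP_HOME/conf/slaves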
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 10
● Change conf/core-site.xml on all machines
– fs.default.name = hdfs://hc1nn:54310
● Change conf/mapred-site.xml
– mapred.job.tracker = hc1nn:54311
● Change conf/hdfs-site.xml
– dfs.replication = 3
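A quick sanity check after editing – a sketch that assumes each <value> sits on the line after its <name>:
grep -A1 fs.default.name $HADOOP_HOME/conf/core-site.xml
grep -A1 mapred.job.tracker $HADOOP_HOME/conf/mapred-site.xml
grep -A1 dfs.replication $HADOOP_HOME/conf/hdfs-site.xml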
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 11
● Now reformat the HDFS on hc1nn
– hadoop namenode -format
● On name node start HDFS
– $HADOOP_HOME/bin/start-dfs.sh
● On name node start Map Reduce
– $HADOOP_HOME/bin/start-mapred.sh
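To confirm that all three data nodes have registered with the name node, the dfsadmin report can be used – a sketch, the report wording may differ slightly between versions:
hadoop dfsadmin -report | grep "Datanodes available"
# expect something like: Datanodes available: 3 (3 total, 0 dead)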
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 12
● Run a test Map Reduce job
– I have data in /tmp/gutenberg
● Load Data into HDFS
hadoop dfs -copyFromLocal /tmp/gutenberg /usr/hadoop/gutenberg
● List Data in HDFS
hadoop dfs -ls /usr/hadoop/gutenberg
Found 18 items
-rw-r--r-- 3 hadoop supergroup 674389 2013-07-30 19:31 /usr/hadoop/gutenberg/pg20417.txt
-rw-r--r-- 3 hadoop supergroup 674389 2013-07-30 19:31 /usr/hadoop/gutenberg/pg20417.txt1
...............
-rw-r--r-- 3 hadoop supergroup 834980 2013-07-30 19:31 /usr/hadoop/gutenberg/pg5000.txt4
-rw-r--r-- 3 hadoop supergroup 834980 2013-07-30 19:31 /usr/hadoop/gutenberg/pg5000.txt5
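The listing shows several numbered copies of each text file ( e.g. pg20417.txt1 ) – one way such copies might have been produced locally before loading, purely as an illustrative sketch:
for i in 1 2 3 4 5; do cp /tmp/gutenberg/pg20417.txt /tmp/gutenberg/pg20417.txt$i; done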
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 13
● Run the Map Reduce job
cd $HADOOP_HOME
hadoop jar hadoop*examples*.jar wordcount /usr/hadoop/gutenberg /usr/hadoop/gutenberg-output
● Check the output
13/07/30 19:34:13 INFO input.FileInputFormat: Total input paths to process : 18
13/07/30 19:34:13 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/07/30 19:34:14 INFO mapred.JobClient: Running job: job_201307301931_0001
13/07/30 19:34:15 INFO mapred.JobClient: map 0% reduce 0%
13/07/30 19:34:26 INFO mapred.JobClient: map 11% reduce 0%
13/07/30 19:34:34 INFO mapred.JobClient: map 16% reduce 0%
13/07/30 19:34:35 INFO mapred.JobClient: map 22% reduce 0%
13/07/30 19:34:42 INFO mapred.JobClient: map 33% reduce 0%
13/07/30 19:34:43 INFO mapred.JobClient: map 33% reduce 7%
13/07/30 19:34:48 INFO mapred.JobClient: map 44% reduce 7%
13/07/30 19:34:52 INFO mapred.JobClient: map 44% reduce 14%
13/07/30 19:34:54 INFO mapred.JobClient: map 55% reduce 14%
13/07/30 19:35:01 INFO mapred.JobClient: map 66% reduce 14%
13/07/30 19:35:02 INFO mapred.JobClient: map 66% reduce 18%
13/07/30 19:35:06 INFO mapred.JobClient: map 72% reduce 18%
13/07/30 19:35:07 INFO mapred.JobClient: map 77% reduce 18%
13/07/30 19:35:08 INFO mapred.JobClient: map 77% reduce 25%
13/07/30 19:35:12 INFO mapred.JobClient: map 88% reduce 25%
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 13
13/07/30 19:35:17 INFO mapred.JobClient: map 88% reduce 29%
13/07/30 19:35:18 INFO mapred.JobClient: map 100% reduce 29%
13/07/30 19:35:23 INFO mapred.JobClient: map 100% reduce 33%
13/07/30 19:35:27 INFO mapred.JobClient: map 100% reduce 100%
13/07/30 19:35:28 INFO mapred.JobClient: Job complete: job_201307301931_0001
13/07/30 19:35:28 INFO mapred.JobClient: Counters: 29
13/07/30 19:35:28 INFO mapred.JobClient: Job Counters
13/07/30 19:35:28 INFO mapred.JobClient: Launched reduce tasks=1
13/07/30 19:35:28 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=119572
13/07/30 19:35:28 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/07/30 19:35:28 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/07/30 19:35:28 INFO mapred.JobClient: Launched map tasks=18
13/07/30 19:35:28 INFO mapred.JobClient: Data-local map tasks=18
13/07/30 19:35:28 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=61226
13/07/30 19:35:28 INFO mapred.JobClient: File Output Format Counters
13/07/30 19:35:28 INFO mapred.JobClient: Bytes Written=725257
13/07/30 19:35:28 INFO mapred.JobClient: FileSystemCounters
13/07/30 19:35:28 INFO mapred.JobClient: FILE_BYTES_READ=6977160
13/07/30 19:35:28 INFO mapred.JobClient: HDFS_BYTES_READ=17600721
13/07/30 19:35:28 INFO mapred.JobClient: FILE_BYTES_WRITTEN=14994585
13/07/30 19:35:28 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=725257
13/07/30 19:35:28 INFO mapred.JobClient: File Input Format Counters
13/07/30 19:35:28 INFO mapred.JobClient: Bytes Read=17598630
13/07/30 19:35:28 INFO mapred.JobClient: Map-Reduce Framework
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 14
● Check the job output
hadoop dfs -ls /usr/hadoop/gutenberg-output
Found 3 items
-rw-r--r-- 3 hadoop supergroup 0 2013-07-30 19:35 /usr/hadoop/gutenberg-output/_SUCCESS
drwxr-xr-x - hadoop supergroup 0 2013-07-30 19:34 /usr/hadoop/gutenberg-output/_logs
-rw-r--r-- 3 hadoop supergroup 725257 2013-07-30 19:35 /usr/hadoop/gutenberg-output/part-r-00000
● Now get results out of HDFS
hadoop dfs -cat /usr/hadoop/gutenberg-output/part-r-00000 > /tmp/hrun/cluster_run.txt
head -10 /tmp/hrun/cluster_run.txt
"(Lo)cra" 6
"1490 6
"1498," 6
"35" 6
"40," 6
"A 12
"AS-IS". 6
"A_ 6
"Absoluti 6
"Alack! 6
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Install Step 15
● Congratulations – you now have
– A working HDFS cluster
– With three data nodes
– One name node
– Tested via a Map Reduce job
● Detailed install instructions available from our site shop
www.semtech-solutions.co.nz info@semtech-solutions.co.nz
Contact Us
● Feel free to contact us at
– www.semtech-solutions.co.nz
– info@semtech-solutions.co.nz
● We offer IT project consultancy
● We are happy to hear about your problems
● You pay only for the hours you need to solve them