Apache Hadoop 2 Installation in Pseudo Mode
Download URL
1. Hadoop: https://archive.apache.org/dist/hadoop/core/stable/
2. Hive: http://people.apache.org/~hashutosh/hive-0.10.0-rc0/
3. Pig: http://ftp.udc.es/apache/pig/pig-0.12.0/
4. Hbase: http://archive.apache.org/dist/hbase/hbase-0.94.10/
Step 1: Generate ssh key
$ssh-keygen -t rsa -P ""
Step 2: Copy id_rsa.pub to authorized_keys
$cd .ssh
$cp id_rsa.pub authorized_keys
$chmod 644 authorized_keys
Step 3: Passwordless ssh to localhost
$cd ~
$ssh localhost
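Steps 1–3 can be wrapped in a small idempotent helper (a sketch; the function name and the directory parameter are ours, added so the logic can be exercised outside a real ~/.ssh). It only generates a key when none exists and only appends the public key once, so re-running it is safe:

```shell
# Idempotent sketch of Steps 1-3. Pass the target .ssh directory explicitly.
setup_passwordless_ssh() {
  local ssh_dir="$1"
  mkdir -p "$ssh_dir" && chmod 700 "$ssh_dir"
  # Generate a key only if one does not already exist.
  [ -f "$ssh_dir/id_rsa" ] || ssh-keygen -t rsa -P "" -f "$ssh_dir/id_rsa" -q
  # Append the public key to authorized_keys only if it is not already there.
  grep -qxF "$(cat "$ssh_dir/id_rsa.pub")" "$ssh_dir/authorized_keys" 2>/dev/null \
    || cat "$ssh_dir/id_rsa.pub" >> "$ssh_dir/authorized_keys"
  chmod 644 "$ssh_dir/authorized_keys"
}
```

Usage: `setup_passwordless_ssh ~/.ssh`, then `ssh localhost` should log in without a password prompt.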
Step 4: Untar tarballs
$tar -xvzf hadoop-2.2.0.tar.gz
Step 5: Configuration files
$cd hadoop-2.2.0/etc/hadoop/
$vim core-site.xml
Add following properties in core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://172.17.196.14</value>
</property>
<property>
<name>io.native.lib.available</name>
<value>true</value>
</property>
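A variant (our suggestion, not required) states the NameNode RPC port explicitly; 8020 is the HDFS default, and the hbase.rootdir value later in this guide assumes that port:

```
<property>
<name>fs.defaultFS</name>
<value>hdfs://172.17.196.14:8020</value>
</property>
```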
$vim hdfs-site.xml
Add following property in hdfs-site.xml
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/hadoop-2.2.0/pseudo/dfs/data</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/hadoop-2.2.0/pseudo/dfs/name</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
$vim mapred-site.xml
Add following property in mapred-site.xml
<property>
<name>mapreduce.cluster.temp.dir</name>
<value>/home/hadoop/hadoop-2.2.0/temp</value>
<final>true</final>
</property>
<property>
<name>mapreduce.cluster.local.dir</name>
<value>/home/hadoop/hadoop-2.2.0/local</value>
<final>true</final>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
$vim yarn-site.xml
Add following property in yarn-site.xml
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>localhost:6000</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>localhost:6001</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>localhost:6002</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/home/hadoop/hadoop-2.2.0/yarn_nodemanager</value>
</property>
<property>
<name>yarn.nodemanager.address</name>
<value>0.0.0.0:6003</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>10240</value>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/home/hadoop/hadoop-2.2.0/app-logs</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/home/hadoop/hadoop-2.2.0/logs</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
$vim slaves
Add localhost in the slaves file
Step 6: set .bashrc
$cd ~
$vim .bashrc
export JAVA_HOME=/usr/
export HADOOP_HOME=/home/ahadoop2/hadoop-2.2.0
export HADOOP_CONF_DIR=/home/ahadoop2/hadoop-2.2.0/etc/hadoop
export PIG_HOME=/home/ahadoop2/pig-0.12.0
export HBASE_HOME=/home/ahadoop2/hbase-0.96.0-hadoop2
export HIVE_HOME=/home/ahadoop2/hive-0.11.0
export PIG_CLASSPATH=/home/ahadoop2/hadoop-2.2.0/etc/hadoop
export CLASSPATH=$PIG_HOME/pig-withouthadoop.jar:\
$HADOOP_HOME/share/hadoop/common/hadoop-common-2.2.0.jar:\
$HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:\
$HBASE_HOME/lib/hbase-client-0.96.0-hadoop2.jar:\
$HBASE_HOME/lib/hbase-common-0.96.0-hadoop2.jar:\
$HBASE_HOME/lib/hbase-server-0.96.0-hadoop2.jar:\
$HBASE_HOME/lib/commons-httpclient-3.1.jar:\
$HBASE_HOME/lib/commons-collections-3.2.1.jar:\
$HBASE_HOME/lib/commons-lang-2.6.jar:\
$HBASE_HOME/lib/jackson-mapper-asl-1.8.8.jar:\
$HBASE_HOME/lib/jackson-core-asl-1.8.8.jar:\
$HBASE_HOME/lib/guava-12.0.1.jar:\
$HBASE_HOME/lib/protobuf-java-2.5.0.jar:\
$HBASE_HOME/lib/commons-codec-1.7.jar:\
$HBASE_HOME/lib/zookeeper-3.4.5.jar:\
$HIVE_HOME/lib/hive-jdbc-0.11.0.jar:\
$HIVE_HOME/lib/hive-metastore-0.11.0.jar:\
$HIVE_HOME/lib/hive-serde-0.11.0.jar:\
$HIVE_HOME/lib/hive-common-0.11.0.jar:\
$HIVE_HOME/lib/hive-service-0.11.0.jar:\
$HIVE_HOME/lib/libfb303-0.9.0.jar:\
$HIVE_HOME/lib/postgresql-9.2-1003.jdbc3.jar:\
$HIVE_HOME/lib/libthrift-0.9.0.jar:\
$HIVE_HOME/lib/slf4j-api-1.6.1.jar:\
$HIVE_HOME/lib/commons-logging-1.0.4.jar:\
/home/ahadoop2/Hadoop2Training.jar
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PIG_HOME/bin:$HBASE_HOME/bin:\
$HIVE_HOME/bin:/bin:/usr/lib64/qt-3.3/bin:/usr/kerberos/sbin:/usr/kerberos/bin:\
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
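With a CLASSPATH this long, a single mistyped jar path fails silently until runtime. A quick sanity check (a sketch of our own, not part of the original steps) reports any entry that does not exist on disk:

```shell
# Print every entry of a colon-separated classpath that is missing on disk.
check_classpath() {
  local IFS=':' entry
  for entry in $1; do
    [ -n "$entry" ] && [ ! -e "$entry" ] && echo "missing: $entry"
  done
  return 0
}
```

Usage after loading .bashrc: `check_classpath "$CLASSPATH"` — no output means every jar was found.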
Step 7: Load .bashrc
$cd ~
$. .bashrc
Step 8: Formatting the name node
$cd ~
$hadoop namenode -format
Step 9: Starting Cluster
$cd ~/hadoop-2.2.0/sbin
$ ./start-all.sh
To view the started daemons
$ jps
This should show the started daemons:
NameNode
DataNode
SecondaryNameNode
NodeManager
ResourceManager
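The jps check can be scripted so a missing daemon is obvious at a glance (a hypothetical helper of ours; it reads jps output on stdin and prints any expected daemon that is absent):

```shell
# Reads jps output on stdin; prints the pseudo-mode daemons that are missing.
missing_daemons() {
  local jps_out miss=""
  jps_out=$(cat)
  for d in NameNode DataNode SecondaryNameNode NodeManager ResourceManager; do
    echo "$jps_out" | grep -qw "$d" || miss="$miss$d "
  done
  echo "$miss"
}
```

Usage: `jps | missing_daemons` — empty output means all five daemons are up.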
Apache Hbase Installation in Pseudo Mode
Step 1: Untar the tarballs
$tar -xvzf hbase-0.96.0-hadoop2.tar.gz
Step 2: Configuration files
$cd hbase-0.96.0-hadoop2/conf
$vim hbase-site.xml
Copy following properties in hbase-site.xml
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:8020/hbase</value>
<description>The directory shared by RegionServers</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
$vim regionservers
Add localhost in regionservers file
Step 3: Add hadoop jars from hadoop directory to hbase lib directory
$cd /home/hadoop/hadoop-2.2.0/share/hadoop/common/
$cp hadoop-common-2.2.0.jar /home/hadoop/hbase-0.96.0-hadoop2/lib/
Step 4: start hbase
$cd ~
$start-hbase.sh
Step 5: To view the started daemons
$ jps
HMaster
HRegionServer
HQuorumPeer
Step 6: To view hbase shell
$hbase shell
Step 7: Before connecting to hbase using java
Start hbase rest service by executing following command
$hbase-daemon.sh start rest -p 8090
Apache Hive Installation
Step 1: Untar the tarballs
$tar -xvzf hive-0.11.0.tar.gz
Step 2: Configuring a remote PostgreSQL database for the Hive Metastore
Before you can run the Hive metastore with a remote PostgreSQL database, you must configure a
connector to the remote PostgreSQL database, set up the initial database schema, and configure the
PostgreSQL user account for the Hive user.
Install and start PostgreSQL if you have not already done so. Edit postgresql.conf and set
listen_addresses to '*' so that the server accepts connections from the network. Then configure
authentication for your network in pg_hba.conf by adding a new line with the following information:
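For example, assuming the metastore clients sit on a hypothetical 192.168.1.0/24 subnet (substitute your own network and preferred auth method), the edits might look like:

```
# postgresql.conf
listen_addresses = '*'

# pg_hba.conf  (TYPE  DATABASE   USER      ADDRESS          METHOD)
host    metastore  hiveuser  192.168.1.0/24   md5
```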
Start PostgreSQL Server
$ su postgres
$cd $postgres_home/bin
$./pg_ctl start -D path_to_data_dir
Install the Postgres JDBC Driver
Copy postgresql-jdbc driver in $HIVE_HOME/lib/
Create the metastore database and user account
Proceed as in the following example:
bash$ sudo -u postgres psql
postgres=# CREATE USER hiveuser WITH PASSWORD 'mypassword';
postgres=# CREATE DATABASE metastore;
postgres=# \q
bash$ psql -U hiveuser -d metastore
You are now connected to database 'metastore' as user 'hiveuser'.
metastore=# \i /home/hadoop/hive-0.11.0/scripts/metastore/upgrade/postgres/hive-schema-0.10.0.postgres.sql
Step 3: Configuration files
$cd hive-0.11.0/conf
$vim hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:postgresql://<postgresql instance ip>:5432/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.postgresql.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>mypassword</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://<namenode ip>:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore
host</description>
</property>
<property>
<name>datanucleus.autoStartMechanism</name>
<value>SchemaTable</value>
</property>
</configuration>
Step 4: Start hive metastore
$hive --service metastore
Step 5: To view hive console
$hive
hive>show tables;
OK
Step 6: Before connecting to hive using java
Start hiveserver by executing following command
$hive --service hiveserver
Apache pig installation
Step 1: Untar the tarballs
$tar -xvzf pig-0.12.0.tar.gz
Step 2: Delete the two jars (the pig jar and the pig-withouthadoop jar) from the Pig home directory
and add the Hadoop 2-compatible pig-withouthadoop.jar to the Pig installation directory (uploaded in
Knowmax at the same path)
Step 3: To open pig grunt
$pig
