How to Install Hadoop with Step-by-Step Configuration on Linux Ubuntu

In this tutorial, we will take you through the step-by-step process of installing Apache Hadoop on a Linux box (Ubuntu). This is a 2-part process.

There are 2 prerequisites: you must have Ubuntu installed and running, and you must have Java installed (Hadoop runs on the JVM, as we will see when setting JAVA_HOME below).

Part 1) Download and Install Hadoop

Step 1) Add a Hadoop system user using the commands below. First, create a dedicated group for Hadoop:

sudo addgroup hadoop_

Then create the user hduser_ in this group:

sudo adduser --ingroup hadoop_ hduser_

Enter your password, name and other details.
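
To confirm that the new user and group exist (a quick verification step, not part of the original instructions), you can run:

id hduser_

The output should list hadoop_ among the user's groups.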

NOTE: You may encounter the following error during this setup and installation process:

“hduser is not in the sudoers file. This incident will be reported.”

This error can be resolved by logging in as the root user.

Then execute the command below to add hduser_ to the sudo group:

sudo adduser hduser_ sudo

Re-login as hduser_

Step 2) Configure SSH

In order to manage nodes in a cluster, Hadoop requires SSH access.

First, switch to the new user by entering the following command:

su - hduser_

The following command will create a new RSA key pair with an empty passphrase, so that Hadoop's scripts can connect over SSH without prompting for a password:

ssh-keygen -t rsa -P ""

Enable SSH access to the local machine using this key:

cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
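
On some systems, sshd will refuse to use the authorized_keys file unless its permissions are restrictive. If the login test below fails even though SSH is installed, tightening the permissions (an extra precaution, not part of the original steps) may help:

chmod 0600 $HOME/.ssh/authorized_keys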

Now test the SSH setup by connecting to localhost as the hduser_ user (type exit afterwards to return to your original shell).

ssh localhost

Note: If you see an error in response to 'ssh localhost', there is a possibility that SSH is not available on this system.

To resolve this, first purge any existing SSH installation using:

sudo apt-get purge openssh-server

It is good practice to purge an existing SSH installation before starting a fresh one.

Then install SSH using the command:

sudo apt-get install openssh-server
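
To check that the SSH daemon is now running (an extra verification step, assuming Ubuntu's standard service tooling), you can use:

sudo service ssh status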

Step 3) The next step is to download Hadoop from the Apache Hadoop releases page (https://hadoop.apache.org/releases.html).

Select a stable release.

Select the binary tar.gz file (not the file with src in its name, which contains the source code).

Once the download is complete, navigate to the directory containing the tar file.

Enter:

sudo tar xzf hadoop-2.2.0.tar.gz
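
If the extraction succeeds, a hadoop-2.2.0 directory will appear alongside the archive; you can confirm this (a quick check, not in the original steps) with:

ls -d hadoop-2.2.0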

Now, rename hadoop-2.2.0 to hadoop:

sudo mv hadoop-2.2.0 hadoop

Then change the ownership of this directory to the Hadoop user and group created earlier:

sudo chown -R hduser_:hadoop_ hadoop
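
You can verify the ownership change (another optional check) with:

ls -ld hadoop

The listing should show hduser_ as the owner and hadoop_ as the group.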

Part 2) Configure Hadoop

Step 1) Modify ~/.bashrc file

Add the following lines to the end of the file ~/.bashrc:

#Set HADOOP_HOME
export HADOOP_HOME=<Installation Directory of Hadoop>
#Set JAVA_HOME
export JAVA_HOME=<Installation Directory of Java>
# Add bin/ directory of Hadoop to PATH
export PATH=$PATH:$HADOOP_HOME/bin
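
Optionally, you can also put Hadoop's sbin/ directory on the PATH, so the start and stop scripts used later in this tutorial can be run without typing their full path. This line is an optional addition, not part of the original configuration:

export PATH=$PATH:$HADOOP_HOME/sbin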

Now, source this environment configuration using the command below:

. ~/.bashrc

Step 2) Configurations related to HDFS

Set JAVA_HOME inside the file $HADOOP_HOME/etc/hadoop/hadoop-env.sh by replacing its existing JAVA_HOME line with the absolute path of your Java installation.
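
For example, on a typical Ubuntu system with OpenJDK installed under /usr/lib/jvm (adjust the path to match your own Java installation; running readlink -f $(which java) can help you locate it):

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64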

There are two parameters in $HADOOP_HOME/etc/hadoop/core-site.xml which need to be set:

1. 'hadoop.tmp.dir' – Used to specify a directory which will be used by Hadoop to store its data files.

2. 'fs.defaultFS' – Specifies the default file system.

To set these parameters, open core-site.xml

sudo gedit $HADOOP_HOME/etc/hadoop/core-site.xml

Copy the lines below in between the tags <configuration> and </configuration>:

<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>Parent directory for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. </description>
</property>
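
Hadoop reads these files as plain XML, so a stray typo here can prevent startup. If you have the libxml2-utils package installed (an optional check, not part of the original steps), you can confirm the file is still well-formed:

xmllint --noout $HADOOP_HOME/etc/hadoop/core-site.xml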

Navigate to the directory $HADOOP_HOME/etc/hadoop

Now, create the directory mentioned in core-site.xml

sudo mkdir -p <Path of Directory used in above setting>

Grant permissions to the directory

sudo chown -R hduser_:hadoop_ <Path of Directory created in above step>

sudo chmod 750 <Path of Directory created in above step>
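
With the /app/hadoop/tmp directory configured in core-site.xml above, these three commands become:

sudo mkdir -p /app/hadoop/tmp
sudo chown -R hduser_:hadoop_ /app/hadoop/tmp
sudo chmod 750 /app/hadoop/tmp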

Step 3) MapReduce Configuration

Before you begin with these configurations, let's set the HADOOP_HOME path system-wide:

sudo gedit /etc/profile.d/hadoop.sh

And enter:

export HADOOP_HOME=/home/guru99/Downloads/hadoop

Next, make the script executable:

sudo chmod +x /etc/profile.d/hadoop.sh

Exit the terminal and restart it.

Type echo $HADOOP_HOME to verify the path.

Now copy the template file to create mapred-site.xml:

sudo cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml

Open the mapred-site.xml file

sudo gedit $HADOOP_HOME/etc/hadoop/mapred-site.xml

Add the following settings between the <configuration> and </configuration> tags:

<property>
<name>mapreduce.jobtracker.address</name>
<value>localhost:54311</value>
<description>MapReduce job tracker runs at this host and port.
</description>
</property>

Open $HADOOP_HOME/etc/hadoop/hdfs-site.xml as below:

sudo gedit $HADOOP_HOME/etc/hadoop/hdfs-site.xml

Add the following settings between the <configuration> and </configuration> tags:

<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hduser_/hdfs</value>
</property>

Create the directory specified in the above setting:

sudo mkdir -p <Path of Directory used in above setting>
sudo mkdir -p /home/hduser_/hdfs

Grant permissions to the directory:

sudo chown -R hduser_:hadoop_ <Path of Directory created in above step>
sudo chown -R hduser_:hadoop_ /home/hduser_/hdfs

sudo chmod 750 <Path of Directory created in above step>
sudo chmod 750 /home/hduser_/hdfs

Step 4) Before we start Hadoop for the first time, format HDFS using the command below. Note that formatting is required only before the first start; running it again on an existing installation will wipe the HDFS metadata.

$HADOOP_HOME/bin/hdfs namenode -format

Step 5) Start the Hadoop single-node cluster using the commands below.

$HADOOP_HOME/sbin/start-dfs.sh

This command starts the HDFS daemons (NameNode, DataNode, and SecondaryNameNode). Next, start YARN:

$HADOOP_HOME/sbin/start-yarn.sh

Using the 'jps' tool/command, verify whether all the Hadoop-related processes are running.

If Hadoop has started successfully, the output of jps should show NameNode, NodeManager, ResourceManager, SecondaryNameNode, and DataNode.
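
A healthy single-node setup produces output along these lines (the process IDs shown here are placeholders; yours will differ):

4866 NameNode
5112 DataNode
5385 SecondaryNameNode
5550 ResourceManager
5847 NodeManager
5925 Jps

You can also open the NameNode web interface at http://localhost:50070 (the default HTTP port in Hadoop 2.x) to confirm that HDFS is up.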

Step 6) Stopping Hadoop

$HADOOP_HOME/sbin/stop-dfs.sh

$HADOOP_HOME/sbin/stop-yarn.sh
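
After both stop scripts finish, you can run jps once more; if shutdown succeeded, none of the Hadoop daemons should appear in its output:

jps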
