
Data Processing In Hadoop: Hadoop Components Explained [2024]

Last updated: 2nd Oct, 2022

With the exponential growth of the World Wide Web over the years, the data being generated has also grown exponentially. This led to a massive amount of data being created, and it became difficult to process and store this humongous amount of data with traditional relational database systems.

Also, the data created was not only in a structured form but also in unstructured formats such as videos and images. This kind of data cannot be processed by relational databases. Hadoop came into existence to counter these issues.

Before we dive into data processing in Hadoop, let us have an overview of Hadoop and its components. Apache Hadoop is a framework that allows the storing and processing of huge quantities of data in a swift and efficient manner. It can be used to store huge quantities of structured and unstructured data. Learn more about the Hadoop ecosystem and components.

The pivotal building blocks of Hadoop are as follows:


Building Blocks of Hadoop

1. HDFS (The storage layer)

As the name suggests, Hadoop Distributed File System is the storage layer of Hadoop and is responsible for storing the data in a distributed environment (master and slave configuration). It splits the data into several blocks of data and stores them across different data nodes. These data blocks are also replicated across different data nodes to prevent loss of data when one of the nodes goes down.

It has two main processes running to manage the data:

a. NameNode

It runs on the master machine. It saves the locations of all the files stored in the file system and tracks where the data resides across the cluster, i.e., it stores the metadata of the files. When client applications want to perform certain operations on the data, they interact with the NameNode. When the NameNode receives the request, it responds by returning a list of DataNode servers where the required data resides.


b. DataNode

This process runs on every slave machine. One of its functions is to store each HDFS data block in a separate file in its local file system. In other words, it holds the actual data in the form of blocks. It sends heartbeat signals to the NameNode periodically and waits for requests from the NameNode to access the data.
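
To make the NameNode/DataNode split concrete, here is a minimal sketch using the standard HDFS Java client (org.apache.hadoop.fs.FileSystem). The path /user/demo/input.txt is only a placeholder. The block-location query is answered by the NameNode from its metadata, and the hosts it returns are the DataNodes that hold the replicated blocks.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();       // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);           // client handle to HDFS
        Path file = new Path("/user/demo/input.txt");   // placeholder path

        // Metadata query answered by the NameNode: which blocks, on which DataNodes?
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

        for (BlockLocation block : blocks) {
            System.out.println("offset=" + block.getOffset()
                    + " length=" + block.getLength()
                    + " hosts=" + String.join(",", block.getHosts()));
        }
        fs.close();
    }
}

Reading the file itself (via fs.open(file)) would then stream the bytes directly from those DataNodes rather than through the NameNode.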

2. MapReduce (The processing layer)

It is a Java-based programming model used on top of the Hadoop framework for faster processing of huge quantities of data. It processes this huge amount of data in a distributed environment using many DataNodes, which enables parallel processing and faster execution of operations in a fault-tolerant way.

A MapReduce job splits the data set into multiple chunks of data, which are further converted into key-value pairs so that they can be processed by the mappers. The raw format of the data may not be suitable for processing. Thus, input data compatible with the map phase is generated using InputSplit and the RecordReader.

InputSplit is the logical representation of the data which is to be processed by an individual mapper. RecordReader converts these splits into records which take the form of key-value pairs. It basically converts the byte-oriented representation of the input into a record-oriented representation.

These records are then fed to the mappers for further processing of the data. MapReduce jobs primarily consist of three phases, namely the Map phase, the Shuffle and Sort phase, and the Reduce phase.


a. Map Phase

It is the first phase in the processing of the data. The main task in the map phase is to process each input from the RecordReader and convert it into intermediate tuples (key-value pairs). This intermediate output is stored on the local disk by the mappers.

The values of these key-value pairs can differ from the ones received as input from the RecordReader. The map phase can also contain combiners, which are also called local reducers. They perform aggregations on the data, but only within the scope of one mapper.
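
As an illustration of the map phase, here is a minimal word-count style mapper written against Hadoop's org.apache.hadoop.mapreduce API; it is a generic sketch, not code from this article. With the default TextInputFormat, the RecordReader hands it (byte offset, line of text) pairs, and it emits one intermediate (word, 1) pair per token.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // One intermediate (word, 1) pair per token in the input line
        for (String token : line.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}

Because summing counts is associative and commutative, the reducer shown later can also be registered as the combiner (via job.setCombinerClass), which gives exactly the per-mapper local aggregation described above.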

As the computations are performed across different DataNodes, it is essential that all the values associated with the same key end up at the same reducer. This task is performed by the partitioner: it applies a hash function to the key of each key-value pair to decide which reducer that pair is sent to.

It also ensures that the work is partitioned evenly across the reducers. Partitioners generally come into the picture when we are working with more than one reducer.
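
The default HashPartitioner behaves exactly as described: it hashes the key and takes the result modulo the number of reducers. A hand-rolled equivalent for the word-count sketch (shown only to make the mechanism visible) could look like this:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class WordPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Same key => same hash => same reducer; mask off the sign bit
        // so the modulo result is never negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}

It would be registered with job.setPartitionerClass(WordPartitioner.class) and only matters once job.setNumReduceTasks is set to more than one.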

b. Shuffle and Sort Phase

This phase transfers the intermediate output obtained from the mappers to the reducers. This process is called shuffling. The output from the mappers is also sorted before being transferred to the reducers. The sorting is done on the basis of the keys in the key-value pairs. It helps the reducers start their computations before the entire data set has been received and eventually helps in reducing the time required for the computations.

As the keys are sorted, whenever the reducer gets a different key as the input it starts to perform the reduce tasks on the previously received data.

c. Reduce Phase

The output of the map phase, after the shuffle and sort, serves as the input to the reduce phase. It takes these key-value pairs and applies the reduce function to them to produce the desired result. The key and the values associated with it are passed to the reduce function to perform certain operations.

We can filter the data or combine it to obtain an aggregated output. After the execution of the reduce function, zero or more key-value pairs can be created. This result is written back to the Hadoop Distributed File System.
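
Continuing the word-count sketch, a matching reducer receives each key together with all of its shuffled values and sums them; the RecordWriter of the configured OutputFormat then writes the resulting (word, total) pairs back to HDFS.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        // All values for one key arrive together after the shuffle and sort phase,
        // e.g. ("hadoop", [1, 1, 1]) becomes ("hadoop", 3).
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        total.set(sum);
        context.write(word, total);
    }
}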

3. YARN (The management layer)

Yet Another Resource Negotiator is the resource-managing component of Hadoop. There are background processes running at each node (the Node Manager on the slave machines and the Resource Manager on the master node) that communicate with each other for the allocation of resources. The Resource Manager is the centrepiece of the YARN layer; it manages resources among all the applications and passes requests on to the Node Managers.

The Node Manager monitors the resource utilisation of its machine, such as memory, CPU, and disk, and conveys the same to the Resource Manager. It is installed on every DataNode and is responsible for executing tasks on the DataNodes.
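
As a hedged illustration of the Resource Manager's cluster-wide view, the sketch below uses the YARN client API (org.apache.hadoop.yarn.client.api.YarnClient) to list what each Node Manager has reported. It assumes a reasonably recent Hadoop release; older versions expose Resource.getMemory() instead of getMemorySize().

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class ClusterNodesDemo {
    public static void main(String[] args) throws Exception {
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(new Configuration());   // reads yarn-site.xml for the Resource Manager address
        yarn.start();

        // One NodeReport per Node Manager, as aggregated by the Resource Manager
        List<NodeReport> nodes = yarn.getNodeReports(NodeState.RUNNING);
        for (NodeReport node : nodes) {
            System.out.println(node.getNodeId()
                    + " capacity=" + node.getCapability().getMemorySize() + "MB/"
                    + node.getCapability().getVirtualCores() + " vcores"
                    + " used=" + node.getUsed().getMemorySize() + "MB");
        }
        yarn.stop();
    }
}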


Must Read: Top 10 Hadoop Tools for Big Data Engineers

Conclusion 


The entire workflow for data processing on Hadoop can be summarised as follows (a driver sketch that wires these steps together appears after the list):

  • InputSplit logically splits the data residing on HDFS into several blocks of data. The decision on how to split the data is made by the InputFormat.
  • The data is converted into key-value pairs by the RecordReader, which turns the byte-oriented data into record-oriented data. This data serves as the input to the mapper.
  • The mapper, which is simply a user-defined function, processes these key-value pairs and generates intermediate key-value pairs for further processing.
  • These pairs are locally reduced (within the scope of one mapper) by the combiners to reduce the amount of data to be transferred from the mapper to the reducer.
  • The partitioner ensures that all the values with the same key are sent to the same reducer and that the work is evenly distributed amongst the reducers.
  • These intermediate key-value pairs are then shuffled to the reducers and sorted on the basis of keys. This outcome is fed to the reducers as input.
  • The reduce function aggregates the values for each key, and the result is stored back into HDFS using the RecordWriter. Before it is written back to HDFS, the format in which the data should be written is decided by the OutputFormat.
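
A driver that wires these steps together might look like the sketch below. It assumes the WordCountMapper, WordCountReducer, and WordPartitioner classes sketched earlier sit in the same package, and the input and output paths are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count sketch");
        job.setJarByClass(WordCountDriver.class);

        job.setInputFormatClass(TextInputFormat.class);    // InputFormat decides the InputSplits
        job.setMapperClass(WordCountMapper.class);
        job.setCombinerClass(WordCountReducer.class);       // local, per-mapper aggregation
        job.setPartitionerClass(WordPartitioner.class);
        job.setReducerClass(WordCountReducer.class);
        job.setNumReduceTasks(2);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setOutputFormatClass(TextOutputFormat.class);   // its RecordWriter writes results to HDFS

        FileInputFormat.addInputPath(job, new Path("/user/demo/input"));     // placeholder paths
        FileOutputFormat.setOutputPath(job, new Path("/user/demo/output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Submitting it with hadoop jar hands the job to YARN, which schedules the map and reduce tasks across the Node Managers.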

If you are interested in knowing more about Big Data, check out our Advanced Certificate Programme in Big Data from IIIT Bangalore.

Learn Software Development Courses online from the World’s top Universities. Earn Executive PG Programs, Advanced Certificate Programs or Masters Programs to fast-track your career.

Rohit Sharma

Blog Author
Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.
Frequently Asked Questions (FAQs)

1. Between Hadoop and MapReduce, which one is a better choice?

The two are not really competing choices, because MapReduce is the processing component of Hadoop. Hadoop provides the storage framework, including the NameNode and DataNodes, and makes distributed storage and processing of data a hassle-free task. MapReduce, on the other hand, is the programming model that sits on top of it and sorts and processes data as key-value pairs, using a distributed algorithm to implement, generate, and process big data sets. Hadoop is open-source and its clusters are scalable; MapReduce offers high availability and fault tolerance. MapReduce programs are typically written in Java, whereas the wider Hadoop ecosystem supports multiple programming languages depending on the module.

2. What is the hardware configuration of the NameNode and DataNode?

The hardware configuration of a node depends on a number of factors and varies from one node to another; configurations are sized according to how heavily the cluster is used. A typical NameNode uses two quad-core CPUs running at 2 GHz with 128 GB of RAM, 10 Gigabit Ethernet, and about 6 TB of Serial ATA disk space. A typical DataNode also uses two quad-core CPUs running at 2 GHz, but with 64 GB of RAM, 10 Gigabit Ethernet, and about 24 TB of Serial ATA disk space.

3. What are some of the techniques for MapReduce job optimization?

First and foremost, proper cluster configuration is necessary to improve input-output performance. It is also important to keep an eye on graphs, network usage reports, and performance metrics, and the hard drives need to be monitored constantly to analyse their health. Using LZO compression is another useful technique for MapReduce job optimisation: compressing the map output that Hadoop jobs create reduces the amount of data shuffled from the mappers to the reducers. LZO adds some CPU overhead, but the savings in disk and network I/O usually make it a worthwhile trade-off.
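
As a sketch of that idea, map-output compression is switched on through the job configuration. The SnappyCodec used below ships with Hadoop; with the separately installed hadoop-lzo library, com.hadoop.compression.lzo.LzoCodec could be substituted as the codec class.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;

public class CompressedMapOutputConfig {
    public static Job configure() throws Exception {
        Configuration conf = new Configuration();
        // Compress intermediate map output to cut shuffle traffic between mappers and reducers.
        conf.setBoolean("mapreduce.map.output.compress", true);
        // Swap in com.hadoop.compression.lzo.LzoCodec here if hadoop-lzo is installed.
        conf.setClass("mapreduce.map.output.compress.codec", SnappyCodec.class, CompressionCodec.class);
        return Job.getInstance(conf, "compressed map output sketch");
    }
}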
