With the exponential growth of the World Wide Web over the years, the amount of data being generated has grown exponentially as well. This massive volume of data became difficult to store and process with traditional relational database systems.
Moreover, the data being created was not only structured but also unstructured, such as videos and images. Relational databases cannot process this kind of data. Hadoop came into existence to address these issues.
Before we dive into data processing with Hadoop, let us take a quick look at Hadoop and its components. Apache Hadoop is a framework that allows huge quantities of data to be stored and processed swiftly and efficiently. It can store huge quantities of both structured and unstructured data.
The pivotal building blocks of Hadoop are as follows:
Building Blocks of Hadoop
1. HDFS (The storage layer)
As the name suggests, Hadoop Distributed File System is the storage layer of Hadoop and is responsible for storing the data in a distributed environment (master and slave configuration). It splits the data into several blocks of data and stores them across different data nodes. These data blocks are also replicated across different data nodes to prevent loss of data when one of the nodes goes down.
It has two main processes running to manage the data:
a. NameNode
The NameNode runs on the master machine. It saves the locations of all the files stored in the file system and tracks where the data resides across the cluster, i.e. it stores the metadata of the files. When client applications want to perform operations on the data, they interact with the NameNode. Upon receiving a request, the NameNode responds by returning a list of DataNode servers where the required data resides.
b. DataNode
This process runs on every slave machine. One of its functions is to store each HDFS data block in a separate file on its local file system; in other words, it holds the actual data in the form of blocks. It sends heartbeat signals to the NameNode periodically and waits for requests from the NameNode to access the data.
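To make the NameNode and DataNode roles concrete, here is a minimal sketch of reading a file from HDFS through Hadoop's Java FileSystem API. The NameNode address and file path are illustrative assumptions and depend on your cluster.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; replace with your cluster's value.
        conf.set("fs.defaultFS", "hdfs://namenode-host:9000");

        try (FileSystem fs = FileSystem.get(conf);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(fs.open(new Path("/data/sample.txt"))))) {
            // The client first asks the NameNode for the block locations of the file,
            // then streams each block directly from the DataNodes that hold it.
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```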
2. MapReduce (The processing layer)
MapReduce is a Java-based programming model used on top of the Hadoop framework for faster processing of huge quantities of data. It processes this data in a distributed environment across many DataNodes, which enables parallel processing and faster, fault-tolerant execution of operations.
A MapReduce job splits the data set into multiple chunks, which are further converted into key-value pairs so that the mappers can process them. The raw format of the data may not be suitable for processing, so input compatible with the map phase is generated using InputSplits and a RecordReader.
InputSplit is the logical representation of the data which is to be processed by an individual mapper. RecordReader converts these splits into records which take the form of key-value pairs. It basically converts the byte-oriented representation of the input into a record-oriented representation.
These records are then fed to the mappers for further processing. MapReduce jobs primarily consist of three phases, namely the Map phase, the Shuffle and Sort phase, and the Reduce phase.
a. Map Phase
It is the first phase in the processing of the data. The main task in the map phase is to process each input record from the RecordReader and convert it into intermediate key-value pairs (tuples). The mappers store this intermediate output on their local disk.
The values of these key-value pairs can differ from the ones received as input from the RecordReader. The map phase can also include combiners, which are also called local reducers. They perform aggregations on the data, but only within the scope of a single mapper.
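As an illustration of the map phase, here is a minimal word-count Mapper sketch using Hadoop's Java MapReduce API; the class name WordCountMapper is illustrative. Each line handed over by the RecordReader is split into words, and an intermediate (word, 1) pair is emitted for each word.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // key   = byte offset of the line within the split (as produced by TextInputFormat)
        // value = the line of text itself
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);   // emit the intermediate key-value pair
            }
        }
    }
}
```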
As the computations are performed across different DataNodes, it is essential that all the values associated with the same key end up at the same reducer. This task is performed by the partitioner: it applies a hash function to the key of each pair to decide which reducer that pair is sent to.
It also helps to distribute the load evenly across the reducers. Partitioners generally come into the picture when we are working with more than one reducer.
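By default Hadoop routes pairs with its HashPartitioner, which behaves essentially like the sketch below: hash the key and take the result modulo the number of reducers, so every pair with the same key reaches the same reducer. The class name WordPartitioner is illustrative.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class WordPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Mask off the sign bit so the partition index is never negative,
        // then map the key's hash onto one of the available reducers.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```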
b. Shuffle and Sort Phase
This phase transfers the intermediate output obtained from the mappers to the reducers. This process is called shuffling. The output from the mappers is also sorted before being transferred to the reducers, on the basis of the keys in the key-value pairs. This lets the reducers start computing on the data before the entire data set is received, which eventually reduces the time required for the computations.
As the keys are sorted, whenever the reducer gets a different key as the input it starts to perform the reduce tasks on the previously received data.
c. Reduce Phase
The output of the map phase serves as the input to the reduce phase. It takes these key-value pairs and applies the reduce function to them to produce the desired result. The key and all the values associated with it are passed to the reduce function, which performs the required operations.
We can filter the data or combine it to obtain an aggregated output. After the reduce function executes, it can produce zero or more key-value pairs. This result is written back to the Hadoop Distributed File System.
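Continuing the word-count illustration, here is a minimal Reducer sketch: all the counts that share a word arrive together, are summed, and the aggregated pair is written out. The class name WordCountReducer is illustrative.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : values) {
            sum += count.get();          // aggregate all values received for this key
        }
        result.set(sum);
        context.write(key, result);      // final (word, total) pair written via the OutputFormat
    }
}
```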
3. YARN (The management layer)
Yet Another Resource Negotiator (YARN) is the resource-managing component of Hadoop. Background processes run on each node (the NodeManager on the slave machines and the ResourceManager on the master node) and communicate with each other for the allocation of resources. The ResourceManager is the centrepiece of the YARN layer; it manages resources among all the applications and passes the requests on to the NodeManagers.
The NodeManager monitors the resource utilisation of its machine, such as memory, CPU, and disk, and conveys this to the ResourceManager. It is installed on every DataNode and is responsible for executing the tasks on that node.
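As a small illustration of this division of labour, the sketch below (assuming access to a running YARN cluster whose configuration is on the classpath) asks the ResourceManager for the node reports that the NodeManagers have sent it.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class ClusterNodesExample {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new Configuration());   // assumes yarn-site.xml is on the classpath
        yarnClient.start();

        // Each NodeReport reflects what a NodeManager has reported to the ResourceManager.
        for (NodeReport node : yarnClient.getNodeReports(NodeState.RUNNING)) {
            System.out.println(node.getNodeId() + "  used=" + node.getUsedResource()
                    + "  capacity=" + node.getCapability());
        }
        yarnClient.stop();
    }
}
```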
The entire workflow for data processing on Hadoop can be summarised as follows:
- InputSplit logically splits the data that resides on HDFS into several blocks of data. How the data is split is decided by the InputFormat.
- The data is converted into key-value pairs by RecordReader. RecordReader converts the byte-oriented data to record-oriented data. This data serves as the input to the mapper.
- The mapper, which is nothing but a user-defined function, processes these key-value pairs and generates intermediate key-value pairs for further processing.
- These pairs are locally reduced (within the scope of one mapper) by the combiners to reduce the amount of data to be transferred from the mapper to the reducer.
- The partitioner ensures that all the values associated with the same key are sent to the same reducer and that the work is distributed evenly amongst the reducers.
- These intermediate key-value pairs are then shuffled to the reducers and sorted on the basis of keys. This outcome is fed to the reducers as input.
- The reduce function aggregates the values for each key, and the result is stored back into HDFS using the RecordWriter. Before it is written back to HDFS, the OutputFormat decides the format in which the data should be written.
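The driver below is a sketch of how the workflow above is wired together in a single Hadoop job, reusing the illustrative WordCountMapper, WordPartitioner, and WordCountReducer classes sketched earlier; the input and output paths are taken from the command line.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setInputFormatClass(TextInputFormat.class);     // defines InputSplits and the RecordReader
        job.setMapperClass(WordCountMapper.class);           // map phase
        job.setCombinerClass(WordCountReducer.class);        // local reduce within each mapper
        job.setPartitionerClass(WordPartitioner.class);      // routes keys to reducers
        job.setReducerClass(WordCountReducer.class);         // reduce phase
        job.setOutputFormatClass(TextOutputFormat.class);    // how results are written back to HDFS

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Reusing the reducer class as the combiner works in this word-count sketch because summing counts is both associative and commutative, so partial sums computed on each mapper do not change the final result.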