The fundamental objective of this Hadoop Partitioner tutorial is to give you a detailed explanation of each component involved. In this post, we cover what a Hadoop Partitioner is, why a Partitioner is needed in Hadoop, and what poor partitioning looks like and how to overcome it.
Let us understand what Hadoop Partitioner is.
What is Partitioner in Hadoop?
A Partitioner controls how the outputs of the map stage are distributed to the reducers.
The Partitioner partitions the keys of the intermediate map outputs. The key, or a subset of the key, is used to derive the partition, typically via a hash function.
By default, the Hadoop framework uses a hash-based partitioner. Its hash function derives the partition from the key.
The partitioner works on each mapper's output, based on the key. Records with the same key go into the same partition within each mapper, and each final partition is then sent to a single reducer.
The partitioner class decides which partition a given key-value pair will go to. The partitioning phase sits between the map and reduce phases.
Let’s see why there is a need for a Hadoop Partitioner.
What is the Need for a Hadoop Partitioner?
In a MapReduce job, an input data set is taken and a list of key-value pairs is produced. These pairs are formed in the map phase: the input data is split, and each map task processes its split, emitting a list of key-value pairs.
The map output is then partitioned, right before the reduce phase, based on the key. This way, all values for the same key are grouped together and go to the same reducer.
Hadoop MapReduce partitioning thus ensures that each key goes to the right reducer, allowing for an even distribution of the mapper output over the reducers.
Syntax of Hash Partitioner in Hadoop
Here is the default syntax of a hash partitioner in Hadoop.
public int getPartition(K key, V value, int numReduceTasks) {
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}
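To see how this formula behaves, here is a small plain-Java sketch with no Hadoop dependencies (the class and method names are our own; the arithmetic mirrors the default hash partitioner above):

```java
public class HashPartitionDemo {

    // Mirrors the default hash partitioner logic: mask off the sign bit
    // so the result is non-negative, then take the remainder modulo the
    // number of reduce tasks.
    static int hashPartition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int numReduceTasks = 3;
        for (String key : new String[] {"Male", "Female", "Male"}) {
            System.out.println(key + " -> partition " + hashPartition(key, numReduceTasks));
        }
        // The same key always maps to the same partition, and every
        // partition index lies in the range [0, numReduceTasks).
    }
}
```

Because the partition depends only on the key's hash code, both occurrences of "Male" above land in the same partition, which is exactly what guarantees that one reducer sees all values for a given key.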
Implementation of Hadoop Partitioner
To see an example of the use of Hadoop Partitioner in practical applications, let us look at the table below containing data for the residents in a block in a building.
| Flat Number | Name | Gender | Family Members | Electricity Bill |
| --- | --- | --- | --- | --- |
| 1101 | Manisha | Female | 3 | 1500 |
Now let’s write a program to find the highest electricity bill by gender within different family-size groups – fewer than 2 members, 2 to 3 members, and 4 or more members.
The given data gets saved as input.txt in the directory “/home/Hadoop/HadoopPartitioner”.
The key follows a pattern – special key + file name + line number. For example,
key = input@1
For this, value would be
value = 1101 \t Manisha \t Female \t 3 \t 1500
Here’s how the operation would go:
- Read the value
- Use the split function to separate the fields and store the gender in a string variable

String[] str = value.toString().split("\t");
String gender = str[2];

- Send the gender and the record value as a key-value pair to the partition task

context.write(new Text(gender), new Text(value));
- Repeat for all records
As output, you will get the gender and the record value as key-value pairs from each mapper.
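The mapper steps above can be sketched in plain Java, omitting the Hadoop Mapper boilerplate (the class and method names here are our own; in a real job this logic would live inside the mapper's map() method):

```java
public class GenderExtractDemo {

    // Given one tab-separated record, return the gender field (third column),
    // which the mapper emits as the key of the key-value pair.
    static String extractGender(String record) {
        String[] fields = record.split("\t");
        return fields[2];
    }

    public static void main(String[] args) {
        String value = "1101\tManisha\tFemale\t3\t1500";
        // In the real mapper this would be:
        // context.write(new Text(gender), new Text(value));
        System.out.println(extractGender(value) + " -> " + value);
    }
}
```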
Here’s how the partitioner task would go.
First, the partitioner will take the key and value pairs sent to it as input. Now, it will divide the data into different segments.
key = gender field value
value = record value of that gender
Here’s how the process will follow.
- Read the family-members value from the key-value pair

String[] str = value.toString().split("\t");
int familymembers = Integer.parseInt(str[3]);

- Check the family-members value against the following conditions:
- Family members fewer than 2
- Family members from 2 to 3
- Family members 4 or more

if (familymembers < 2)
    return 0 % numReduceTasks;
else if (familymembers >= 2 && familymembers <= 3)
    return 1 % numReduceTasks;
else
    return 2 % numReduceTasks;
The data of key and value pairs will be segmented into the three given collections.
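Putting the conditions together, the partitioner's decision can be sketched as a standalone Java method (the class and method names are our own; in a real job this body would sit inside getPartition of a custom partitioner class):

```java
public class FamilySizePartitionDemo {

    // Route each record to one of three partitions based on the
    // family-members field: fewer than 2, 2 to 3, or 4 and more.
    static int partitionByFamilySize(int familyMembers, int numReduceTasks) {
        if (familyMembers < 2)
            return 0 % numReduceTasks;
        else if (familyMembers >= 2 && familyMembers <= 3)
            return 1 % numReduceTasks;
        else
            return 2 % numReduceTasks;
    }

    public static void main(String[] args) {
        int numReduceTasks = 3;
        System.out.println(partitionByFamilySize(1, numReduceTasks)); // 0
        System.out.println(partitionByFamilySize(3, numReduceTasks)); // 1
        System.out.println(partitionByFamilySize(5, numReduceTasks)); // 2
    }
}
```

The modulo by numReduceTasks keeps the returned index valid even when the job is configured with fewer than three reducers.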
Poor Partitioning and Overcoming it
Let us assume you can predict that one of the keys in your input data will appear far more often than any other key. You might then want to send all records with that frequent key to one dedicated partition and distribute the remaining keys over all other partitions by their hashCode().
So, now you have two mechanisms of sending information to partitions:
- First, the key showing up more will be sent to one partition
- Second, all the remaining keys will be sent to partitions as per their hashCode().
Now, let’s say your hashCode() method does not distribute the remaining keys evenly over the partitions. The data then isn’t evenly distributed across partitions and reducers, because each partition corresponds to one reducer.
As a result, certain reducers will receive more data than others, and the remaining reducers will have to wait for the overloaded reducer (the one handling the frequent key) because of the load at hand.
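The two mechanisms above can be sketched as a custom partitioning rule in plain Java (the hot-key value, class, and method names are illustrative assumptions; a real implementation would override getPartition in a custom partitioner):

```java
public class SkewAwarePartitionDemo {

    // Reserve partition 0 for the known frequent (hot) key, and spread
    // all other keys over the remaining partitions by their hash code.
    static int skewAwarePartition(String key, String hotKey, int numReduceTasks) {
        if (key.equals(hotKey))
            return 0;
        return 1 + (key.hashCode() & Integer.MAX_VALUE) % (numReduceTasks - 1);
    }

    public static void main(String[] args) {
        int numReduceTasks = 4;
        System.out.println(skewAwarePartition("hot", "hot", numReduceTasks));   // always 0
        System.out.println(skewAwarePartition("other", "hot", numReduceTasks)); // 1, 2, or 3
    }
}
```

Note that if the hot key's data still dwarfs everything else, its dedicated reducer remains the straggler; in that case the hot key's values themselves would need to be split further, which is beyond this tutorial's scope.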
In this case, you should follow a methodology that would share the data across different reducers. Learn more about Hadoop with our Hadoop ultimate tutorial.
We hope that this guide on Hadoop Partitioners was helpful to you. For more information on this subject, get in touch with the experts at upGrad, and we will help you sail through.
If you are interested in learning more about Big Data, check out our PG Diploma in Software Development Specialization in Big Data program, which is designed for working professionals and provides 7+ case studies & projects, covers 14 programming languages & tools, and offers practical hands-on workshops, more than 400 hours of rigorous learning, and job placement assistance with top firms.