
Understanding the Concept of Hierarchical Clustering in Data Analysis: Functions, Types & Steps

Last updated: 8th Apr, 2023

Clustering refers to the grouping of similar data points into groups, or clusters, in data analysis. These clusters help data analysts organise similar data points into one group while also differentiating them from other data points that are not similar. 

Hierarchical clustering of data is one of the methods used to group data into a tree of clusters. It is one of the most popular and useful approaches to data grouping. If you want to be a part of the growing field of data science and data analysis, hierarchical clustering is one of the most important things to learn.

This article will help you understand the nature of hierarchical clustering, its function, types and advantages. 

What is Hierarchical Clustering?

As the name suggests, hierarchical clustering groups different data into clusters in a hierarchical or tree format. Every data point is treated as a separate cluster in this method. Hierarchical cluster analysis is very popular amongst data scientists and data analysts as it summarises the data into a manageable hierarchy of clusters that is easier to analyse. 

A hierarchical clustering algorithm takes multiple data points and merges the two closest ones into a cluster. It repeats this step until all the data points have been combined into one cluster. The process can also be inverted to divide one single merged cluster into smaller clusters and, ultimately, into individual data points. 

The hierarchical method of clustering can be visually represented as a dendrogram, which is a tree-like diagram. A dendrogram can be cut off at any point during the clustering process, once the desired number of clusters has been formed. This also makes the process of analysing the data easier. 
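To make the idea of cutting the dendrogram concrete, here is a minimal sketch using SciPy’s linkage and fcluster functions; the six-point dataset and the choice of three clusters are assumptions made purely for illustration.

Python

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# A small made-up 2D dataset (assumption, for illustration only)
points = np.array([[1, 1], [1, 2], [5, 5], [6, 5], [10, 1], [10, 2]])

# Build the full merge tree (the linkage matrix encodes every merge)
Z = linkage(points, method='ward')

# "Cut" the dendrogram so that exactly three clusters remain
labels = fcluster(Z, t=3, criterion='maxclust')
print(labels)  # e.g. [1 1 2 2 3 3] -- one cluster id per point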

How does Hierarchical Clustering work?

The process of hierarchical clustering is quite simple to understand. A hierarchical clustering algorithm treats every available data point as a separate cluster. Then, it identifies the two clusters that are the most similar and merges them into one. After that, the system keeps repeating these steps until all the data points merge into one large cluster. The process can also be stopped once the required number of clusters is available for analysis. 

The progress and output of a hierarchical clustering process can be visualised as a dendrogram that can help you identify the relationship between different clusters and how similar or different they are in nature. 

Types of Hierarchical Clustering

A hierarchical clustering algorithm can be used in two different ways. Here are the characteristics of two types of hierarchical clustering that you can use. 

1. Agglomerative Hierarchical Clustering 

The agglomerative method is the more popularly used way of hierarchically clustering data. In this method, the algorithm is presented with multiple data points, each of which is treated as a cluster of its own. The algorithm then repeatedly combines the two most similar clusters into one. It repeats these steps until the required number of clusters is reached, or until everything has merged into a single cluster. 

2. Divisive Hierarchical Clustering 

The divisive method of hierarchical clustering is the reverse of the agglomerative method. In this method, the algorithm is presented with a single large cluster containing all the data points, which it splits step by step based on how dissimilar the points are. This results in multiple smaller clusters with different properties. The divisive method is not used often in practice. 
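Divisive clustering is not available as a ready-made estimator in scikit-learn, but the idea can be sketched by repeatedly splitting the largest cluster in two with k-means (a bisecting approach). The dataset, the helper function name and the stopping rule below are assumptions made purely for illustration.

Python

import numpy as np
from sklearn.cluster import KMeans

def divisive_clustering(X, n_clusters):
    """Very rough divisive sketch: repeatedly split the largest cluster in two."""
    clusters = [np.arange(len(X))]  # start with one cluster holding every point
    while len(clusters) < n_clusters:
        # Pick the largest current cluster and split it with 2-means
        largest = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        idx = clusters.pop(largest)
        halves = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[idx])
        clusters.append(idx[halves == 0])
        clusters.append(idx[halves == 1])
    return clusters

X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [6, 5], [7, 5]])
for i, members in enumerate(divisive_clustering(X, 3)):
    print(f"cluster {i}: observations {members}")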


Steps in Hierarchical Clustering

As mentioned before, there are three main steps in the hierarchical clustering of data, which are sketched in code after the list below. 

  1. Identify the two most similar data points (or clusters). 
  2. Merge them into one cluster. 
  3. Repeat these steps until all the data points have been merged into one large cluster. 
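As a rough illustration of these three steps, the toy sketch below (not a production implementation) repeatedly finds the two closest clusters, merges them, and keeps going until only one cluster is left. The tiny dataset and the centroid-distance similarity measure are assumptions for illustration.

Python

import numpy as np

# Toy dataset (assumption, for illustration only)
points = np.array([[1.0, 1.0], [2.0, 1.0], [4.0, 3.0], [5.0, 4.0]])

# Step 0: every point starts as its own cluster
clusters = [[i] for i in range(len(points))]

while len(clusters) > 1:
    # Step 1: find the two most similar (closest) clusters,
    # using the distance between cluster centroids as the similarity measure
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            ci = points[clusters[i]].mean(axis=0)
            cj = points[clusters[j]].mean(axis=0)
            d = np.linalg.norm(ci - cj)
            if best is None or d < best[0]:
                best = (d, i, j)
    # Step 2: merge the closest pair into one cluster
    _, i, j = best
    merged = clusters[i] + clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    # Step 3: repeat until everything is in one large cluster
    print(clusters)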

However, it is also very important to understand how similar points are identified in hierarchical clustering. If you study a dendrogram produced by an algorithm, you can easily identify the central points of each cluster. The clusters that are joined at the smallest distance in the dendrogram are the most similar, which is why hierarchical clustering is also referred to as a distance-based algorithm. The pairwise distances between every cluster and all the other clusters are stored in what is called a proximity matrix. 

You also have to choose the correct distance measure while using hierarchical clustering. For example, a data set describing the same group of people will produce different dendrograms depending on whether the distance is computed from attributes such as gender or educational background. 
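To see how the choice of distance measure changes the outcome, the sketch below builds the proximity information for the same data with two different metrics, Euclidean and Manhattan (‘cityblock’); the data and the ‘average’ linkage choice are assumptions used only to show that the merge order, and therefore the dendrogram, can differ.

Python

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

# Toy dataset (assumption, for illustration only)
data = np.array([[1, 1], [2, 5], [6, 2], [7, 6]])

# The proximity matrix depends on the distance measure you pick
for metric in ('euclidean', 'cityblock'):
    distances = pdist(data, metric=metric)    # condensed pairwise-distance matrix
    Z = linkage(distances, method='average')  # merge order built from those distances
    print(metric, '\n', Z)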


Hierarchical Clustering in Python

Now that you have a clear understanding of hierarchical clustering, let us look at how to perform hierarchical clustering in Python. Here is what performing hierarchical clustering would look like using Python’s ‘scikit-learn’ library. 

Let us suppose that there are two variables (x and y) in a dataset with six observations: 

Observation   x   y
1             1   1
2             2   1
3             4   3
4             5   4
5             6   5
6             7   5

As a scatter plot, this is how these observations will be visualised: 

Python 

import numpy as np
import matplotlib.pyplot as plt

# Define the dataset
X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [6, 5], [7, 5]])

# Plot the data
plt.scatter(X[:, 0], X[:, 1])
plt.show()

There are two clusters of observations in this plot: one includes lower values of x and y, and the other includes higher values of x and y. 

You can use ‘scikit-learn’ to perform hierarchical clustering on this dataset.



Out of the two main methods of hierarchical clustering that we have discussed before, we will use the agglomerative clustering method with the ‘ward’ linkage method. The ‘ward’ method merges the pair of clusters that leads to the smallest increase in total within-cluster variance, and therefore tends to produce compact clusters of similar size. 


Python 

from sklearn.cluster import AgglomerativeClustering

# Perform hierarchical clustering
clustering = AgglomerativeClustering(n_clusters=2, linkage='ward').fit(X)

The ‘n_clusters’ parameter is used here to specify that we want two clusters. 

We can use different colours for each cluster when we plot them:

Python

# Plot the clusters, using a different colour for each label
colors = np.array(['r', 'b'])
plt.scatter(X[:, 0], X[:, 1], c=colors[clustering.labels_])
plt.show()

The two clusters in the data have been correctly identified by the clustering algorithm. You can also check which label the clustering algorithm has assigned to each observation: 

Python

print(clustering.labels_)

Output

[0 0 1 1 1 1]

The last four observations were assigned to cluster 1, while the first two were assigned to cluster 0. 

If you want to visualise the hierarchical structure of these clusters, you can generate a dendrogram to do so: 

Python

from scipy.cluster.hierarchy import dendrogram, linkage

# Compute the linkage matrix
Z = linkage(X, 'ward')

# Plot the dendrogram
dendrogram(Z)
plt.show()

The dendrogram can help us visualise the hierarchy of merged clusters. 
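As an optional follow-up experiment (not part of the walkthrough above), you could swap the ‘ward’ linkage for another method, such as ‘complete’ or ‘single’, and compare the resulting labels. The snippet below assumes the same X defined earlier in this article.

Python

from sklearn.cluster import AgglomerativeClustering

# Compare cluster assignments under different linkage strategies
for method in ('ward', 'complete', 'average', 'single'):
    model = AgglomerativeClustering(n_clusters=2, linkage=method).fit(X)
    print(method, model.labels_)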


Conclusion 

Data clustering is a very important part of data science and data analysis. If you want to learn different clustering methods, then upGrad can help you kickstart your learning journey! With the aid of master classes, industry sessions, mentorship sessions, Python Programming Bootcamp, and live learning sessions, upGrad’s Master of Science in Data Science is a course designed for professionals to gain an edge over competitors. 

Offered under the guidance of the University of Arizona, this course boosts your data science career with a cutting-edge curriculum, immersive learning experience with industry experts and job opportunities.


Rohit Sharma

Blog Author
Rohit Sharma is the Program Director for the upGrad-IIIT Bangalore PG Diploma in Data Analytics Program.

Frequently Asked Questions (FAQs)

Q1. Why do we do hierarchical clustering in data science?

Hierarchical clustering is used to group data based on various similar attributes. Organising the data into visually comprehensible groups makes the results easier to apply in practice, since they can be read directly from the dendrogram.

Q2. What is hierarchical clustering used for?

Hierarchical clustering is widely used to group the data generated through social networking sites. Using this data, analysts can gain valuable insights that help them enhance their business processes and improve revenue.

Q3. What are the limitations of hierarchical clustering?

Hierarchical clustering does not handle mixed data types or missing data well. Another limitation of hierarchical clustering is that it does not perform well on extensively large datasets.
