# Understanding the Concept of Hierarchical Clustering in Data Analysis: Functions, Types & Steps

Last updated: 8th Apr, 2023

Clustering refers to the grouping of similar data points into groups, or clusters, in data analysis. These clusters help data analysts organise similar data points into one group while also differentiating them from dissimilar data.

Hierarchical clustering of data is one of the methods used to group data into a tree of clusters. It is one of the most popular and useful approaches to data grouping. If you want to be a part of the growing field of data science and data analysis, hierarchical clustering is one of the most important things to learn.

## What is Hierarchical Clustering?

As the name suggests, hierarchical clustering groups data into clusters in a hierarchical or tree format. Every data point is initially treated as a separate cluster in this method. Hierarchical cluster analysis is very popular amongst data scientists and data analysts, as it summarises the data into a manageable hierarchy of clusters that is easier to analyse.

The hierarchical clustering algorithm takes multiple data points and merges the two closest ones into a cluster. It repeats this step until all the data points have been combined into a single cluster. The process can also be inverted to divide one single merged cluster into smaller clusters and, ultimately, into individual data points.

The hierarchical method of clustering can be visually represented as a dendrogram, a tree-like diagram. A dendrogram can be cut off at any point during the clustering process, once the desired number of clusters has been reached. This also makes the process of analysing the data easier.

## How does Hierarchical Clustering work?

The process of hierarchical clustering is quite simple to understand. A hierarchical clustering algorithm first treats every available data point as a separate cluster. Then, it identifies the two clusters that are the most similar and merges them into one. The system keeps repeating these steps until all the data points merge into one large cluster, or the process can be stopped once the required number of clusters is available for analysis.

The progress and output of a hierarchical clustering process can be visualised as a dendrogram that can help you identify the relationship between different clusters and how similar or different they are in nature.
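The merge sequence described above can be inspected directly with SciPy's `linkage` function. Below is a minimal sketch using the six 2-D points from this article's worked example; the single-linkage choice here is an illustrative assumption:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Six 2-D points, each initially its own singleton cluster
X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [6, 5], [7, 5]])

# Each row of the linkage matrix records one merge:
# [cluster_i, cluster_j, merge distance, size of the new cluster]
Z = linkage(X, method='single')
print(Z)
```

The last row always reports a cluster size equal to the total number of observations, because the process ends when everything has merged into a single cluster.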

## Types of Hierarchical Clustering

A hierarchical clustering algorithm can be used in two different ways. Here are the characteristics of two types of hierarchical clustering that you can use.

### 1. Agglomerative Hierarchical Clustering

The agglomerative method is the more popular way of hierarchically clustering data. In this method, the algorithm starts with multiple data points, each treated as a cluster of its own. It then repeatedly merges the two most similar clusters until the required number of clusters is reached.

### 2. Divisive Hierarchical Clustering

The divisive method of hierarchical clustering is the reverse of the agglomerative method. In this method, the algorithm starts with a single large cluster containing all the data points, which it splits step by step based on dissimilarity. This results in multiple smaller clusters with different properties. The divisive method is rarely used in practice.
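Divisive clustering is often implemented by recursively bisecting clusters. A single split step can be sketched with 2-means as the splitting rule (an assumption for illustration; other splitting criteria exist):

```python
import numpy as np
from sklearn.cluster import KMeans

def divisive_step(points):
    """Split one cluster into two sub-clusters, using 2-means as the splitting rule."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
    return km.labels_

# Start with one large cluster of six points and perform the first split
X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [6, 5], [7, 5]])
labels = divisive_step(X)
print(labels)
```

A full divisive algorithm would apply `divisive_step` recursively to each resulting sub-cluster until every point stands alone, or a stopping criterion is met.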


## Steps in Hierarchical Clustering

As described above, there are three main steps in the hierarchical clustering of data.

1. Identify the two most similar data points or clusters.
2. Merge them into one cluster.
3. Repeat these steps until all data points are merged into one large cluster (or the desired number of clusters remains).
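The three steps above can be sketched as a naive single-linkage implementation (illustrative only; real libraries use far more efficient algorithms):

```python
import numpy as np

def naive_agglomerative(points, n_clusters=1):
    """Minimal single-linkage agglomerative clustering sketch."""
    points = np.asarray(points, dtype=float)
    # Start: every point is its own cluster
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        # Step 1: find the two closest clusters
        # (single linkage: the smallest pairwise point distance)
        best = (0, 1, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        # Step 2: merge them into one cluster
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
        # Step 3: repeat until the target number of clusters remains
    return clusters

print(naive_agglomerative([[1, 1], [2, 1], [4, 3], [5, 4], [6, 5], [7, 5]],
                          n_clusters=2))
```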

However, it is also very important to understand how similar points are identified in hierarchical clustering. If you study a dendrogram produced by an algorithm, you can easily identify the central points of each cluster. The clusters separated by the smallest distance in the dendrogram are the most similar, which is why hierarchical clustering is also referred to as a distance-based algorithm. The pairwise distances between all clusters are stored in what is called a proximity matrix.

You also have to choose an appropriate distance measure when using hierarchical clustering. For example, a dataset containing information about the same people will produce different dendrograms depending on which attributes (such as gender or educational background) the distances are computed over, and which distance measure you use.
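The effect of the distance measure can be seen numerically. The sketch below uses hypothetical 2-D points (standing in for attributes such as those mentioned above) and compares Euclidean with Manhattan distances:

```python
import numpy as np
from scipy.spatial.distance import pdist

X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [6, 5], [7, 5]])

# pdist returns the condensed matrix of pairwise distances;
# the same pairs of points come out closer or farther apart
# depending on the metric you choose.
euclidean = pdist(X, metric='euclidean')
manhattan = pdist(X, metric='cityblock')
print(euclidean[:3])
print(manhattan[:3])
```

Since the linkage step works purely on these distances, changing the metric can change which clusters merge first, and therefore the shape of the dendrogram.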


## Hierarchical Clustering Python

Now that you have a clear understanding of hierarchical clustering, let us look at how to perform it in Python. Here is what hierarchical clustering looks like using Python’s ‘scikit-learn’ and ‘SciPy’ libraries.

Let us suppose that there are two variables (x and y) in a dataset with six observations:

| Observation | x | y |
|-------------|---|---|
| 1           | 1 | 1 |
| 2           | 2 | 1 |
| 3           | 4 | 3 |
| 4           | 5 | 4 |
| 5           | 6 | 5 |
| 6           | 7 | 5 |

As a scatter plot, this is how these observations will be visualised:

```python
import numpy as np
import matplotlib.pyplot as plt

# Define the dataset
X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [6, 5], [7, 5]])

# Plot the data
plt.scatter(X[:, 0], X[:, 1])
plt.show()
```

There are two clusters of observations in this plot: one with lower values of x and y, and the other with higher values of x and y.

You can use ‘scikit learn’ to perform hierarchical clustering on this dataset.



Out of the two main methods of hierarchical clustering discussed above, we will use the agglomerative method with the ‘ward’ linkage method. The ‘ward’ method minimises the increase in total within-cluster variance at each merge, and therefore tends to produce compact clusters of similar size.
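To see how the linkage choice changes the tree, you can compare the merge heights SciPy computes under Ward’s criterion with those under single linkage (a sketch using the same six points as this article’s example):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [6, 5], [7, 5]])

# The linkage method changes the merge heights (third column of the
# linkage matrix), and therefore where the dendrogram is naturally cut.
ward = linkage(X, method='ward')
single = linkage(X, method='single')
print(ward[:, 2])    # merge distances under Ward's criterion
print(single[:, 2])  # merge distances under single linkage
```

Ward’s merge heights grow faster than single linkage’s, which makes well-separated groups stand out more clearly in the dendrogram.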


```python
from sklearn.cluster import AgglomerativeClustering

# Perform hierarchical clustering with two clusters and Ward linkage
clustering = AgglomerativeClustering(n_clusters=2, linkage='ward').fit(X)
```

The `n_clusters` parameter is used here to specify that we want two clusters.

We can use different colours for each cluster when we plot them:

```python
# Plot the clusters
colors = np.array(['r', 'b'])
plt.scatter(X[:, 0], X[:, 1], c=colors[clustering.labels_])
plt.show()
```

The two clusters in the data have been correctly identified by the clustering algorithm. You can also check which label the clustering algorithm has assigned to each observation:

```python
print(clustering.labels_)
```

Output:

```
[0 0 1 1 1 1]
```

The last four observations were assigned to cluster 1, while the first two were assigned to cluster 0.

If you want to visualise the hierarchical structure of these clusters, you can generate a dendrogram to do so:

```python
from scipy.cluster.hierarchy import dendrogram, linkage

# Compute the linkage matrix and plot the dendrogram
Z = linkage(X, method='ward')
dendrogram(Z)
plt.show()
```

The dendrogram can help us visualise the hierarchy of merged clusters.
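Cutting the dendrogram at a chosen level can also be done programmatically with SciPy’s `fcluster` (a sketch assuming the same six-point dataset and Ward linkage as above):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [6, 5], [7, 5]])
Z = linkage(X, method='ward')

# "Cut" the dendrogram so that exactly two flat clusters remain
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)
```

This mirrors what cutting the tree by eye achieves: the first two observations end up in one flat cluster and the remaining four in the other.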


## Conclusion

Data clustering is a very important part of data science and data analysis. If you want to learn different clustering methods, then upGrad can help you kickstart your learning journey! With the aid of master classes, industry sessions, mentorship sessions, Python Programming Bootcamp, and live learning sessions, upGrad’s Master of Science in Data Science is a course designed for professionals to gain an edge over competitors.

Offered under the guidance of the University of Arizona, this course boosts your data science career with a cutting-edge curriculum, immersive learning experience with industry experts and job opportunities.

#### Rohit Sharma

Blog Author
Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.

#### Frequently Asked Questions

Q1. Why do we use hierarchical clustering in data science?

Hierarchical clustering is used to group data based on shared attributes. Organising data points into visually comprehensible groups simplifies analysis, since the relationships between clusters can be read directly from the dendrogram.

Q2. What is hierarchical clustering used for?

Hierarchical clustering is widely used to group data generated through social networking sites. Using this data, analysts can extract valuable insights to enhance their business processes and improve revenue.

Q3. What are the limitations of hierarchical clustering?

Hierarchical clustering does not handle mixed data types or missing data well. Another limitation is that it does not perform well on very large datasets.
