Did you know? Hierarchical clustering analysis has high computational costs. While using a heap can reduce computation time, memory requirements are increased. Both the divisive and agglomerative types of clustering are "greedy," meaning that the algorithm decides which clusters to merge or split by making the locally optimal choice at each stage of the process.
Hierarchical clustering is an unsupervised machine learning algorithm used to group similar data points into clusters. It creates a hierarchy of clusters by either merging small clusters into larger ones (agglomerative) or splitting large clusters into smaller ones (divisive). This method is particularly useful for uncovering complex data relationships.
In this blog, we will dive into the core concepts of hierarchical clustering, explore the differences between agglomerative and divisive types, and examine the algorithms used to implement them. By the end, you’ll have a clear understanding of how hierarchical clustering can enhance your machine learning models.
Explore upGrad's online AI and ML courses to enhance your machine learning skills, master clustering techniques, and improve data analysis.
Hierarchical clustering is a machine learning technique used to group similar data points into clusters, forming a tree-like structure called a dendrogram. Unlike other clustering methods, it does not require you to specify the number of clusters in advance.
This method builds clusters step-by-step, either by merging smaller clusters or splitting larger ones, depending on the approach. In this section, we will dive deeper into how hierarchical clustering works, its types, and when to use it effectively.
Machine learning professionals with expertise in clustering techniques, including hierarchical clustering, are in high demand for their ability to analyze and organize complex data. If you're looking to master clustering methods and advance your skills in machine learning, consider these top-rated courses:
Let us examine how hierarchical clustering fits into unsupervised learning and how it differs from other clustering methods.
Hierarchical clustering excels in unsupervised learning by identifying natural groupings in data without the need for labels. It focuses on revealing hidden patterns based on data similarity, allowing you to explore the structure of your dataset without predefined categories. This section will delve into the specifics of how hierarchical clustering operates, its two key types, and practical applications to guide your analysis.
Also Read: Difference Between Supervised and Unsupervised Learning
Hierarchical clustering differs from other clustering techniques like k-means and DBSCAN in its approach to forming clusters. Unlike k-means, which requires you to define the number of clusters in advance, hierarchical clustering builds a tree-like structure of nested clusters. It also does not rely on the density of data points like DBSCAN, making it more flexible for certain types of data. In this section, we’ll compare hierarchical clustering with other popular methods to help you decide when to use each one.
Feature | K-Means Clustering | Hierarchical Clustering |
How It Works? | - Divides data into a fixed number of clusters (k). - Starts with random centroids. - Iteratively reassigns points and updates centroids. | - Builds a tree (dendrogram) through merging (agglomerative) or splitting (divisive) clusters. - No need to specify the number of clusters upfront. |
When to Use? | - Efficient for large datasets. - When the number of clusters is known. - Useful when speed is needed. | - Best for smaller or complex datasets. - When exploring cluster hierarchy. - Suitable when the number of clusters is unknown. |
Strengths | - Fast and scalable. - Works well for spherical clusters. | - Provides hierarchical insights into data. - No need to define number of clusters. |
Limitations | - Assumes clusters are spherical and evenly sized. - Sensitive to initial centroid positions. | - Computationally expensive for large datasets. - Dendrograms become hard to interpret at large scale. |
Feature | Hierarchical Clustering | DBSCAN |
How It Works? | - Groups data by merging/splitting based on similarity. - Constructs a hierarchy of clusters. | - Clusters based on density of data points. - Points in low-density regions are marked as outliers. |
When to Use? | - When exploring multi-level cluster structures. - Helpful for visualizing cluster hierarchies. | - Effective for irregular-shaped clusters. - Suitable for noisy and unevenly distributed data. |
Strengths | - Captures nested relationships among data points. | - Detects arbitrary shapes. - Handles outliers naturally. |
Limitations | - Not ideal for large datasets (high time complexity). | - Requires tuning parameters (eps, minPts). - Struggles with varying density. |
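To make the comparison concrete, here is a minimal, illustrative sketch that runs all three algorithms on the same toy 2-D dataset. The data values and parameters (k=2 for K-Means, eps=1.5 and min_samples=2 for DBSCAN) are arbitrary choices for demonstration, not recommendations.
# Illustrative comparison of K-Means, Agglomerative Clustering, and DBSCAN
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN

X = np.array([[1, 2], [1.5, 1.8], [1, 0.6], [8, 8], [8.5, 8.2], [25, 80]])

# K-Means: the number of clusters must be chosen up front.
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(X)

# Agglomerative: builds a merge tree; n_clusters simply cuts that tree.
agglo_labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)

# DBSCAN: no cluster count; eps and min_samples control density, -1 marks noise.
dbscan_labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)

print("K-Means:      ", kmeans_labels)
print("Agglomerative:", agglo_labels)
print("DBSCAN:       ", dbscan_labels)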
upGrad's free Unsupervised Learning: Clustering course will teach you more about clustering techniques. Explore K-Means, Hierarchical Clustering, and practical applications to uncover hidden patterns in unlabelled data.
Also Read: Clustering in Machine Learning: Learn About Different Techniques and Applications
Now that you have understood how hierarchical clustering differs from other clustering methods, let's explore hierarchical clustering algorithms and how they work.
Hierarchical clustering works by calculating the similarity or distance between data points or clusters and then progressively merging or splitting them based on this distance. The algorithm continues until all data points are grouped into one cluster or a specified number of clusters is reached.
Here's how it works:
1. Start with your data as clusters: either one cluster per data point (agglomerative) or a single cluster holding everything (divisive).
2. Compute the similarity or distance between clusters.
3. Merge the two closest clusters, or split the least cohesive cluster, depending on the approach.
4. Repeat until all points sit in one cluster (or the data is fully split), which produces the dendrogram.
Now, let’s break down each step in detail to understand how hierarchical clustering builds its tree structure.
In hierarchical clustering, the agglomerative and divisive approaches represent two opposite strategies for forming clusters. Agglomerative starts with individual data points and progressively merges them into larger clusters, while divisive begins with the entire dataset as one cluster and splits it into smaller ones.
This section will explore the key differences between these approaches, their advantages, and when to use each one for your analysis.
Why it matters: The approach you choose directly affects how your clusters are formed. If your data is naturally hierarchical, agglomerative might give you a smoother progression. Divisive, on the other hand, gives a clearer, top-down separation. However, the time complexity plays a role here—agglomerative clustering generally has a time complexity of O(n^3) due to the need to calculate pairwise distances at each step, making it computationally expensive for large datasets.
Divisive clustering can scale better when heuristic splits (such as repeated two-way k-means) are used, but an exhaustive search over all possible splits grows exponentially, so it is also resource-intensive in how it splits clusters at each stage.
Why it matters: If your dataset is large or time-sensitive, the time complexity of these methods becomes crucial. Agglomerative clustering might be the go-to for smaller datasets, but divisive clustering could be more effective for datasets where a defined split is needed early on. Understanding how time complexity affects performance helps you select the method that balances speed with accuracy.
Why it matters: In real-world scenarios, noisy data can distort your results. If your dataset is noisy or includes outliers that you don’t want affecting the clusters, divisive clustering may help better isolate these points. However, agglomerative clustering’s tendency to merge smaller groups quickly can lead to less clean segmentation unless pre-processing steps are taken.
Why it matters: The choice of approach impacts the separation and cohesion of your clusters. Divisive clustering’s top-down method creates clearer divisions, while agglomerative clustering’s gradual merging forms more natural, tightly-knit groups. Keep in mind that divisive clustering, with its distinct separations, may require more detailed calculations and take more time.
Why it matters: Depending on your business needs, you might prefer the organic growth of clusters in agglomerative clustering or the clear-cut divisions of divisive clustering. When considering time complexity, though, remember that large datasets will be computationally expensive for agglomerative clustering, which might affect real-time customer segmentation.
Why it matters: The right approach helps you create meaningful, actionable insights. Agglomerative clustering offers flexibility and nuance in clustering, but divisive clustering provides clarity and speed in separating large, diverse datasets.
In this section, you’ll see a conceptual example comparing the agglomerative and divisive approaches in hierarchical clustering. You’ll understand how agglomerative clustering builds clusters from the bottom up by merging smaller groups, while divisive clustering works from the top down, starting with one large group and splitting it into smaller ones.
This example will help you visualize the practical differences and guide you in choosing the right approach for your data.
Approach | Agglomerative Clustering | Divisive Clustering |
Initial State | Each data point starts as its own individual cluster. | All data points begin as a single large cluster. |
Building Process | Clusters are merged progressively based on similarity or distance. | The largest cluster is recursively split into smaller clusters. |
Cluster Growth | Builds the cluster tree by progressively merging closer clusters. | Builds the cluster tree by progressively dividing clusters. |
Final Outcome | Results in one final cluster containing all data points. | If run to completion, results in each data point forming its own cluster. |
Method Type | Bottom-up approach (merging). | Top-down approach (splitting). |
Common Use Case | Ideal for exploratory data analysis where the number of clusters is unknown. | Best used when it is known that the data naturally splits into distinct groups. |
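Since scikit-learn does not ship a dedicated divisive hierarchical estimator, the top-down strategy is often approximated with repeated two-way splits (similar in spirit to bisecting k-means). The sketch below is only an illustration of that idea; the dataset, the target of three clusters, and the helper name divisive_clustering are all assumptions made for the example.
# A rough sketch of divisive (top-down) clustering via repeated two-way k-means splits
import numpy as np
from sklearn.cluster import KMeans

def divisive_clustering(X, n_clusters=3, random_state=42):
    # Start with every point in one big cluster (indices into X).
    clusters = [np.arange(len(X))]
    while len(clusters) < n_clusters:
        # Pick the largest current cluster and split it into two.
        largest = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        idx = clusters.pop(largest)
        labels = KMeans(n_clusters=2, n_init=10,
                        random_state=random_state).fit_predict(X[idx])
        clusters.append(idx[labels == 0])
        clusters.append(idx[labels == 1])
    return clusters

X = np.array([[1, 1], [1.2, 0.8], [5, 5], [5.5, 5.2], [9, 1], [9.2, 1.1]])
for i, idx in enumerate(divisive_clustering(X, n_clusters=3)):
    print(f"Cluster {i}: points {idx.tolist()}")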
Now that we’ve compared Agglomerative and Divisive clustering approaches, let's focus on the step-by-step process of performing Agglomerative Hierarchical Clustering. This will provide a clear, practical guide for applying this method to your data.
To perform agglomerative hierarchical clustering step by step, you start by calculating the distance between each data point. Then, you progressively merge the closest pairs into clusters, repeating this process until all points are grouped into one. This guide will walk you through each stage, providing clarity on how to apply this method effectively using available tools.
Step 1: Start with each data point as its own cluster
At the beginning, every data point in the dataset is treated as its own cluster. If you have a dataset with 10 points, you will have 10 clusters to begin with.
Step 2: Compute the pairwise distances between all clusters
To determine which clusters to merge, you need to calculate the distance between all pairs of clusters. The most common distance metric used is Euclidean distance, though others like Manhattan or Cosine distance can be used depending on the problem.
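As a quick illustration with arbitrary sample points, SciPy provides ready-made functions for each of these metrics:
# Computing the three distance metrics mentioned above with SciPy
from scipy.spatial.distance import euclidean, cityblock, cosine

a, b = [1, 2, 3], [4, 6, 3]

print("Euclidean:", euclidean(a, b))   # straight-line distance: 5.0
print("Manhattan:", cityblock(a, b))   # sum of absolute differences: 7
print("Cosine:   ", cosine(a, b))      # 1 minus the cosine similarity of the vectors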
Step 3: Merge the two closest clusters
Once the pairwise distances are computed, you merge the two clusters with the smallest distance, reducing the number of clusters by one.
Also Read: What is Cluster Analysis in Data Mining? Methods, Benefits, and More
Step 4: Update the distance matrix
After merging the two clusters, the next step is to update the distance matrix, which now includes the newly formed cluster. The distance between the new cluster and other clusters needs to be calculated. This is typically done using one of several linkage methods: single linkage, complete linkage, average linkage, or Ward's method, each of which is described later in this article.
Step 5: Repeat until one cluster remains
The process of calculating pairwise distances, merging the closest clusters, and updating the distance matrix is repeated iteratively until all the data points are merged into a single cluster or until a desired number of clusters is reached.
For example, suppose you start with five points, each in its own cluster: {A}, {B}, {C}, {D}, {E}. You would calculate the distances between these clusters, merge the closest pair, and continue until you are left with a single cluster: {A, B, C, D, E}
Summary of Agglomerative Hierarchical Clustering Steps:
1. Start with each data point as its own cluster.
2. Compute the pairwise distances between all clusters.
3. Merge the two closest clusters.
4. Update the distance matrix using the chosen linkage method.
5. Repeat until a single cluster (or the desired number of clusters) remains.
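To make these steps concrete, here is a minimal sketch that carries them out by hand using naive single linkage. The five labelled points are made-up values chosen only to mirror the {A, B, C, D, E} example above; a real project would use a library implementation instead.
# Naive agglomerative clustering (single linkage), printing each merge
import numpy as np

points = {"A": (1, 1), "B": (1.5, 1.2), "C": (5, 5), "D": (5.2, 4.8), "E": (9, 9)}

# Step 1: every point starts as its own cluster.
clusters = {name: [name] for name in points}

def single_linkage(c1, c2):
    # Steps 2 and 4: cluster distance = distance between the closest pair of members.
    return min(
        np.linalg.norm(np.subtract(points[p], points[q]))
        for p in c1 for q in c2
    )

# Step 5: keep merging until one cluster remains.
while len(clusters) > 1:
    # Step 3: find the two closest clusters and merge them.
    k1, k2 = min(
        ((a, b) for a in clusters for b in clusters if a < b),
        key=lambda pair: single_linkage(clusters[pair[0]], clusters[pair[1]]),
    )
    print(f"Merging {clusters[k1]} and {clusters[k2]}")
    clusters[k1] = clusters[k1] + clusters.pop(k2)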
Ready to take your data science skills to the next level? The upGrad's Data Science Master’s Degree offers a comprehensive pathway to mastering key concepts like agglomerative hierarchical clustering. Enroll today to deepen your expertise and unlock new opportunities in the field of data science!
Also Read: What is Clustering in Machine Learning and Different Types of Clustering Methods
Now that you know what agglomerative clustering is and how it differs from the divisive approach, let's examine the types of hierarchical clustering and their linkage methods.
In hierarchical clustering, you can choose between two main types: agglomerative (bottom-up) and divisive (top-down). Each type relies on linkage methods such as single, complete, average, and Ward’s method to determine how clusters are formed based on distance between data points or clusters.
Understanding these methods helps you select the most appropriate approach for your dataset. The table below outlines these types and their key characteristics.
Here’s a summary of the different distance metrics and linkage methods used in hierarchical clustering:
Aspect | Description |
Distance Calculation | Determines how distance between data points or clusters is measured. Common metrics include: |
Euclidean Distance | Measures the straight-line distance between two points in space. |
Manhattan Distance | Measures the distance between points along axes at right angles (sum of absolute differences). |
Cosine Distance | Measures the cosine of the angle between two vectors, often used in text mining and document clustering. |
Linkage Methods | Defines how distances between clusters are computed during the merging or splitting process: |
Single Linkage | Measures the distance between the closest points of two clusters. |
Complete Linkage | Measures the distance between the farthest points of two clusters. |
Average Linkage | Takes the average distance between all pairs of points in the two clusters. |
Ward's Linkage | Minimizes the variance within clusters by merging clusters that lead to the smallest increase in total variance. |
Iterative Process | The process of merging or splitting continues until a predefined stopping criterion, like the desired number of clusters, is reached. |
Now, let’s look at the different linkage methods and how each one affects the clustering process.
Linkage criteria determine how the distance between clusters is calculated during hierarchical clustering. Common methods include single, complete, and average linkage, each measuring distances differently to influence how clusters are merged. This section will explore these criteria, helping you understand their impact on the final clustering result and when to use each one.
Below are the most commonly used linkage methods.
1. Single Linkage
Single linkage (also known as nearest point linkage) calculates the distance between two clusters based on the closest pair of points between them. In essence, it uses the minimum distance between clusters to decide whether to merge them.
Single linkage is prone to producing "chained" clusters, where data points are only loosely connected and form elongated, irregular shapes. Because of this chaining effect, a single point bridging two otherwise distant clusters can cause them to merge unexpectedly.
2. Complete Linkage
Complete linkage (or farthest point linkage) calculates the distance between two clusters based on the furthest pair of points, one from each cluster. This results in more compact clusters than single linkage because it's less sensitive to individual outliers. Complete linkage ensures that all points within a cluster are close to each other, which helps in avoiding the formation of elongated clusters.
3. Average Linkage
Average linkage calculates the distance between two clusters by averaging the pairwise distances between all points in the first cluster and all points in the second cluster. This method strikes a middle ground between single and complete linkage and is useful when you expect clusters to be fairly compact, but with some variation in their shape.
4. Ward's Method
Ward's method minimizes the within-cluster variance by merging clusters that result in the smallest increase in the total variance of the data. It is often considered the most effective linkage method when you expect spherical clusters because it leads to balanced and tight clusters. Unlike the other methods, Ward's method is based on minimizing the variance rather than simply measuring distances.
The algorithm calculates the total sum of squared differences (variance) within each cluster and merges the two clusters that result in the least increase in this variance.
Euclidean Distance Formula:
Ward's method typically relies on the Euclidean distance to measure the similarity between clusters. Euclidean distance is the most common way to calculate distance in continuous space, and it's used to determine how "close" or "distant" data points or clusters are.
d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}
Where x_i and y_i are the coordinates of the points in an n-dimensional space.
Ward's method uses this distance to calculate the initial "distance" between clusters, and as clusters are merged, the centroid of the new cluster is recalculated.
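To see the criterion in action, the small sketch below computes the increase in within-cluster sum of squares (SSE) that a candidate merge would cause; the two toy clusters are arbitrary values used only for illustration.
# Ward's criterion: the "cost" of a merge is the increase in total within-cluster SSE
import numpy as np

def sse(cluster):
    # Sum of squared Euclidean distances from each point to the cluster centroid.
    centroid = cluster.mean(axis=0)
    return float(((cluster - centroid) ** 2).sum())

cluster_a = np.array([[1.0, 2.0], [2.0, 2.0], [1.5, 1.0]])
cluster_b = np.array([[8.0, 8.0], [9.0, 8.5]])

merged = np.vstack([cluster_a, cluster_b])
increase = sse(merged) - (sse(cluster_a) + sse(cluster_b))
print("SSE increase if merged:", round(increase, 2))

# Ward's method evaluates this increase for every candidate pair of clusters
# and merges the pair with the smallest increase.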
Choosing the right linkage criteria depends on the structure of your data and the type of clusters you want to form. For instance, single linkage works well with elongated clusters, while complete linkage is better for compact, well-separated clusters. This section will guide you in selecting the most appropriate linkage method based on the characteristics of your dataset and clustering goals.
Below is a guide to help you select the appropriate linkage based on your data's structure:
Linkage Method | When to Use | Example Use Case |
Single Linkage | Ideal for data that forms long, chain-like clusters or non-spherical shapes. | Gene expression data, where a biological pathway may link different genes. |
Complete Linkage | Best for compact, spherical clusters where data points within clusters are tightly grouped. | Customer segmentation, where you want distinct, well-separated customer groups. |
Average Linkage | Suitable for general-purpose clustering when data doesn’t fit into compact or chain-like structures. | Document clustering, grouping texts based on topic similarity. |
Ward's Method | Ideal for data that forms tight, spherical clusters with minimal internal variance. | Image segmentation or market research, where you expect clearly defined, compact groups. |
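If you are unsure which linkage suits your data, one practical check is to build the merge tree with each method on the same data and compare the resulting dendrograms side by side. The sketch below does exactly that on a toy dataset; the data values are arbitrary.
# Comparing the four linkage methods visually on the same data
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

X = np.array([[1, 2], [2, 2], [5, 6], [6, 6], [9, 1], [9, 2]])

methods = ["single", "complete", "average", "ward"]
fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, method in zip(axes, methods):
    Z = linkage(X, method=method)   # build the merge tree for this linkage
    dendrogram(Z, ax=ax)            # draw it on its own subplot
    ax.set_title(f"{method} linkage")
plt.tight_layout()
plt.show()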
Are you a full-stack developer wanting to integrate AI into your Python programming workflow? upGrad's AI-Driven Full-Stack Development bootcamp can help you. You'll learn how to build AI-powered software using OpenAI, GitHub Copilot, Bolt AI & more.
Also Read: Hierarchical Clustering in Python [Concepts and Analysis]
Now that we've covered the types of hierarchical clustering and linkage methods, let's explore the tools and libraries to implement them in your machine learning projects.
To apply hierarchical clustering in machine learning, you can use libraries like Scikit-learn, SciPy, and Seaborn, each offering specific functions for building, visualizing, and analyzing clusters.
These tools support various linkage methods and distance metrics, making them adaptable to different use cases. The list below highlights the key features and use cases of each library.
Using Scikit-Learn and SciPy allows you to efficiently implement hierarchical clustering with just a few lines of code. Scikit-Learn provides a user-friendly interface for clustering tasks, while SciPy offers advanced linkage methods and distance calculations. This section will walk you through how to use these libraries to perform hierarchical clustering, making the process straightforward and accessible.
Sample Code Using Agglomerative Clustering and Linkage + Dendrogram
Here's a practical example of how to perform hierarchical clustering using Scikit-learn for the clustering and SciPy for visualizing the results with a dendrogram.
# Import necessary libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram, linkage

# Sample data
X = np.array([[1, 2], [3, 3], [6, 5], [8, 8], [1, 1], [7, 6]])

# Agglomerative Clustering using Scikit-learn
# (Ward linkage always uses Euclidean distance, so no metric has to be specified)
agg_clust = AgglomerativeClustering(n_clusters=2, linkage='ward')
agg_clust.fit(X)
print("Cluster labels:", agg_clust.labels_)

# Plotting the Dendrogram using SciPy
Z = linkage(X, 'ward')
plt.figure(figsize=(10, 7))
dendrogram(Z)
plt.title("Dendrogram")
plt.xlabel("Data Points")
plt.ylabel("Euclidean Distance")
plt.show()
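Once the linkage matrix Z is available, you usually want flat cluster labels rather than the full tree. SciPy's fcluster can cut the dendrogram either at a distance threshold or into a chosen number of clusters, as in the sketch below; the threshold values used here are illustrative.
# Cutting the dendrogram into flat cluster labels with fcluster
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[1, 2], [3, 3], [6, 5], [8, 8], [1, 1], [7, 6]])
Z = linkage(X, 'ward')

labels_by_count = fcluster(Z, t=2, criterion='maxclust')   # exactly 2 clusters
labels_by_dist = fcluster(Z, t=5, criterion='distance')    # cut at height 5

print("Cut into 2 clusters:", labels_by_count)
print("Cut at distance 5:  ", labels_by_dist)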
Dendrograms are a powerful way to visualize the results of hierarchical clustering, showing how clusters are merged at each step. By plotting the hierarchical structure, you can easily determine the optimal number of clusters and understand the relationships between them. This section will guide you through the process of creating and interpreting dendrograms, helping you gain deeper insights into your clustering results.
Steps to Plot Dendrograms:
1. Compute the linkage matrix from your data with linkage(), choosing a linkage method such as 'ward'.
2. Pass the linkage matrix to dendrogram() to draw the merge tree.
3. Add a title and axis labels, then display the plot with plt.show().
Example of Interpreting a Dendrogram
Consider the following example where we visualize the hierarchical clustering results for a set of data points. By analyzing the dendrogram, we can determine the optimal number of clusters by identifying where the large vertical distances occur. If there is a large vertical gap between two clusters, it suggests a significant difference between them.
Role of Dendrograms in Agglomerative Hierarchical Clustering
Dendrograms play a key role in agglomerative hierarchical clustering by visually representing how clusters are merged at each step. They help you trace the progression of clustering and identify the most suitable point to cut the tree for optimal cluster formation. In this section, you’ll learn how to interpret dendrograms and use them to refine your clustering decisions.
How Dendrograms Help Determine the Optimal Number of Clusters?
The height at which two clusters merge reflects how dissimilar they are. Cutting the dendrogram just below the tallest vertical gap, where merges become expensive, gives a natural choice for the number of clusters.
Reading Vertical and Horizontal Relationships
The horizontal axis lists the data points (or clusters), while the vertical axis shows the distance at which each merge occurs; the taller the link, the more dissimilar the clusters it joins.
Using SciPy's dendrogram for Plotting
The scipy.cluster.hierarchy.dendrogram function allows you to plot the dendrogram easily. By passing the linkage matrix from the linkage function, you can generate a plot that shows the merging sequence of your data points.
Visualizing and Interpreting Cluster Hierarchies
Visualizing and interpreting cluster hierarchies helps you understand the relationships between data points at different levels of clustering. By examining dendrograms or other visual tools, you can easily identify natural groupings and determine the optimal number of clusters. This section will show you how to effectively analyze cluster hierarchies, enabling you to make data-driven decisions on cluster selection.
Example:
# Imports repeated from the previous example so this snippet runs on its own
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

# Sample dataset
X = np.array([[2, 4], [1, 1], [6, 8], [7, 5], [5, 4]])

# Compute linkage matrix
Z = linkage(X, 'ward')

# Plot a dendrogram
plt.figure(figsize=(10, 7))
dendrogram(Z)
plt.title("Dendrogram Example")
plt.xlabel("Data Points")
plt.ylabel("Linkage Distance")
plt.show()
Expected output: a dendrogram of the five sample points, where the height of each link shows the Ward linkage distance at which the corresponding clusters are merged.
Evaluating Hierarchical Clustering Output
Evaluating hierarchical clustering output allows you to assess the effectiveness of the clusters formed by your algorithm. You’ll look at metrics like silhouette score, cluster cohesion, and separation to determine how well the data points are grouped. In this section, we will dive into specific evaluation techniques, providing you with the tools to validate your clustering results.
Internal Validation Metrics
- Silhouette score: measures how similar each point is to its own cluster compared with the nearest neighbouring cluster (ranges from -1 to 1; higher is better).
- Cophenetic correlation coefficient: measures how faithfully the dendrogram preserves the original pairwise distances between points.
- Cohesion and separation: within-cluster compactness versus between-cluster distance, summarized by indices such as Davies-Bouldin and Calinski-Harabasz.
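As an illustration, the sketch below computes two of these checks on a toy dataset: the silhouette score via scikit-learn and the cophenetic correlation coefficient via SciPy. The data and the choice of two clusters are arbitrary assumptions for the example.
# Internal validation: silhouette score and cophenetic correlation
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

X = np.array([[1, 2], [2, 1], [1, 1], [8, 8], [9, 9], [8, 9]])

labels = AgglomerativeClustering(n_clusters=2, linkage='ward').fit_predict(X)
print("Silhouette score:", round(silhouette_score(X, labels), 3))

Z = linkage(X, 'ward')
coph_corr, _ = cophenet(Z, pdist(X))
print("Cophenetic correlation:", round(coph_corr, 3))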
Challenges in Evaluation
A major challenge in evaluating hierarchical clustering is that there is no ground truth in unsupervised learning. Since hierarchical clustering doesn't have predefined labels, you cannot directly compare the clusters against any correct answer.
Decoding what drives customer action can be easy with upGrad's free Introduction to Consumer Behavior course. You will explore the psychology behind purchase decisions, discover proven behavior models, and learn how leading brands influence buying habits.
Also Read: Understanding the Concept of Hierarchical Clustering in Data Analysis: Functions, Types & Steps
Now that we have understood the Tools and Libraries for Implementing Hierarchical Clustering in ML, let's explore hierarchical clustering in data mining applications.
You can apply hierarchical clustering in data mining to uncover hidden patterns in areas like customer segmentation, document classification, and gene expression analysis. Its ability to build nested clusters makes it suitable for exploring complex, multi-level data relationships. The examples below illustrate how it's used across different domains.
Let's explore some key applications of hierarchical clustering in real-world scenarios.
Customer segmentation is the process of dividing your customer base into distinct groups based on shared characteristics such as behavior, demographics, or preferences. This allows you to tailor marketing strategies and improve targeting. In this section, we’ll explore how hierarchical clustering can be used for effective customer segmentation, providing practical insights for applying this method.
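As a small illustration (the customer features and values below are entirely made up), a typical workflow scales the features first, since spend and visit counts sit on very different ranges, and then fits an agglomerative model:
# Toy customer segmentation with scaled features and Ward linkage
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

# Columns: annual spend, visits per month (made-up values)
customers = np.array([
    [200, 2], [250, 3], [2200, 12], [2400, 15], [300, 2], [2100, 10],
])

X_scaled = StandardScaler().fit_transform(customers)
segments = AgglomerativeClustering(n_clusters=2, linkage='ward').fit_predict(X_scaled)
print("Customer segments:", segments)  # expected: low-spend vs high-spend groups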
Document or text clustering is the process of grouping similar documents or texts based on content, helping to identify themes or topics within large datasets. This method is useful for organizing, summarizing, and retrieving relevant information. In this section, we will show you how hierarchical clustering can be applied to text data, making it easier to analyze and categorize documents effectively.
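Here is a small, illustrative sketch of that idea: a handful of made-up sentences are turned into TF-IDF vectors and clustered with cosine distance and average linkage. It assumes a recent scikit-learn release in which the metric argument replaced the older affinity parameter.
# Toy document clustering with TF-IDF, cosine distance, and average linkage
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

docs = [
    "stock markets rallied after the earnings report",
    "investors watched bond yields and stock prices",
    "the team won the football match last night",
    "the striker scored twice in the football match",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs).toarray()
labels = AgglomerativeClustering(
    n_clusters=2, metric="cosine", linkage="average"
).fit_predict(tfidf)
print("Document clusters:", labels)  # expected: market-related vs football-related docs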
Bioinformatics and gene expression analysis involves using clustering techniques to analyze gene data and uncover patterns in gene expression across different conditions or samples. This helps in identifying relationships between genes and understanding biological processes. In this section, we’ll explore how hierarchical clustering is applied in bioinformatics to analyze gene expression data, offering insights into its practical use for genetic research.
Now let's check your understanding with a short quiz. Please complete the quiz carefully before looking at the answers.
This quiz lets you check your understanding of key concepts in hierarchical clustering, including types, linkage methods, and real-world applications. You’ll answer multiple-choice questions that reinforce what you’ve learned. Use the quiz below to evaluate your grasp of the material.
Test your knowledge now!
1. What does hierarchical clustering aim to achieve in machine learning?
2. Which of the following is a key characteristic of hierarchical clustering?
3. Which linkage method calculates the distance based on the closest pair of points from two clusters?
4. What is the main drawback of single-linkage clustering?
5. In which situation is complete linkage clustering most effective?
6. What does Ward's method minimize when merging clusters?
7. Which of the following is the correct representation of a dendrogram?
8. How can the optimal number of clusters be determined using a dendrogram?
9. What is a common application of hierarchical clustering in customer segmentation?
10. In bioinformatics, hierarchical clustering is often used to group genes based on their:
Answers:
1. Group similar data points into clusters.
2. It does not require the number of clusters to be specified in advance.
3. Single linkage.
4. It is prone to the chaining effect, producing elongated, loosely connected clusters.
5. When you need compact, well-separated clusters.
6. The increase in within-cluster variance.
7. A tree-like diagram showing the order and the distance at which clusters are merged.
8. By cutting the dendrogram where the vertical distance between merges is largest.
9. Grouping customers by purchasing behavior, demographics, or engagement for targeted marketing.
10. Their expression patterns across conditions or samples.
By now, you’ve learned how hierarchical clustering works, the types and linkage methods involved, and where it's applied in real-world scenarios. Use this knowledge to evaluate clustering problems more confidently and choose the right technique based on your data structure and analysis goals. Applying these insights can significantly improve your approach to unsupervised learning tasks.
If you’re looking to deepen your expertise but feel unsure about the next step or how to bridge skill gaps, upGrad offers structured, mentor-guided programs tailored to real industry demands. These courses not only build your technical foundation but also help you stay aligned with evolving machine learning trends for long-term career growth.
While the course covered in the article can significantly improve your knowledge, here are some additional free courses from upGrad to facilitate your continued learning:
You can also get personalized career counseling with upGrad to guide your career path, or visit your nearest upGrad center and start hands-on training today!
Agglomerative clustering starts with each data point as its own cluster and progressively merges them based on similarity, while divisive clustering starts with all data points in one cluster and progressively splits them into smaller clusters. Agglomerative is a bottom-up approach, and divisive is a top-down approach.
Hierarchical clustering is particularly useful in unsupervised learning because it doesn’t require labeled data. It works by grouping similar data points together based on their inherent characteristics, which is ideal when there are no predefined categories.
Time complexity is a key factor because agglomerative clustering can become computationally expensive with large datasets, as it requires calculating pairwise distances repeatedly, leading to O(n^3) complexity. Divisive clustering can sometimes be more efficient in handling large, high-dimensional datasets but may still be resource-intensive depending on the split strategy.
Use complete linkage when you need compact and well-separated clusters. It ensures that the clusters being merged are tightly bound, making it less sensitive to outliers. On the other hand, single linkage can result in elongated or “chained” clusters, which are not ideal for tight groupings.
Hierarchical clustering helps in grouping customers based on purchasing behaviors, demographics, or engagement. It enables businesses to identify customer segments such as frequent buyers or seasonal shoppers, which can then be targeted with specific marketing strategies or loyalty programs.
A dendrogram helps determine the optimal number of clusters by cutting the tree at a specific height. The larger the vertical distance between merges, the more distinct the clusters are. By selecting the point where these large vertical gaps appear, you can identify an appropriate number of clusters.
Ward’s method minimizes the within-cluster variance, making it ideal for creating compact, spherical clusters with minimal internal variability. It is particularly effective when clusters are well-separated and you want to ensure that the clustering result is balanced and tight.
While DBSCAN focuses on grouping data based on density and can handle arbitrary shapes, hierarchical clustering focuses on forming clusters based on similarity and produces a tree-like structure (dendrogram). Hierarchical clustering is better suited when you need to explore multi-level structures, while DBSCAN is ideal for identifying clusters in noisy, unevenly distributed data.
Hierarchical clustering can be computationally expensive, especially for large datasets, as it requires calculating pairwise distances between every data point. For very large datasets, alternative clustering methods like k-means or DBSCAN might be more practical, but if you’re working with smaller or moderately sized datasets, hierarchical clustering can be quite effective.
Common distance metrics include Euclidean distance, which calculates straight-line distance between data points, Manhattan distance, which sums the absolute differences of coordinates, and cosine distance, which measures the angle between two vectors, especially used in text clustering. The choice of distance metric depends on the type of data being analyzed.
The best way to visualize hierarchical clustering results is by using a dendrogram, a tree-like diagram that shows how clusters are merged or split. It helps in understanding the relationships between clusters and determining the optimal number of clusters by cutting the dendrogram at a meaningful point. Tools like SciPy’s dendrogram function can assist in creating these visualizations.