
K Means Clustering in R: Step by Step Tutorial with Example

Last updated: 17th Feb, 2020

As a data scientist, you’ll be doing a lot of clustering. There are many types of clustering algorithms available, and you should be well-versed in using all of them. In this article, we’ll discuss a popular clustering algorithm, K-means, and see how it’s used in R. 

You’ll learn the basic theory behind K-means clustering and how it’s used in R, and we’ll walk through a practical example later in the article. Be sure to bookmark this article for future reference.

Before we begin discussing K-means clustering in R, we should take a look at the types of clustering algorithms that exist, so you can better understand where this algorithm fits among them.


Types of Clustering

Clustering means grouping objects so that the objects most similar to each other end up in the same cluster. The distance between objects is commonly used as a measure of their similarity; in data science, similarity expresses the strength of the relationship between two distinct objects. Clustering is a popular data mining technique, and it finds applications in numerous industries and areas, including image analysis, machine learning, data compression, and pattern recognition.

Clustering is of two types – Hard and Soft. Let’s discuss each of them briefly.

  • In a hard cluster, a data point would belong to a cluster totally, or it wouldn’t belong to it at all. There’s no in-between. 
  • In a soft cluster, a data object could be related to more than one cluster at once due to some likelihood or probability. 
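K-means, the subject of this article, is a hard-clustering method. As a quick sketch with made-up toy data, notice that every row receives exactly one cluster label; a soft method such as fuzzy c-means would instead return a membership weight per cluster:

```r
# Hard assignment: kmeans() gives each point exactly one label (toy data).
set.seed(42)
pts <- rbind(matrix(rnorm(20), ncol = 2),           # 10 points near (0, 0)
             matrix(rnorm(20, mean = 5), ncol = 2)) # 10 points near (5, 5)
fit <- kmeans(pts, centers = 2)
fit$cluster          # one hard label (1 or 2) per point -- no in-between
length(fit$cluster)  # 20: exactly one label for every point
```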


Types of Clustering Algorithms

Just as there are different types of clusters, there are different types of clustering algorithms. You can differentiate algorithms based on their cluster model, that is, on how they form clusters. Discussing every kind of clustering algorithm would make this guide far too long and stray from the main point, so we’ll only mention the prominent families: connectivity-based, centroid-based, density-based, and distribution-based clustering algorithms.

Basic Concept of K-Means

The basic concept of K-means is quite simple: K-means defines the clusters so that the total within-cluster variation is as small as possible. There are a variety of K-means algorithms. The most common is the Hartigan-Wong algorithm, which defines the total intra-cluster variation as the sum of the squared Euclidean distances between each item and its cluster centroid:

W(Ck) = Σ xi∈Ck (xi − μk)²

Here xi refers to a data point that belongs to the cluster Ck, and μk refers to the mean value of the data points assigned to the cluster Ck.

Each observation xi is assigned to a cluster such that the sum of the squared distances between xi and its cluster mean μk is minimized.
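The formula above can be checked numerically: R’s kmeans() reports the within-cluster sum of squares per cluster as withinss, and we can recompute it by hand. This is a sketch with made-up toy data:

```r
# Verify that kmeans()'s withinss equals the sum of squared Euclidean
# distances between each point and its assigned cluster centroid.
set.seed(1)
x <- matrix(rnorm(100), ncol = 2)          # 50 toy points in 2 dimensions
fit <- kmeans(x, centers = 3, nstart = 10)
manual.ss <- sapply(1:3, function(k) {
  members <- x[fit$cluster == k, , drop = FALSE]  # points in cluster k
  centroid <- fit$centers[k, ]                    # mean of cluster k
  sum(sweep(members, 2, centroid)^2)              # sum of (xi - mu_k)^2
})
all.equal(manual.ss, unname(fit$withinss))  # TRUE: the two computations agree
```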


What is the K-means Algorithm?

To use the algorithm, we’ll first have to state the number of clusters, K, that will be present in our result. The algorithm first selects K objects randomly to act as initial cluster centers. We call those objects cluster centroids or means. Then we assign the remaining objects to their closest centroids. The Euclidean distance between the cluster centroids and the objects determines how close they are.

After we have assigned the objects to their respective centroids, the algorithm calculates the mean value of each cluster. After this re-computation, we recheck the observations to see whether they might be closer to a different cluster, and reassign objects to centroids accordingly. We keep repeating these steps until the cluster assignments stop changing, that is, until the clusters formed in an iteration are identical to those in the previous iteration.
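The steps above can be sketched as a bare-bones R function. This is illustrative only, run on made-up toy data; in practice use the built-in kmeans(), which handles edge cases this sketch ignores (for example, a cluster ending up empty):

```r
# A minimal sketch of the K-means iteration described above.
simple.kmeans <- function(x, k, iters = 25) {
  centers <- x[sample(nrow(x), k), , drop = FALSE]  # Step 1: k random points as initial centroids
  for (i in 1:iters) {
    # Euclidean distance from every point (rows) to every centroid (cols)
    d <- as.matrix(dist(rbind(centers, x)))[-(1:k), 1:k, drop = FALSE]
    assignment <- apply(d, 1, which.min)            # Step 2: assign each point to its closest centroid
    new.centers <- t(sapply(1:k, function(j)
      colMeans(x[assignment == j, , drop = FALSE])))  # Step 3: recompute each centroid as the cluster mean
    if (isTRUE(all.equal(new.centers, centers, check.attributes = FALSE)))
      break                                         # Step 4: stop when the centroids stop moving
    centers <- new.centers
  }
  list(cluster = unname(assignment), centers = centers)
}

# Usage on two well-separated toy groups:
set.seed(11)
toy <- rbind(matrix(rnorm(20), ncol = 2), matrix(rnorm(20, mean = 6), ncol = 2))
res <- simple.kmeans(toy, 2)
table(res$cluster)  # cluster sizes
```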



Using K-Means Clustering (Example)

Now that you know what the K-means algorithm is and how it works, let’s walk through an example for better clarity. In this example, we’ll cluster the customers of an organization by using the Wholesale customers dataset, which is available at the UCI Machine Learning Repository.

First, we’ll read the data and get a summary of it. After doing so, you’ll see that there are some stark differences between the top consumers in different categories. You’ll also find some outliers that can’t be removed easily with normalization (or scaling). With this data, a business would want to see what its mid-range customers buy most of the time, since a company usually already has a decent idea of what its top customers buy.

To create a cluster of the mid-level customers, we should first remove the top layer of customers from each category. So we’ll remove the top 5 customers in every category and create a new data set. Here’s how we’ll do so:


top.n.custs <- function (data, cols, n = 5) { # Requires a data frame and the top N to remove
  idx.to.remove <- integer(0) # Initialize a vector to hold customers being removed
  for (c in cols) { # For every column passed to this function
    col.order <- order(data[, c], decreasing = TRUE) # Sort column "c" in descending order (bigger on top)
    # order() returns the sorted indexes (e.g. row 15, 3, 7, 1, ...) rather than the sorted values.
    idx <- head(col.order, n) # Take the row indexes of the first n entries of sorted column c
    idx.to.remove <- union(idx.to.remove, idx) # Combine and de-duplicate the row ids to be removed
  }
  return(idx.to.remove) # Return the indexes of customers to be removed
}

top.custs <- top.n.custs(data, cols = 3:8, n = 5)
length(top.custs) # How many customers need to be removed?
data[top.custs, ] # Examine the customers being removed
data.rm.top <- data[-c(top.custs), ] # Remove those customers

 

With this new file, we can start working on our cluster analysis. To perform the cluster analysis, we’ll use the following code:

 

set.seed(76964057) # Set the seed for reproducibility
k <- kmeans(data.rm.top[, -c(1,2)], centers = 5) # Create 5 clusters; drop columns 1 and 2
k$centers # Display cluster centers
table(k$cluster) # Count of data points in each cluster

 

When you run this code on the given dataset, you’ll see results along these lines:

  • The first cluster has a high quantity of detergents but a low quantity of fresh food products
  • The third cluster has more fresh products

You’ll need to use withinss and betweenss for a detailed interpretation of the results. k$withinss is the sum of the squared distances between each data object and its cluster center; the lower this value, the better the result. If the withinss measure is high for your data, it means there are many outliers present and you need to perform data cleaning. k$betweenss is the sum of the squared distances between the different cluster centers, and the distance between cluster centers should be as high as possible.
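A useful sanity check (a sketch with made-up data): the kmeans object also reports totss, and the three quantities are tied together by the identity totss = tot.withinss + betweenss, which is why minimizing the within-cluster sum of squares and maximizing the between-cluster sum of squares are the same goal:

```r
# The variance decomposition behind withinss and betweenss:
# total SS = total within-cluster SS + between-cluster SS.
set.seed(2)
x <- matrix(rnorm(200), ncol = 2)          # 100 toy points
fit <- kmeans(x, centers = 4, nstart = 10)
fit$tot.withinss + fit$betweenss           # equals fit$totss (up to rounding)
all.equal(fit$tot.withinss + fit$betweenss, fit$totss)  # TRUE
```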


You should use trial and error to find the most suitable value of K. To do so, try out various values of K and plot the total withinss for each. The point where increasing K no longer produces a significant drop in the total withinss of your clusters (the “elbow” of the graph) is the most suitable value for K. You can find it with the following code:

 

rng <- 2:20 # Try K from 2 to 20
tries <- 100 # Run the K-means algorithm 100 times for each K
avg.totw.ss <- integer(length(rng)) # Empty vector to hold the average total withinss for each K
for (v in rng) { # For each value of K
  v.totw.ss <- integer(tries) # Empty vector to hold the 100 tries
  for (i in 1:tries) {
    k.temp <- kmeans(data.rm.top[, -c(1,2)], centers = v) # Run kmeans on the same columns as before
    v.totw.ss[i] <- k.temp$tot.withinss # Store the total withinss
  }
  avg.totw.ss[v-1] <- mean(v.totw.ss) # Average the 100 total withinss values
}
plot(rng, avg.totw.ss, type = "b", main = "Total Within SS by Various K",
     ylab = "Average Total Within Sum of Squares",
     xlab = "Value of K")

 

That’s it. Now you can use the graph this code produces to pick the best value for K and re-run the clustering with it. Use this example to test your knowledge of K-means clustering in R. Here is all the code we’ve used in the example:

 

data <- read.csv("Wholesale customers data.csv", header = TRUE)
summary(data)

top.n.custs <- function (data, cols, n = 5) { # Requires a data frame and the top N to remove
  idx.to.remove <- integer(0) # Initialize a vector to hold customers being removed
  for (c in cols) { # For every column passed to this function
    col.order <- order(data[, c], decreasing = TRUE) # Sort column "c" in descending order
    # order() returns the sorted indexes rather than the sorted values.
    idx <- head(col.order, n) # Take the row indexes of the first n entries of sorted column c
    idx.to.remove <- union(idx.to.remove, idx) # Combine and de-duplicate the row ids to be removed
  }
  return(idx.to.remove) # Return the indexes of customers to be removed
}

top.custs <- top.n.custs(data, cols = 3:8, n = 5)
length(top.custs) # How many customers need to be removed?
data[top.custs, ] # Examine the customers being removed
data.rm.top <- data[-c(top.custs), ] # Remove those customers

set.seed(76964057) # Set the seed for reproducibility
k <- kmeans(data.rm.top[, -c(1,2)], centers = 5) # Create 5 clusters; drop columns 1 and 2
k$centers # Display cluster centers
table(k$cluster) # Count of data points in each cluster

rng <- 2:20 # Try K from 2 to 20
tries <- 100 # Run the K-means algorithm 100 times for each K
avg.totw.ss <- integer(length(rng)) # Empty vector to hold the average total withinss for each K
for (v in rng) { # For each value of K
  v.totw.ss <- integer(tries) # Empty vector to hold the 100 tries
  for (i in 1:tries) {
    k.temp <- kmeans(data.rm.top[, -c(1,2)], centers = v) # Run kmeans
    v.totw.ss[i] <- k.temp$tot.withinss # Store the total withinss
  }
  avg.totw.ss[v-1] <- mean(v.totw.ss) # Average the 100 total withinss values
}
plot(rng, avg.totw.ss, type = "b", main = "Total Within SS by Various K",
     ylab = "Average Total Within Sum of Squares",
     xlab = "Value of K")


 

Conclusion

We hope you liked this guide. We’ve tried to keep it concise and comprehensive. If you have any questions about the K-means algorithm, feel free to ask us. We’d love to answer your queries. 

If you are curious to learn about data science, check out IIIT-B & upGrad’s Executive PG Program in Data Science which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 with industry mentors, 400+ hours of learning and job assistance with top firms.

Rohit Sharma

Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.

Frequently Asked Questions (FAQs)

1. What are some of the disadvantages of using K-means?

Outliers can pull centroids toward themselves, or they may be given their own cluster instead of being disregarded. Because K-means is stochastic, it cannot guarantee that the globally optimal clustering solution will be found, and outliers and noisy data can make the algorithm highly sensitive; consider removing or trimming outliers before clustering. K-means also has difficulty grouping data with varying cluster sizes and densities, and you must generalize K-means to cluster such data. Finally, the K-means algorithm does not allow data points that are far apart to share the same cluster, even if they clearly belong together.

2. What is the elbow method in K-means?

The K-means method relies heavily on finding the appropriate number of clusters, and the elbow method is a widely used way of determining the best value of K. The elbow technique performs K-means clustering on the dataset for a range of K values and then computes an average score across the clusters for each value of K. By default this is the distortion score, the sum of the squared distances from each point to its assigned center. The same technique can be used to choose the number of parameters in other data-driven models, such as the number of principal components needed to describe a dataset.

3. How can we find outliers in K-means?

Outliers in K-means clustering can be discovered using both distance-based and cluster-based techniques (in hierarchical clustering, dendrograms are used instead). The objective is to discover and eliminate outliers in order to make the clustering more accurate. In the K-means-based outlier-detection approach, the data is partitioned into K groups by assigning each point to its nearest cluster center. We can then calculate the distance (or dissimilarity) between each object and its cluster center, and pick the points with the greatest distances as outliers. This matters because extreme values can quickly distort a mean, which makes the K-means clustering method sensitive to outliers.
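The distance-based idea in the answer above can be sketched in a few lines of R. This uses made-up toy data with one planted outlier; the cut-off (a threshold or "top n" distances) is a choice left to you:

```r
# Flag the point farthest from its assigned centroid as a candidate outlier.
set.seed(3)
x <- rbind(matrix(rnorm(40), ncol = 2),            # cluster near (0, 0)
           matrix(rnorm(40, mean = 6), ncol = 2),  # cluster near (6, 6)
           c(3, 20))                               # one planted outlier
fit <- kmeans(x, centers = 2, nstart = 10)
# Distance of each point to the centroid of its own cluster
dists <- sqrt(rowSums((x - fit$centers[fit$cluster, ])^2))
which.max(dists)  # index of the most extreme point: the planted outlier (row 41)
```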
