
Bayes Theorem in Machine Learning: Introduction, How to Apply & Example

Last updated:
4th Feb, 2021
Read Time
9 Mins

Introduction: What is Bayes Theorem?

Bayes Theorem is named after the English mathematician Thomas Bayes, who worked extensively in decision theory, a branch of mathematics concerned with making choices under uncertainty. Bayes Theorem is also widely used in machine learning, where it offers a simple, effective way to predict class membership. The Bayesian method of calculating conditional probabilities underlies many machine learning applications that involve classification tasks.

A simplified application of Bayes Theorem, known as Naive Bayes classification, is used to reduce computation time and cost. In this article, we take you through these concepts and discuss the applications of the Bayes Theorem in machine learning. 

Join the machine learning course online from the World’s top Universities – Masters, Executive Post Graduate Programs, and Advanced Certificate Program in ML & AI to fast-track your career.

Why use Bayes Theorem in Machine Learning?

Bayes Theorem is a method to determine conditional probabilities – that is, the probability of one event occurring given that another event has already occurred. Because a conditional probability includes additional conditions – in other words, more data – it can contribute to more accurate results.


Thus, conditional probabilities are a must in determining accurate predictions and probabilities in Machine Learning. Given that the field is becoming ever more ubiquitous across a variety of domains, it is important to understand the role of algorithms and methods like Bayes Theorem in Machine Learning.

Before we go into the theorem itself, let’s understand some terms through an example. Say a bookstore manager has information about his customers’ age and income. He wants to know how book sales are distributed across three age-classes of customers: youth (18-35), middle-aged (35-60), and seniors (60+). 

Let us term our data X. In Bayesian terminology, X is called the evidence. We have a hypothesis H: that X belongs to a certain class C.

Our goal is to determine the conditional probability of our hypothesis H given X, i.e., P(H | X).

In simple terms, by determining P(H | X), we get the probability of X belonging to class C, given X. X has attributes of age and income – let’s say, for instance, 26 years old with an income of $2000. H is our hypothesis that the customer will buy the book.

Must Read: Free NLP online course!

Pay close attention to the following four terms:

  1. Evidence – As discussed earlier, P(X) is known as the evidence. It is simply the probability of the observed data – in this case, that the customer is 26 years old and earns $2000.
  2. Prior Probability – P(H), known as the prior probability, is the plain probability of our hypothesis – namely, that the customer will buy a book – computed without any extra input from age and income. Since the calculation is done with less information, the result is less accurate.
  3. Posterior Probability – P(H | X) is known as the posterior probability. Here, P(H | X) is the probability of the customer buying a book (H) given X (that he is 26 years old and earns $2000). 
  4. Likelihood – P(X | H) is the likelihood probability. In this case, given that we know the customer will buy the book, the likelihood is the probability that the customer is 26 years old and earns $2000.

Given these, Bayes Theorem states:

P(H | X) = [ P(X | H) * P(H) ] / P(X)

Note the appearance of the four terms above in the theorem – posterior probability, likelihood probability, prior probability, and evidence. 
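As a quick sketch, the theorem can be written as a one-line Python function (the numbers below are illustrative, not taken from the bookstore example):

```python
def bayes_posterior(likelihood, prior, evidence):
    """Bayes Theorem: P(H | X) = P(X | H) * P(H) / P(X)."""
    return likelihood * prior / evidence

# Illustrative values: P(X | H) = 0.6, P(H) = 0.5, P(X) = 0.4
print(bayes_posterior(0.6, 0.5, 0.4))  # ≈ 0.75
```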

Read: Naive Bayes Explained

How to Apply Bayes Theorem in Machine Learning

The Naive Bayes Classifier, which builds on a simplified application of Bayes Theorem, is used as a classification algorithm to classify data into various classes quickly and accurately. 

Let’s see how the Naive Bayes Classifier can be applied as a classification algorithm. 

  1. Consider a general example: X is a vector consisting of ‘n’ attributes, that is, X = {x1, x2, x3, …, xn}.
  2. Say we have ‘m’ classes {C1, C2, …, Cm}. Our classifier has to predict the class that X belongs to; the class with the highest posterior probability is chosen as the best class. Mathematically, the classifier predicts class Ci iff P(Ci | X) > P(Cj | X) for all j ≠ i. Applying Bayes Theorem:

P(Ci | X) = [ P(X | Ci) * P(Ci) ] / P(X)

  3. P(X) does not depend on the class, so it is constant across classes. To maximize P(Ci | X), then, we need only maximize [P(X | Ci) * P(Ci)]. If, additionally, every class is assumed equally likely – P(C1) = P(C2) = … = P(Cm) – we need to maximize only P(X | Ci). 
  4. Since a typical large dataset has many attributes, it is computationally expensive to compute P(X | Ci) over all of them jointly. This is where class-conditional independence comes in to simplify the problem and reduce computation costs: we assume the attributes’ values are independent of one another given the class. This is the Naive Bayes assumption. 

P(X | Ci) = P(x1 | Ci) * P(x2 | Ci) * … * P(xn | Ci)
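As a sketch, the independence product amounts to multiplying per-attribute conditionals (the values here are purely illustrative):

```python
from math import prod

# Hypothetical per-attribute conditionals P(x1|Ci), P(x2|Ci), P(x3|Ci)
per_attribute = [0.2, 0.5, 0.8]

# Naive Bayes assumption: P(X | Ci) is the product of the per-attribute terms
p_x_given_ci = prod(per_attribute)
print(p_x_given_ci)  # ≈ 0.08
```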

It is now easy to compute the smaller probabilities. One important thing to note here: since each xk is the value of attribute k, we also need to check whether the attribute we are dealing with is categorical or continuous.

  1. If we have a categorical attribute, things are simpler: we count the number of instances of class Ci that have the value xk for attribute k, then divide by the total number of instances of class Ci.
  2. If we have a continuous attribute, we typically assume a normal (Gaussian) distribution and apply the following formula, with mean μ and standard deviation σ:

F(x, μ, σ) = (1 / (σ√(2π))) · e^(−(x − μ)² / (2σ²))

Ultimately, we will have P(xk | Ci) = F(xk, μCi, σCi), where μCi and σCi are the mean and standard deviation of the attribute’s values for instances of class Ci.
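For a continuous attribute, the normal density can be sketched in Python as follows (assuming the per-class mean and standard deviation have already been estimated):

```python
import math

def gaussian_likelihood(x, mu, sigma):
    """Normal density F(x, mu, sigma), used for a continuous attribute."""
    coefficient = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coefficient * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# The density peaks at x == mu with value 1 / (sigma * sqrt(2 * pi))
print(gaussian_likelihood(0.0, 0.0, 1.0))  # ≈ 0.3989
```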

Now, we have all the values we need to use Bayes Theorem for each class Ci. Our predicted class will be the class achieving the highest probability P(X | Ci) * P(Ci).


Example: Predictively Classifying Customers of a Bookstore

We have the following dataset of 14 customer records from a bookstore:

age          income   student  credit_rating  buys_book
youth        high     no       fair           no
youth        high     no       excellent      no
middle_aged  high     no       fair           yes
senior       medium   no       fair           yes
senior       low      yes      fair           yes
senior       low      yes      excellent      no
middle_aged  low      yes      excellent      yes
youth        medium   no       fair           no
youth        low      yes      fair           yes
senior       medium   yes      fair           yes
youth        medium   yes      excellent      yes
middle_aged  medium   no       excellent      yes
middle_aged  high     yes      fair           yes
senior       medium   no       excellent      no
We have attributes like age, income, student, and credit rating. Our class, buys_book, has two outcomes: Yes or No. 

Our goal is to classify based on the following attributes:

X = {age = youth, student = yes, income = medium, credit_rating = fair}.

As we showed earlier, to maximize P(Ci | X), we need to maximize [ P(X | Ci) * P(Ci) ] for i = 1 and i = 2.

Hence, P(buys_book = yes) = 9/14 = 0.643

P(buys_book = no) = 5/14 = 0.357

P(age = youth | buys_book = yes) = 2/9 = 0.222

P(age = youth | buys_book = no) =3/5 = 0.600

P(income = medium | buys_book = yes) = 4/9 = 0.444

P(income = medium | buys_book = no) = 2/5 = 0.400

P(student = yes | buys_book = yes) = 6/9 = 0.667

P(student = yes | buys_book = no) = 1/5 = 0.200

P(credit_rating = fair | buys_book = yes) = 6/9 = 0.667

P(credit_rating = fair | buys_book = no) = 2/5 = 0.400

Using the above-calculated probabilities, we have

P(X | buys_book = yes) = 0.222 x 0.444 x 0.667 x 0.667 = 0.044


P(X | buys_book = no) = 0.600 x 0.400 x 0.200 x 0.400 = 0.019

Which class Ci gives the maximum P(X | Ci) * P(Ci)? We compute:

P(X | buys_book = yes)* P(buys_book = yes) = 0.044 x 0.643 = 0.028

P(X | buys_book = no)* P(buys_book = no) = 0.019 x 0.357 = 0.007

Comparing the above two, since 0.028 > 0.007, the Naive Bayes Classifier predicts that the customer with the above-mentioned attributes will buy a book.
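The whole calculation above can be reproduced in a few lines of Python, using the priors and class-conditional probabilities computed from the dataset:

```python
# Priors and class-conditional probabilities computed above
priors = {"yes": 9 / 14, "no": 5 / 14}
conditionals = {                        # age, income, student, credit_rating
    "yes": [2 / 9, 4 / 9, 6 / 9, 6 / 9],
    "no":  [3 / 5, 2 / 5, 1 / 5, 2 / 5],
}

# Score each class with P(X | Ci) * P(Ci)
scores = {}
for cls, prior in priors.items():
    score = prior
    for p in conditionals[cls]:
        score *= p
    scores[cls] = score

prediction = max(scores, key=scores.get)
print(prediction)  # yes
```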

Checkout: Machine Learning Project Ideas & Topics


Is the Bayesian Classifier a Good Method?


Algorithms based on Bayes Theorem provide results comparable to those of other machine learning algorithms, and Bayesian classifiers are generally considered simple, high-accuracy methods. However, keep in mind that Bayesian classifiers are appropriate chiefly where the assumption of class-conditional independence holds, not across all cases. Another practical concern is that acquiring all the required probability data may not always be feasible. 



Bayes Theorem has many applications in machine learning, particularly in classification-based problems. Applying this family of algorithms in machine learning involves familiarity with terms such as prior probability and posterior probability. In this article, we discussed the basics of the Bayes Theorem, its use in machine learning problems, and worked through a classification example.

Since Bayes Theorem forms a crucial part of classification-based algorithms in Machine Learning, you can learn more about upGrad’s Advanced Certificate Programme in Machine Learning & NLP. This course has been crafted keeping in mind various kinds of students interested in Machine Learning, offering 1-1 mentorship and much more.


Pavan Vadapalli

Blog Author
Director of Engineering @ upGrad. Motivated to leverage technology to solve problems. Seasoned leader for startups and fast moving orgs. Working on solving problems of scale and long term technology strategy.

Frequently Asked Questions (FAQs)

1. Why do we use Bayes theorem in Machine Learning?

The Bayes Theorem is a method for calculating conditional probabilities, or the likelihood of one event occurring if another has previously occurred. A conditional probability can lead to more accurate outcomes by including extra conditions — in other words, more data. In order to obtain correct estimations and probabilities in Machine Learning, conditional probabilities are required. Given the field's increasing prevalence across a wide range of domains, it's critical to comprehend the importance of algorithms and approaches like Bayes Theorem in Machine Learning.

2. Is Bayesian Classifier a good choice?

In machine learning, algorithms based on the Bayes Theorem produce results that are comparable to those of other methods, and Bayesian classifiers are widely regarded as simple high-accuracy approaches. However, it's important to keep in mind that Bayesian classifiers are best used when the condition of class-conditional independence is correct, not in all circumstances. Another consideration is that obtaining all of the likelihood data may not always be possible.

3. How can Bayes theorem be applied practically?

Bayes Theorem updates the likelihood of an event based on new evidence that is or could be related to it. The method can also be used to see how hypothetical new information affects the probability of an event, assuming the new information is true. Take, for example, a single card drawn from a deck of 52 cards. The probability of the card being a king is 4 divided by 52, or 1/13, or roughly 7.69 percent, since the deck contains four kings. Now suppose it is revealed that the chosen card is a face card. Because there are 12 face cards in a deck, the probability that the picked card is a king becomes 4 divided by 12, or roughly 33.3 percent.
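The card example above is simple enough to check directly as arithmetic:

```python
# P(king | face) = P(face | king) * P(king) / P(face)
p_king = 4 / 52            # four kings in the deck
p_face = 12 / 52           # twelve face cards in the deck
p_face_given_king = 1.0    # every king is a face card

p_king_given_face = p_face_given_king * p_king / p_face
print(round(p_king_given_face, 3))  # 0.333
```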
