Introduction: What is Bayes Theorem?
Bayes Theorem is named after the English mathematician Thomas Bayes, whose work on conditional probability laid the groundwork for decision theory, the branch of mathematics concerned with reasoning under uncertainty. Bayes Theorem is also used widely in machine learning, where it provides a simple, effective way to predict classes with precision and accuracy. The Bayesian method of calculating conditional probabilities is used in machine learning applications that involve classification tasks.
A simplified version of Bayes Theorem, known as Naive Bayes classification, is used to reduce computation time and cost. In this article, we take you through these concepts and discuss the applications of Bayes Theorem in machine learning.
Why use Bayes Theorem in Machine Learning?
Bayes Theorem is a method to determine conditional probabilities – that is, the probability of one event occurring given that another event has already occurred. Because a conditional probability includes additional conditions – in other words, more data – it can contribute to more accurate results.
Thus, conditional probabilities are a must in determining accurate predictions and probabilities in Machine Learning. Given that the field is becoming ever more ubiquitous across a variety of domains, it is important to understand the role of algorithms and methods like Bayes Theorem in Machine Learning.
Before we go into the theorem itself, let’s understand some terms through an example. Say a bookstore manager has information about his customers’ age and income. He wants to know how book sales are distributed across three age classes of customers: youth (18-35), middle-aged (35-60), and seniors (60+).
Let us call our data X. In Bayesian terminology, X is known as evidence. We also have a hypothesis H – for example, that X belongs to a certain class C.
Our goal is to determine the conditional probability of our hypothesis H given X, i.e., P(H | X).
In simple terms, P(H | X) is the probability that hypothesis H holds – that is, that X belongs to class C – given X’s attributes. Here X has the attributes age and income – say, 26 years old with an income of $2000 – and H is our hypothesis that the customer will buy the book.
Pay close attention to the following four terms:
- Evidence – As discussed earlier, P(X) is known as the evidence. In this case, it is simply the probability that a customer is 26 years old and earns $2000.
- Prior Probability – P(H), known as the prior probability, is the plain probability of our hypothesis – namely, that the customer will buy a book – computed without any extra information about age or income. Because it uses less information, the result is less accurate.
- Posterior Probability – P(H | X) is known as the posterior probability. Here, P(H | X) is the probability of the customer buying a book (H) given X (that he is 26 years old and earns $2000).
- Likelihood – P(X | H) is the likelihood probability. In this case, given that we know the customer will buy the book, the likelihood probability is the probability that the customer is of age 26 and has an income of $2000.
Given these, Bayes Theorem states:
P(H | X) = [ P(X | H) * P(H) ] / P(X)
Note the appearance of the four terms above in the theorem – posterior probability, likelihood probability, prior probability, and evidence.
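To make the relationship concrete, here is a minimal sketch in Python. The numbers are placeholders for illustration only; in practice, P(X | H), P(H), and P(X) would be estimated from the bookstore’s records.

```python
# A minimal sketch of Bayes Theorem as a one-line calculation.
# The input values below are illustrative placeholders, not real data.
def posterior(likelihood, prior, evidence):
    """P(H | X) = P(X | H) * P(H) / P(X)."""
    return likelihood * prior / evidence

# e.g. P(X | H) = 0.30, P(H) = 0.40, P(X) = 0.25  ->  P(H | X) = 0.48
print(posterior(likelihood=0.30, prior=0.40, evidence=0.25))
```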
Read: Naive Bayes Explained
How to Apply Bayes Theorem in Machine Learning
The Naive Bayes Classifier, a simplified application of Bayes Theorem, is used as a classification algorithm to assign data to classes with accuracy and speed.
Let’s see how the Naive Bayes Classifier can be applied as a classification algorithm.
- Consider a general example: X is a vector consisting of ‘n’ attributes, that is, X = {x1, x2, x3, …, xn}.
- Say we have ‘m’ classes {C1, C2, …, Cm}. Our classifier has to predict that X belongs to one of these classes, and the class delivering the highest posterior probability is chosen as the best class. Mathematically, the classifier predicts class Ci if and only if P(Ci | X) > P(Cj | X) for all j ≠ i. Applying Bayes Theorem:
P(Ci | X) = [ P(X | Ci) * P(Ci) ] / P(X)
- P(X), being independent of the class, is constant across all classes. So to maximize P(Ci | X), we only need to maximize P(X | Ci) * P(Ci). If, in addition, every class is assumed equally likely – P(C1) = P(C2) = … = P(Cm) – we only need to maximize P(X | Ci).
- Since a typical large dataset has many attributes, computing P(X | Ci) over the full joint distribution of those attributes is computationally expensive. This is where class-conditional independence comes in to simplify the problem and reduce computation cost: we assume the attribute values are independent of one another given the class. This ‘naive’ assumption is what gives the Naive Bayes Classifier its name.
P(X | Ci) = P(x1 | Ci) * P(x2 | Ci) * … * P(xn | Ci)
It is now easy to compute these smaller, per-attribute probabilities. One important point: since each xk is the value of attribute k, we also need to check whether that attribute is categorical or continuous.
- If the attribute is categorical, things are simpler. We count the number of instances of class Ci that have the value xk for attribute k, and divide by the total number of instances of class Ci.
- If the attribute is continuous, we assume it follows a normal (Gaussian) distribution with mean μ and standard deviation σ and apply the corresponding density function:
F(x, μ, σ) = [ 1 / (σ * sqrt(2π)) ] * exp( -(x - μ)² / (2σ²) )
Ultimately, we will have P(xk | Ci) = F(xk, μCi, σCi), where μCi and σCi are the mean and standard deviation of attribute k over the training instances of class Ci.
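As a quick illustration, the density can be computed with a few lines of Python. The function name F follows the notation above, and the mean and standard deviation are made-up values used purely for illustration.

```python
import math

def F(x, mu, sigma):
    """Normal density with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# e.g. P(age = 26 | Ci), assuming (purely for illustration) that the ages of
# class-Ci customers have mean 30 and standard deviation 8
print(F(26, mu=30, sigma=8))
```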
Now, we have all the values we need to use Bayes Theorem for each class Ci. Our predicted class will be the class achieving the highest probability P(X | Ci) * P(Ci).
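In practice, you rarely have to code this procedure by hand: libraries such as scikit-learn ship ready-made Naive Bayes estimators. The sketch below assumes scikit-learn is installed and uses its CategoricalNB estimator on a tiny toy dataset just to show the API; note that CategoricalNB applies Laplace smoothing (alpha = 1.0) by default, so its probability estimates differ slightly from the raw counts described above.

```python
# A minimal sketch using scikit-learn's CategoricalNB on a toy dataset.
# Categorical attribute values must first be encoded as integers.
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

X_train = [["Youth", "High", "No", "Fair"],
           ["Middle_aged", "High", "No", "Fair"],
           ["Senior", "Low", "Yes", "Excellent"]]
y_train = ["No", "Yes", "No"]

encoder = OrdinalEncoder()
model = CategoricalNB()  # Laplace smoothing with alpha=1.0 by default
model.fit(encoder.fit_transform(X_train), y_train)

# Predict the class of a customer with attribute values seen in training
print(model.predict(encoder.transform([["Youth", "High", "No", "Fair"]])))
```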
Example: Predictively Classifying Customers of a Bookstore
We have the following dataset from a bookstore:
| Age | Income | Student | Credit_Rating | Buys_Book |
| --- | --- | --- | --- | --- |
| Youth | High | No | Fair | No |
| Youth | High | No | Excellent | No |
| Middle_aged | High | No | Fair | Yes |
| Senior | Medium | No | Fair | Yes |
| Senior | Low | Yes | Fair | Yes |
| Senior | Low | Yes | Excellent | No |
| Middle_aged | Low | Yes | Excellent | Yes |
| Youth | Medium | No | Fair | No |
| Youth | Low | Yes | Fair | Yes |
| Senior | Medium | Yes | Fair | Yes |
| Youth | Medium | Yes | Excellent | Yes |
| Middle_aged | Medium | No | Excellent | Yes |
| Middle_aged | High | Yes | Fair | Yes |
| Senior | Medium | No | Excellent | No |
We have attributes like age, income, student, and credit rating. Our class, buys_book, has two outcomes: Yes or No.
Our goal is to classify based on the following attributes:
X = {age = youth, student = yes, income = medium, credit_rating = fair}.
As we showed earlier, to maximize P(Ci | X), we need to maximize [ P(X | Ci) * P(Ci) ] for i = 1 and i = 2.
Hence, P(buys_book = yes) = 9/14 = 0.643
P(buys_book = no) = 5/14 = 0.357
P(age = youth | buys_book = yes) = 2/9 = 0.222
P(age = youth | buys_book = no) = 3/5 = 0.600
P(income = medium | buys_book = yes) = 4/9 = 0.444
P(income = medium | buys_book = no) = 2/5 = 0.400
P(student = yes | buys_book = yes) = 6/9 = 0.667
P(student = yes | buys_book = no) = 1/5 = 0.200
P(credit_rating = fair | buys_book = yes) = 6/9 = 0.667
P(credit_rating = fair | buys_book = no) = 2/5 = 0.400
Using the above-calculated probabilities, we have
P(X | buys_book = yes) = 0.222 x 0.444 x 0.667 x 0.667 = 0.044
Similarly,
P(X | buys_book = no) = 0.600 x 0.400 x 0.200 x 0.400 = 0.019
Which class Ci gives the maximum P(X | Ci) * P(Ci)? We compute:
P(X | buys_book = yes)* P(buys_book = yes) = 0.044 x 0.643 = 0.028
P(X | buys_book = no)* P(buys_book = no) = 0.019 x 0.357 = 0.007
Comparing the above two, since 0.028 > 0.007, the Naive Bayes Classifier predicts that the customer with the above-mentioned attributes will buy a book.
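As a cross-check, the same calculation can be reproduced in a few lines of Python by counting rows of the dataset (a sketch written for this article, not code from an established library):

```python
# Reproduce the hand calculation above by counting rows of the bookstore data.
dataset = [
    # (age, income, student, credit_rating, buys_book)
    ("Youth", "High", "No", "Fair", "No"),
    ("Youth", "High", "No", "Excellent", "No"),
    ("Middle_aged", "High", "No", "Fair", "Yes"),
    ("Senior", "Medium", "No", "Fair", "Yes"),
    ("Senior", "Low", "Yes", "Fair", "Yes"),
    ("Senior", "Low", "Yes", "Excellent", "No"),
    ("Middle_aged", "Low", "Yes", "Excellent", "Yes"),
    ("Youth", "Medium", "No", "Fair", "No"),
    ("Youth", "Low", "Yes", "Fair", "Yes"),
    ("Senior", "Medium", "Yes", "Fair", "Yes"),
    ("Youth", "Medium", "Yes", "Excellent", "Yes"),
    ("Middle_aged", "Medium", "No", "Excellent", "Yes"),
    ("Middle_aged", "High", "Yes", "Fair", "Yes"),
    ("Senior", "Medium", "No", "Excellent", "No"),
]

x_new = ("Youth", "Medium", "Yes", "Fair")  # age, income, student, credit_rating

def score(label):
    """Return P(X | Ci) * P(Ci) for class Ci = label, estimated by counting."""
    rows = [r for r in dataset if r[-1] == label]
    prior = len(rows) / len(dataset)                       # P(Ci)
    likelihood = 1.0
    for k, value in enumerate(x_new):                      # product of P(xk | Ci)
        likelihood *= sum(1 for r in rows if r[k] == value) / len(rows)
    return prior * likelihood

scores = {label: score(label) for label in ("Yes", "No")}
print(scores)                          # approx {'Yes': 0.028, 'No': 0.007}
print(max(scores, key=scores.get))     # predicted class: Yes
```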
Check out: Machine Learning Project Ideas & Topics
Is the Bayesian Classifier a Good Method?
Algorithms based on Bayes Theorem in machine learning provide results comparable to other algorithms, and Bayesian classifiers are generally considered simple, high-accuracy methods. However, keep in mind that Bayesian classifiers are particularly appropriate where the assumption of class-conditional independence actually holds, which is not the case for every dataset. Another practical concern is that acquiring all the required probability data may not always be feasible.
Conclusion
Bayes Theorem has many applications in machine learning, particularly in classification-based problems. Applying this family of algorithms in machine learning involves familiarity with terms such as prior probability and posterior probability. In this article, we discussed the basics of the Bayes Theorem, its use in machine learning problems, and worked through a classification example.
Since Bayes Theorem forms a crucial part of classification-based algorithms in Machine Learning, you can learn more about upGrad’s Advanced Certificate Programme in Machine Learning & NLP. This course has been crafted keeping in mind various kinds of students interested in Machine Learning, offering 1-1 mentorship and much more.
Why do we use Bayes theorem in Machine Learning?
The Bayes Theorem is a method for calculating conditional probabilities, i.e., the likelihood of one event occurring given that another has already occurred. A conditional probability can lead to more accurate outcomes by including extra conditions – in other words, more data. Accurate estimates and probabilities in Machine Learning therefore rely on conditional probabilities. Given the field's increasing prevalence across a wide range of domains, it's important to understand the role of algorithms and approaches like Bayes Theorem in Machine Learning.
Is Bayesian Classifier a good choice?
In machine learning, algorithms based on Bayes Theorem produce results comparable to those of other methods, and Bayesian classifiers are widely regarded as simple, high-accuracy approaches. However, it's important to keep in mind that Bayesian classifiers work best when the assumption of class-conditional independence holds, which is not true in all circumstances. Another consideration is that obtaining all of the required probability data may not always be possible.
How can Bayes theorem be applied practically?
The Bayes theorem calculates the probability of an event based on new evidence that is, or could be, related to it. The method can also be used to see how hypothetical new information affects the probability of an event, assuming the new information is true. Take, for example, a single card drawn from a deck of 52 cards. Since the deck contains four kings, the probability that the card is a king is 4 divided by 52, or 1/13 – roughly 7.69 percent. Now suppose it's revealed that the chosen card is a face card. Because there are 12 face cards in a deck and 4 of them are kings, the probability that the picked card is a king becomes 4 divided by 12, or roughly 33.3 percent.
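The same arithmetic can be checked quickly with exact fractions (a small illustrative sketch, not part of the original answer):

```python
from fractions import Fraction

p_king = Fraction(4, 52)              # 4 kings in a 52-card deck
p_king_given_face = Fraction(4, 12)   # 12 face cards, 4 of which are kings

print(p_king, float(p_king))                          # 1/13 ≈ 0.0769
print(p_king_given_face, float(p_king_given_face))    # 1/3 ≈ 0.3333
```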
