Naive Bayes is a machine learning algorithm we use to solve classification problems. It is based on Bayes' Theorem. It is one of the simplest yet most powerful ML algorithms in use and finds applications in many industries.
Suppose you have to solve a classification problem, have created the features, and generated the hypothesis, but your superiors want to see the model. You have numerous data points (lakhs, i.e. hundreds of thousands) and many variables in the training dataset. The best solution for this situation would be to use the Naive Bayes classifier, which is considerably faster than other classification algorithms.
In this article, we’ll discuss this algorithm in detail and find out how it works. We’ll also discuss its advantages and disadvantages along with its real-world applications to understand how essential this algorithm is.
Let’s get started:
Naive Bayes Explained
Naive Bayes uses the Bayes’ Theorem and assumes that all predictors are independent. In other words, this classifier assumes that the presence of one particular feature in a class doesn’t affect the presence of another one.
Here's an example: you'd consider a fruit to be an orange if it is round, orange in colour, and around 3.5 inches in diameter. Even if these features depend on one another in reality, each of them contributes independently to your conclusion that this particular fruit is an orange. That's why this algorithm has 'Naive' in its name.
Building a Naive Bayes model is quite simple and helps you work with vast datasets. Moreover, this simple model is known to outperform many advanced classification techniques.
Here’s the equation for Naive Bayes:
P (c|x) = P(x|c) P(c) / P(x)
P(c|x) ∝ P(x1 | c) × P(x2 | c) × … × P(xn | c) × P(c)
Here, P(c|x) is the posterior probability of the class (c) given the predictor (x). P(c) is the prior probability of the class, P(x) is the prior probability of the predictor, and P(x|c) is the likelihood, i.e. the probability of the predictor given the class (c).
Apart from considering the independence of every feature, Naive Bayes also assumes that they contribute equally. This is an important point to remember.
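To make this concrete, here is a minimal Python sketch of that product of probabilities for a single class. The function name and probability values below are made up purely for illustration:

```python
# Minimal sketch of the Naive Bayes numerator for one class.
# We assume the class prior P(c) and each P(x_i | c) are already known.

def posterior_score(prior, likelihoods):
    """Return P(c) * P(x1|c) * ... * P(xn|c). The denominator P(x)
    is identical for every class, so it can be ignored when we only
    want to compare classes."""
    score = prior
    for p in likelihoods:
        score *= p
    return score

# Hypothetical values: P(c) = 0.5, P(x1|c) = 0.6, P(x2|c) = 0.2
print(posterior_score(0.5, [0.6, 0.2]))  # ≈ 0.06
```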
How does Naive Bayes Work?
To understand how Naive Bayes works, we should discuss an example.
Suppose we want to find stolen cars and have the following dataset:

| Serial No. | Color | Type | Origin | Was it Stolen? |
|---|---|---|---|---|
| 1 | Red | Sports | Domestic | Yes |
| 2 | Red | Sports | Domestic | No |
| 3 | Red | Sports | Domestic | Yes |
| 4 | Yellow | Sports | Domestic | No |
| 5 | Yellow | Sports | Imported | Yes |
| 6 | Yellow | SUV | Imported | No |
| 7 | Yellow | SUV | Imported | Yes |
| 8 | Yellow | SUV | Domestic | No |
| 9 | Red | SUV | Imported | No |
| 10 | Red | Sports | Imported | Yes |
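If you'd like to follow along in code, the same toy dataset can be written out as a pandas DataFrame. This is just a convenience sketch (it assumes pandas is installed); the column names mirror the table above:

```python
import pandas as pd

# The stolen-cars toy dataset, copied row by row from the table above.
cars = pd.DataFrame({
    "Color":  ["Red", "Red", "Red", "Yellow", "Yellow",
               "Yellow", "Yellow", "Yellow", "Red", "Red"],
    "Type":   ["Sports", "Sports", "Sports", "Sports", "Sports",
               "SUV", "SUV", "SUV", "SUV", "Sports"],
    "Origin": ["Domestic", "Domestic", "Domestic", "Domestic", "Imported",
               "Imported", "Imported", "Domestic", "Imported", "Imported"],
    "Stolen": ["Yes", "No", "Yes", "No", "Yes",
               "No", "Yes", "No", "No", "Yes"],
})
print(cars)
```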
For our dataset, keep in mind the assumptions the algorithm makes:
- It assumes that every feature is independent. For example, the colour ‘Yellow’ of a car has nothing to do with its Origin or Type.
- It gives every feature the same level of importance. For example, knowing only the Color and Origin is not enough to predict the outcome correctly. That's why every feature is equally important and contributes equally to the result.
Now, with our dataset, we have to classify whether a car gets stolen, based on its features. Each row is an individual entry, and the columns represent the features of each car. In the first row, we have a stolen Red Sports car of Domestic origin. We'll find out whether thieves would steal a Red Domestic SUV (our dataset doesn't have an entry for a Red Domestic SUV).
We can rewrite the Bayes Theorem for our example as:
P(y | X) = [P(X | y) P(y)] / P(X)
Here, y stands for the class variable (Was it Stolen?), showing whether the thieves stole the car given the conditions, and X stands for the features.
X = (x1, x2, x3, …, xn)
Here, x1, x2,…, xn stand for the features. We can map them to be Type, Origin, and Color. Now, we’ll replace X and expand the chain rule to get the following:
P(y | x1, …, xn) = [P(x1 | y) P(x2 | y) … P(xn | y) P(y)] / [P(x1) P(x2) … P(xn)]
You can get the values for each term by using the dataset and plugging them into the equation. The denominator stays the same for every entry in the dataset, so we can drop it and express the relationship as a proportionality:
P(y | x1, …, xn) ∝ P(y) × P(x1 | y) × P(x2 | y) × … × P(xn | y)
In our example, y only has two outcomes, yes or no.
y = argmax_y [P(y) × P(x1 | y) × … × P(xn | y)]
We can create a Frequency Table for every feature, convert each Frequency Table into a Likelihood Table, and then use the Naive Bayes equation to find the posterior probability for every class. The predicted class is the one with the highest posterior probability. Here are the Frequency and Likelihood Tables:
Frequency Table of Color:

| Color | Was it Stolen (Yes) | Was it Stolen (No) |
|---|---|---|
| Red | 3 | 2 |
| Yellow | 2 | 3 |

Likelihood Table of Color:

| Color | Was it Stolen [P(Yes)] | Was it Stolen [P(No)] |
|---|---|---|
| Red | 3/5 | 2/5 |
| Yellow | 2/5 | 3/5 |

Frequency Table of Type:

| Type | Was it Stolen (Yes) | Was it Stolen (No) |
|---|---|---|
| Sports | 4 | 2 |
| SUV | 1 | 3 |

Likelihood Table of Type:

| Type | Was it Stolen [P(Yes)] | Was it Stolen [P(No)] |
|---|---|---|
| Sports | 4/5 | 2/5 |
| SUV | 1/5 | 3/5 |

Frequency Table of Origin:

| Origin | Was it Stolen (Yes) | Was it Stolen (No) |
|---|---|---|
| Domestic | 2 | 3 |
| Imported | 3 | 2 |

Likelihood Table of Origin:

| Origin | Was it Stolen [P(Yes)] | Was it Stolen [P(No)] |
|---|---|---|
| Domestic | 2/5 | 3/5 |
| Imported | 3/5 | 2/5 |
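The tables above can also be computed programmatically. Here is a rough sketch that reuses the hypothetical `cars` DataFrame from the earlier snippet; `pandas.crosstab` does the counting:

```python
# Build the frequency and likelihood tables for each feature,
# reusing the `cars` DataFrame defined earlier.
for feature in ["Color", "Type", "Origin"]:
    freq = pd.crosstab(cars[feature], cars["Stolen"])  # raw counts per class
    likelihood = freq.div(freq.sum(axis=0), axis=1)    # divide each column by its class total
    print(f"Frequency table of {feature}:\n{freq}\n")
    print(f"Likelihood table of {feature}:\n{likelihood}\n")
```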
Our problem has 3 predictors for X. Since 5 of the 10 cars in the dataset were stolen, the priors are P(Yes) = P(No) = 5/10 = 1/2. According to the equations we saw previously, the posterior probability P(Yes | X) is proportional to:
P(Yes | X) ∝ P(Red | Yes) × P(SUV | Yes) × P(Domestic | Yes) × P(Yes)
= 3/5 × 1/5 × 2/5 × 1/2
= 0.024
Similarly, P(No | X) is proportional to:
P(No | X) ∝ P(Red | No) × P(SUV | No) × P(Domestic | No) × P(No)
= 2/5 × 3/5 × 3/5 × 1/2
= 0.072
So, as the posterior probability P(No | X) is higher than the posterior probability P(Yes | X), our Red Domestic SUV will have ‘No’ in the ‘Was it stolen?’ section.
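The same calculation is easy to reproduce by hand in Python. This is a manual sketch (not a library implementation), again assuming the `cars` DataFrame from the earlier snippet:

```python
# Manual Naive Bayes prediction for a Red Domestic SUV.
query = {"Color": "Red", "Type": "SUV", "Origin": "Domestic"}

scores = {}
for label, group in cars.groupby("Stolen"):
    prior = len(group) / len(cars)                 # P(Yes) = P(No) = 1/2
    score = prior
    for feature, value in query.items():
        score *= (group[feature] == value).mean()  # P(value | label)
    scores[label] = score

print(scores)                        # ≈ {'No': 0.072, 'Yes': 0.024}
print(max(scores, key=scores.get))   # 'No' -> the car is predicted not stolen
```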
This example should have shown you how the Naive Bayes classifier works. To get a better picture of Naive Bayes, let's now discuss its advantages and disadvantages:
Advantages and Disadvantages of Naive Bayes
Advantages
- This algorithm works quickly and can save a lot of time.
- Naive Bayes is suitable for solving multi-class prediction problems.
- If its assumption of the independence of features holds true, it can perform better than other models and requires much less training data.
- Naive Bayes is better suited for categorical input variables than numerical variables.
Disadvantages
- Naive Bayes assumes that all predictors (or features) are independent, which rarely happens in real life. This limits the applicability of the algorithm in real-world use cases.
- This algorithm faces the 'zero-frequency problem': it assigns zero probability to a categorical variable whose category appears in the test dataset but wasn't present in the training dataset. You should use a smoothing technique, such as Laplace smoothing, to overcome this issue (see the sketch after this list).
- Its estimations can be wrong in some cases, so you shouldn’t take its probability outputs very seriously.
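For the zero-frequency problem mentioned in the list above, Laplace (additive) smoothing is the usual fix. As a rough sketch, scikit-learn's CategoricalNB exposes it through the alpha parameter; the two training rows below are made up just to show the mechanics:

```python
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

# Two made-up training rows. "SUV" never occurs with class "Yes",
# so without smoothing P(SUV | Yes) would be zero and wipe out the product.
X = [["Red", "Sports", "Domestic"],
     ["Yellow", "SUV", "Imported"]]
y = ["Yes", "No"]

encoder = OrdinalEncoder()
X_encoded = encoder.fit_transform(X)

# alpha=1.0 adds one pseudo-count to every (category, class) pair,
# i.e. Laplace smoothing, so no conditional probability is exactly zero.
model = CategoricalNB(alpha=1.0)
model.fit(X_encoded, y)

query = encoder.transform([["Red", "SUV", "Domestic"]])
print(model.predict_proba(query))  # both classes get a non-zero probability
```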
Applications of Naive Bayes Explained
Here are some areas where this algorithm finds applications:
Text Classification
Most of the time, Naive Bayes finds use in text classification due to its independence assumption and high performance in solving multi-class problems. It enjoys a higher rate of success than other algorithms due to its speed and efficiency.
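As a rough illustration of what such a text classifier looks like in practice, here is a scikit-learn sketch. The tiny corpus and labels are entirely made up:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny made-up corpus, just to show the shape of a Naive Bayes text classifier.
texts  = ["free offer win money now", "meeting schedule for monday",
          "win a free prize today", "project update and meeting notes"]
labels = ["spam", "ham", "spam", "ham"]

# Bag-of-words counts feed a multinomial Naive Bayes model.
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(texts, labels)

print(classifier.predict(["free money prize"]))        # likely ['spam']
print(classifier.predict(["monday project meeting"]))  # likely ['ham']
```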
Sentiment Analysis
One of the most prominent areas of machine learning is sentiment analysis, and this algorithm is quite useful there as well. Sentiment analysis focuses on identifying whether the customers think positively or negatively about a certain topic (product or service).
Recommender Systems
With the help of Collaborative Filtering, a Naive Bayes classifier can build a powerful recommender system that predicts whether a user will like a particular product (or resource). Amazon, Netflix, and Flipkart are prominent companies that use recommender systems to suggest products to their customers.
Learn More Machine Learning Algorithms
Naive Bayes is a simple and effective machine learning algorithm for solving multi-class problems. It finds uses in many prominent areas of machine learning applications such as sentiment analysis and text classification.
What is the Naive Bayes algorithm?
Naive Bayes is a machine learning technique used to handle classification problems. It is based on Bayes' Theorem and is one of the most basic yet powerful machine learning algorithms in use, with applications in a variety of industries. Let's say you're working on a classification problem and have already established the features and the hypothesis, but your boss wants to see the model. You have a large number of data points (thousands of data points) and a large number of variables with which to train the model. The Naive Bayes classifier, which is much faster than other classification algorithms, would be the best option in this circumstance.
What are some advantages and disadvantages of Naive Bayes?
Naive Bayes is a good choice for multi-class prediction problems. If its assumption of feature independence holds true, it can outperform other models while using far less training data. It is also better suited to categorical input variables than numerical ones.
However, Naive Bayes assumes that all predictors (or features) are independent, which is rarely the case in real life. This limits the algorithm's usability in real-world scenarios. You also shouldn't take its probability outputs too seriously, because its estimations can be off in some instances.
What are some real-world applications of Naive Bayes?
Because of its independence assumption and high performance in addressing multi-class problems, Naive Bayes is frequently used in text classification. Sentiment analysis is another popular application of machine learning where this technique helps: the goal of sentiment analysis is to determine whether customers have favorable or negative feelings about a particular topic (product or service). With Collaborative Filtering, a Naive Bayes classifier can also create a recommender system that predicts whether or not a user will enjoy a given product (or resource).
