If you’ve been studying data mining for some time, you must have heard of the term ‘Bayesian classification’. Do you wonder what it means and how important it is as a concept in data mining?
This article will answer these questions as you’ll explore what Bayesian classification in data mining is. Let’s begin:
What is Bayesian Classification?
During data mining, you’ll often find that the connection between the class variable and the attribute set is non-deterministic. This means you can’t predict the class label of a test record with absolute certainty, even when its attribute set is identical to that of some training examples.
This can happen because of noisy data or because of influencing factors that aren’t captured in the attribute set. Suppose you want to predict whether a person is at risk of heart disease based on their eating habits. While eating habits are a major factor in determining whether someone will suffer from heart problems, heart disease can also arise for other reasons, such as genetics or infection.
So an analysis that judges a person’s risk of heart disease from their eating habits alone would be flawed and could lead to incorrect conclusions.
Then the question arises, “How do you solve this problem in data mining?” The answer is Bayesian classification.
You can use Bayesian classification in data mining to tackle this issue and predict the probability that an event occurs. Bayesian classifiers are statistical classifiers that use Bayesian probability to predict class membership probabilities.
To understand the workings of Bayesian classification in data mining, you’ll have to start with the Bayes theorem.
The Bayes theorem is named after Thomas Bayes, who was the first to use conditional probability to devise a method that uses evidence to calculate limits on an unknown parameter.
Mathematically, the Bayes theorem looks like this:
P(A/B) = [P(B/A) × P(A)] / P(B)
Here, A and B represent the events and P(B) cannot be equal to zero.
P(B/A) is a conditional probability that explains the occurrence of event B when A is true. Similarly, P(A/B) is a conditional probability that explains the occurrence of event A when B is true.
P(B) and P(A) are the probabilities of observing B and A independently and they are called marginal probabilities.
In the Bayesian interpretation, probability measures a degree of belief. Bayes theorem connects the degree of belief in a hypothesis before considering the evidence to the degree of belief in that hypothesis after considering it.
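As a quick numeric sketch of the formula above (the heart-disease probabilities here are made-up illustrative numbers, not real data):

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes theorem: P(A/B) = [P(B/A) * P(A)] / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical example: A = person has heart disease, B = person has a poor diet.
p_a = 0.10          # prior: assume 10% of people have heart disease
p_b_given_a = 0.80  # assume 80% of heart-disease patients have a poor diet
p_b = 0.30          # assume 30% of all people have a poor diet

posterior = bayes(p_b_given_a, p_a, p_b)
print(round(posterior, 4))  # P(disease / poor diet) = 0.2667
```

Note how the evidence (a poor diet) raises the belief from the 10% prior to roughly 27%, because P(B/A)/P(B) is greater than 1.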
Suppose you have a coin. If you flip the coin once, you’ll get either heads or tails, and for a fair coin the probability of each outcome is 50%. However, if you flip the coin several times and observe the results, your degree of belief may increase, decrease or remain steady depending on what you see.
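The coin example can be made concrete with a Beta-Binomial update, a standard conjugate-prior construction (an assumption here, since the article doesn’t name a specific method): starting from a uniform Beta(1, 1) prior over the coin’s heads-probability, each observed flip updates the belief.

```python
def update_belief(alpha, beta, heads, tails):
    """Beta-Binomial update: posterior is Beta(alpha + heads, beta + tails)."""
    return alpha + heads, beta + tails

def mean_belief(alpha, beta):
    """Posterior mean estimate of the coin's heads-probability."""
    return alpha / (alpha + beta)

# Start with a uniform prior Beta(1, 1): heads and tails equally believable.
a, b = 1, 1
print(mean_belief(a, b))  # 0.5

# Observe 8 heads and 2 tails; belief in heads increases.
a, b = update_belief(a, b, heads=8, tails=2)
print(mean_belief(a, b))  # 0.75  (i.e. 9 / 12)
```

If the flips had come out 5 heads and 5 tails instead, the belief would stay at 0.5, matching the “remain steady” case described above.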
If you have proposition A and evidence B then:
P(A) is the prior degree of belief in A. P(A/B) is the posterior degree of belief after accounting for B. The quotient P(B/A)/P(B) represents the support that B provides for A.
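To connect this prior/posterior vocabulary back to classification, the sketch below is a minimal naive Bayes classifier for the heart-disease scenario. The naive Bayes model, the toy training data and the feature names are all illustrative assumptions, not something the article specifies:

```python
from collections import Counter, defaultdict

def train_naive_bayes(examples):
    """examples: list of (features_dict, label). Returns class priors and feature counts."""
    priors = Counter(label for _, label in examples)
    counts = defaultdict(Counter)  # counts[(feature, label)][value] = frequency
    for features, label in examples:
        for feat, value in features.items():
            counts[(feat, label)][value] += 1
    return priors, counts

def predict(priors, counts, features):
    """Pick the label maximising P(label) * product of P(feature=value / label)."""
    total = sum(priors.values())
    best, best_score = None, -1.0
    for label, prior_count in priors.items():
        score = prior_count / total  # the prior P(label)
        for feat, value in features.items():
            seen = counts[(feat, label)]
            # Laplace smoothing (assuming binary features) so unseen values
            # don't zero out the whole product.
            score *= (seen[value] + 1) / (sum(seen.values()) + 2)
        if score > best_score:
            best, best_score = label, score
    return best

# Toy, made-up training data: diet and exercise vs. heart-disease risk.
data = [
    ({"diet": "poor", "exercise": "no"}, "at_risk"),
    ({"diet": "poor", "exercise": "no"}, "at_risk"),
    ({"diet": "poor", "exercise": "yes"}, "not_at_risk"),
    ({"diet": "good", "exercise": "yes"}, "not_at_risk"),
    ({"diet": "good", "exercise": "yes"}, "not_at_risk"),
    ({"diet": "good", "exercise": "no"}, "at_risk"),
]

priors, counts = train_naive_bayes(data)
print(predict(priors, counts, {"diet": "poor", "exercise": "no"}))  # at_risk
```

The classifier combines the prior P(label) with the per-feature likelihoods, exactly the structure of Bayes theorem; the “naive” part is the assumption that features are conditionally independent given the label.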
You can derive the Bayes theorem from the definition of conditional probability:
P(A/B) = P(A ∩ B) / P(B), if P(B) ≠ 0

P(B/A) = P(A ∩ B) / P(A), if P(A) ≠ 0

Here, P(A ∩ B) is the joint probability of both A and B being true. Since P(B ∩ A) = P(A ∩ B):

P(A ∩ B) = P(A/B) × P(B) = P(B/A) × P(A)

Therefore:

P(A/B) = [P(B/A) × P(A)] / P(B), if P(B) ≠ 0
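A quick numeric check of this derivation, using arbitrary illustrative probabilities:

```python
# Arbitrary, made-up probabilities for two events A and B.
p_a_and_b = 0.12   # joint probability P(A and B)
p_a = 0.30         # P(A)
p_b = 0.40         # P(B)

# Conditional probabilities from the definition:
p_a_given_b = p_a_and_b / p_b   # P(A/B) = P(A and B) / P(B)
p_b_given_a = p_a_and_b / p_a   # P(B/A) = P(A and B) / P(A)

# Both routes back to the joint probability agree:
assert abs(p_a_given_b * p_b - p_b_given_a * p_a) < 1e-12

# ...and substituting one into the other gives the Bayes theorem:
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
print(round(p_a_given_b, 10))  # 0.3
```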
We use Bayesian networks (also known as belief networks) to represent uncertainty with DAGs (Directed Acyclic Graphs). A Directed Acyclic Graph contains a group of nodes and directed links, where the links denote dependencies between the respective nodes.
Every node in a Directed Acyclic Graph represents a random variable. The variables can be continuous or discrete and may correspond to actual attributes of the data.
A Bayesian network enables class conditional independencies to be defined between subsets of variables. It gives you a graphical model of the relationships, on which learning and inference can be performed.
Apart from the DAG, a Bayesian network also has a set of conditional probability tables (CPTs), one for each node.
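To make the DAG-plus-CPT idea concrete, here is a minimal sketch of the classic rain/sprinkler/wet-grass network (a standard textbook example, not taken from this article), with the CPTs stored as plain dictionaries and the joint probability computed via the chain rule over the DAG:

```python
# DAG structure: Rain -> Sprinkler, and both Rain and Sprinkler -> WetGrass.
# All probabilities are illustrative textbook-style numbers.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {  # CPT for P(Sprinkler / Rain)
    True:  {True: 0.01, False: 0.99},
    False: {True: 0.40, False: 0.60},
}
P_wet = {  # CPT for P(WetGrass / Sprinkler, Rain)
    (True, True):   {True: 0.99, False: 0.01},
    (True, False):  {True: 0.90, False: 0.10},
    (False, True):  {True: 0.80, False: 0.20},
    (False, False): {True: 0.00, False: 1.00},
}

def joint(rain, sprinkler, wet):
    """Chain rule over the DAG: P(R, S, W) = P(R) * P(S/R) * P(W/S, R)."""
    return P_rain[rain] * P_sprinkler[rain][sprinkler] * P_wet[(sprinkler, rain)][wet]

# Posterior P(Rain / WetGrass) by summing the joint over the sprinkler variable.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(round(num / den, 4))  # P(Rain / grass is wet) = 0.3577
```

Each node carries one CPT conditioned only on its parents in the DAG, which is exactly the conditional-independence structure described above.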
By now, you should be familiar with the basics of Bayesian classification in data mining. Understanding the theorem behind these techniques is vital for applying them correctly.
What do you think of Bayesian classification in data mining? Have you tried implementing it? Share your answers in the comments. We’d love to hear from you.
If you are curious to learn about data science, check out IIIT-B & upGrad’s PG Diploma in Data Science which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 with industry mentors, 400+ hours of learning and job assistance with top firms.