Introduction
Decision tree learning is a mainstream data mining technique and a form of supervised machine learning. A decision tree is a diagram that people use to represent statistical probabilities or to map out possible courses of action and their results. A decision tree example makes the concept much clearer.
Each branch in a decision tree diagram shows a possible decision, outcome, or reaction, and the branches at the end of the tree display the predictions or results. Decision trees are usually used to solve problems that are too complicated to work through manually. Let us understand this in detail with the help of a few decision tree examples.
A decision tree is a popular and powerful tool used for the prediction and classification of data or events. It is like a flowchart, but with the structure of a tree. Each internal node of the tree represents a test or question on an attribute, each branch represents a possible outcome of that question, and each terminal node, also called a leaf node, denotes a class label.
In a decision tree, we have several predictor variables, and based on these predictor variables we try to predict the so-called response variable.
Related Read: Decision Tree Classification: Everything You Need to Know
Decision Tree in ML
By representing a series of steps as a sequence, a decision tree becomes an easy and effective way to understand and visualize the possible decision options and their potential outcomes. Decision trees are also helpful for identifying the available options and weighing the rewards and risks of each course of action.
Decision trees are deployed in many small-scale as well as large-scale organizations as a decision-support tool. Since a decision tree is a structured model, readers can follow the chart and analyse how and why a particular option leads to a corresponding decision. A decision tree example also allows the reader to see multiple possible solutions to a single problem and to understand how different events and pieces of data relate to the final decision.
Each outcome in the tree can have a reward and a risk weight assigned to it, so every final result comes with its possible benefits and drawbacks. The tree itself can be kept as short or grown as long as needed, depending on the event and the amount of data. Let us take a simple decision tree example to understand this better.
Consider the following data, which records, for each person, whether they are a drinker, whether they are a smoker, their weight, and the age at which they died.
| Name | Drinker | Smoker | Weight (kg) | Age at death |
| --- | --- | --- | --- | --- |
| Sam | Yes | Yes | 120 | 44 |
| Mary | No | No | 70 | 96 |
| Jonas | Yes | No | 72 | 88 |
| Taylor | Yes | Yes | 55 | 52 |
| Joe | No | Yes | 94 | 56 |
| Harry | No | No | 62 | 93 |
Let us try to predict whether a person will die at a younger or an older age. The characteristics drinker, smoker, and weight will act as the predictor variables, and age at death will be the response variable.
Let us label people who died before the age of 70 as “young” and people who died after the age of 70 as “old”. We will now predict this response variable from the predictor variables. Given below is a decision tree learned from the data.
The decision tree above shows that if a person is a smoker, they die young. If a person is not a smoker, then the next factor considered is whether the person is a drinker or not. If a person is not a smoker and not a drinker, the person dies old.
If a person is not a smoker and is a drinker, then the weight of the person is considered. If a person is not a smoker, is a drinker, and weighs below 90 kg, then the person dies old. And lastly, if a person is not a smoker, is a drinker, and weighs above 90 kg, then they die young.
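To make the structure concrete, here is a minimal Python sketch of the tree described above, written as plain if/else rules: each test is an internal node on an attribute and each return is a leaf with a class label. The function name is hypothetical; the tests, the 90 kg threshold, and the labels simply mirror the example.

```python
def predict_age_group(smoker: bool, drinker: bool, weight_kg: float) -> str:
    """Return 'young' or 'old' according to the example decision tree."""
    if smoker:                      # internal node: test on the Smoker attribute
        return "young"              # leaf node: class label
    if not drinker:                 # non-smoker branch: test on the Drinker attribute
        return "old"
    # non-smoker and drinker: test on the Weight attribute
    return "old" if weight_kg < 90 else "young"

print(predict_age_group(smoker=False, drinker=True, weight_kg=72))  # 'old' (e.g. Jonas)
```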
From the given data, let’s take Jonas as an example to check whether the decision tree classifies him correctly and predicts the response variable correctly. Jonas is not a smoker, is a drinker, and weighs under 90 kg. According to the decision tree, he should die old (age at death > 70). According to the data, he died at 88, which means the decision tree has classified this example correctly.
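If you would rather learn the tree from the table than hand-code it, a library such as scikit-learn can fit one directly. The sketch below assumes scikit-learn is installed; the yes/no columns are encoded as 1/0, and the young/old label uses the age-70 cut-off from above.

```python
from sklearn.tree import DecisionTreeClassifier

# Features: [drinker, smoker, weight]; drinker/smoker encoded as 1 = yes, 0 = no.
X = [
    [1, 1, 120],  # Sam
    [0, 0, 70],   # Mary
    [1, 0, 72],   # Jonas
    [1, 1, 55],   # Taylor
    [0, 1, 94],   # Joe
    [0, 0, 62],   # Harry
]
# Response: "young" if the person died before 70, otherwise "old".
ages = [44, 96, 88, 52, 56, 93]
y = ["young" if age < 70 else "old" for age in ages]

clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X, y)

# Jonas: drinker, non-smoker, 72 kg -> the tree should predict "old".
print(clf.predict([[1, 0, 72]]))  # expected: ['old']
```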
But have you ever wondered about the basic idea behind how a decision tree works? In a decision tree, the set of instances is split into subsets in such a way that the variation within each subset gets smaller. In other words, we want to reduce the entropy: with every split, the variation decreases and each subset becomes purer.
Let us consider a similar decision tree example. First, we check whether the person is a smoker or not.

Here, we are still uncertain about the non-smokers, so we split them further into drinkers and non-drinkers.

As the diagram below shows, we went from a node with high entropy and large variation down to smaller groups that we are far more certain about. In this manner, you can incrementally build any decision tree example.
Let us now construct a decision tree using the ID3 algorithm. The most important concept behind it is entropy. Entropy is nothing but the degree of uncertainty in the data, and it is given by:
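For a node whose instances fall into classes with proportions $p_i$, this is the standard Shannon entropy (in bits):

$$H = -\sum_{i} p_i \log_2 p_i$$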
(At times, it is also denoted by “E”)
If we apply this to the above example, it goes as follows:
Consider the case where the people are not yet split into any category. This is the worst-case scenario (highest entropy), because both classes contain the same number of people: the ratio here is 3:3, so the entropy is 1.
The smokers already form a pure group, so their entropy is 0. Among the non-smokers, the people who drink have a 1:1 ratio, so their entropy is 1 and this group needs a further split because of the uncertainty. The non-smokers who do not drink have a 2:0 ratio, and hence an entropy of 0.
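As a quick check, here is a small Python helper (the function name is just for illustration) that computes entropy from raw class counts and reproduces the values quoted above:

```python
from math import log2

def entropy(counts):
    """Entropy (in bits) of a class distribution given as raw counts."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

print(entropy([3, 3]))  # 1.0    -> the 3:3 split over all six people
print(entropy([1, 1]))  # 1.0    -> the 1:1 drinker group among the non-smokers
print(entropy([2, 0]))  # 0.0    -> the pure 2:0 non-drinker group
print(entropy([1, 3]))  # ~0.811 -> the non-smoker group as a whole, used below
```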
Now that we have computed the entropy for the different cases, we can calculate the weighted average entropy at each level of the tree.
For the first branch (all the people together), $E = \frac{6}{6} \times 1 = 1$.

For the Smoker class, $E = \frac{2}{6} \times 0 + \frac{4}{6} \times 0.811 \approx 0.54$.

For the Smoker and Drinker class, $E = \frac{2}{6} \times 0 + \frac{2}{6} \times 1 + \frac{2}{6} \times 0 \approx 0.33$.
The diagram below will help you quickly understand the above calculations.
Finally, the information gain:
| Class | Weighted entropy | Information gain from the next split |
| --- | --- | --- |
| People | 1 | 0.46 |
| Smoker | 0.54 | 0.21 |
| Smoker + Drinker | 0.33 | – |
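Putting it all together, the sketch below (helper names are illustrative) recomputes the weighted entropies and the information gains shown in the table:

```python
from math import log2

def entropy(counts):
    """Entropy (in bits) of a class distribution given as raw counts."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

def weighted_entropy(groups):
    """Weighted average entropy over several groups of class counts."""
    total = sum(sum(g) for g in groups)
    return sum((sum(g) / total) * entropy(g) for g in groups)

e_people = weighted_entropy([[3, 3]])                          # 1.0
e_smoker = weighted_entropy([[2, 0], [1, 3]])                  # ~0.54
e_smoker_drinker = weighted_entropy([[2, 0], [1, 1], [2, 0]])  # ~0.33

print(round(e_people - e_smoker, 2))          # 0.46 -> gain from splitting on Smoker
print(round(e_smoker - e_smoker_drinker, 2))  # 0.21 -> gain from then splitting on Drinker
```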
Also Read: Decision Tree Interview Questions & Answers
Conclusion
We have studied decision trees in depth, from the theory to a practical decision tree example, and we also constructed a decision tree using the ID3 algorithm. If you found this interesting, you might love to explore data science in more detail.
If you’re interested in learning more about decision trees and machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects, and job assistance with top firms.