Everyone has to make decisions in life. These decisions are situation-dependent, and making the right one helps you face a situation in the best manner, solving the problem in the most straightforward way. In childhood, most of your decisions revolve around what you eat and things related to school.
As you grow up, your decisions start having more serious implications, not only for your own life but for the lives of others too. At some point, you will be making decisions concerning your career or business. This analogy introduces the concept of a decision tree in machine learning.
What is a decision tree?
To start with, a decision tree is a predictive model: a tool that supports decisions. It delivers inferences by using representations that follow a tree-like structure. The primary objective of this machine learning model is to consider certain attributes of a target and then make decisions on the basis of those attributes.
Most of the decisions in a decision tree follow conditional statements – if and else. The deeper the tree, the more complex its decision rules and the more closely it fits the data. It is one of the most popular supervised learning models in machine learning and is used in a number of areas. You can picture it as a flowchart, designed with algorithmic techniques to ensure that the splitting is done according to conditions.
The structure of this flowchart is quite simple. A root node serves as the foundation of the model. Internal nodes represent features or tests, and branches represent the outcomes of those tests. A leaf node represents a group of observations with similar values: the final decision reached once all the related attributes have been evaluated.
Decision trees primarily find their use in classification and regression problems. They are used to create automated predictive models that serve applications not only in machine learning but also in statistics, data science, and data mining, among other areas. These tree-based structures deliver predictive models that are both accurate and easy to interpret.
Unlike linear models, which suit only a limited class of problems, models based on decision trees can also map non-linear relationships. No wonder decision trees are so popular. One very important reason is how easy the final model is to understand: it can quite clearly describe what lay behind a prediction. Decision trees are also the basis of more advanced ensemble methods, including gradient boosting, bagging, and random forests.
How do you define a decision tree?
Now that we have developed a basic understanding of the concept, let us define it. A decision tree is a supervised machine learning algorithm that can be used to solve both classification and regression problems. Let us see how it is used for classification.
Let us assume we are working with a data set of points in a 2D plane. The plane can be divided into different regions such that the points in each region are assigned to the same class. Each split is denoted by a unique character. The tree we are working with here is a binary tree.
Now, several properties of this decision tree have no prior representation and are instead learned from the training data provided to us. These include the number of nodes the tree will have, where its edges sit, and its overall structure. We won't be building the tree from scratch here; we will assume the tree already exists and move forward from there.
Now, how can we classify new input points? We just have to move down the tree. While traversing, we ask a question about the data point at every node we reach. For instance, the question at the root node tells us whether to branch left or right. The general rule is that if the condition is met, we branch left; if it isn't, we branch right. When we arrive at a leaf node, we know which class the input point should be assigned.
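The traversal just described can be sketched in a few lines of Python. The node structure and the branch-left-on-true convention below are illustrative assumptions, not taken from any particular library:

```python
class Node:
    """An internal node asks a question; a leaf stores a class label."""
    def __init__(self, question=None, left=None, right=None, label=None):
        self.question = question  # callable returning True/False
        self.left = left          # branch taken when the question is true
        self.right = right        # branch taken when it is false
        self.label = label        # set only on leaf nodes

def classify(node, point):
    """Walk down the tree, asking the question at each internal node."""
    while node.label is None:
        node = node.left if node.question(point) else node.right
    return node.label

# Example 2D tree: first split on the x coordinate, then on y.
tree = Node(
    question=lambda p: p[0] < 5,
    left=Node(label="A"),
    right=Node(
        question=lambda p: p[1] < 3,
        left=Node(label="B"),
        right=Node(label="C"),
    ),
)

print(classify(tree, (2, 7)))  # x < 5, so we branch left -> "A"
print(classify(tree, (8, 1)))  # x >= 5 and y < 3 -> "B"
```

Note that the second level splits on a different coordinate here, but as discussed below, nothing forces a tree to alternate between dimensions.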
When it comes to how a decision tree is drawn, a few things should never be forgotten. There is no rule that says we have to alternate between the two coordinates while traversing the tree; we can keep splitting on a single feature or dimension. We should also keep in mind that decision trees can be used with a data set of any dimension. We took 2D data in our example, but that doesn't mean decision trees are only for two-dimensional data sets.
Have you ever played Twenty Questions? It is quite similar to how decision trees work. The ultimate objective of the game is to identify the object the answerer is thinking of, and each question can only be answered with a yes or a no.
As you move ahead in the game, the previous answers tell you which specific questions to ask next in order to get to the right answer before the game ends. A decision tree is that series of questions: each answer guides you to a more relevant next question until you reach the ultimate answer.
Do you remember how a company's voicemail system directs you to the person you want to speak to? You first talk to a computerized assistant, then press a series of buttons on your phone and enter a few details about your account before you finally reach the right person. This can be a troublesome experience, but it is also how many companies use decision trees to route customers to the right department or person.
How does a decision tree work?
Thinking about how to create a perfect decision tree? As we alluded to earlier, decision trees are a class of algorithms used to solve classification and regression problems in machine learning, and they can handle both categorical and continuous variables.
The algorithm moves forward in a simple way: it partitions the sample data into different sets, with each set grouping records that share the same attributes. Decision trees employ a number of algorithms for different purposes: identifying the split, finding the most important variables, and choosing the value that produces the best further subdivisions.
Typically, the workflow of a decision tree involves dividing the data into training and test sets, applying the algorithm, and evaluating the model's performance. Let's understand how it works with a very simple example. Suppose we want to check whether a person is right for a job. This question will be the root of the tree.
Now we move on to the features or attributes, which will constitute the internal nodes. Decisions taken on those attributes form the branches of the tree. Let us make another assumption here: the parameter for considering a person right for the job is five or more years of experience. The first split will take place on this parameter that we have just set.
We need more parameters for further splitting. These could be whether the candidate belongs to a certain age group, holds a certain degree, and so on. The results are depicted by the leaves of the tree. Leaves never split; they depict the final decisions. This tree will help you decide whether a candidate is right for the job or not.
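Since each internal node is just a conditional, the hiring example above can be written out as explicit if/else rules. The specific thresholds and the age and degree checks below are invented for illustration:

```python
def is_right_for_job(experience_years, has_degree, age):
    """Toy hiring decision tree written as nested if/else rules."""
    # Root split: the first parameter we set was years of experience.
    if experience_years >= 5:
        # Further splits refine the decision (illustrative criteria).
        if has_degree:
            return "hire"
        return "interview again" if age < 40 else "reject"
    # Leaf: fewer than 5 years of experience.
    return "reject"

print(is_right_for_job(7, True, 30))   # experienced and degreed -> "hire"
print(is_right_for_job(2, True, 30))   # under 5 years -> "reject"
```

Each `if` corresponds to an internal node, each branch of the conditional to a branch of the tree, and each `return` to a leaf.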
As already mentioned, a decision tree has its own peculiar representation that enables it to solve a problem for us. It has roots, internal nodes, branches, and leaves, each serving a specific purpose or doing a specific job. These steps will help you make tree representation:
- Place the best attribute for splitting the data at the root of the tree
- Split the sample data into subsets using that attribute, so that each new subset contains records with the same value for the attribute
- Repeat the above two steps until you have the leaves for every branch in your decision tree
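The steps above can be sketched as a small recursive builder for categorical data. The function names and the toy hiring data are assumptions for illustration; a real implementation would also score the attributes to pick the best one at each step rather than taking them in order:

```python
from collections import Counter

def build(rows, labels, attrs):
    """Recursively split `rows` (a list of dicts) on the given attributes."""
    # Stop when the subset is pure (one class) or no attributes remain;
    # the leaf holds the majority class of the subset.
    if len(set(labels)) == 1 or not attrs:
        return Counter(labels).most_common(1)[0][0]
    attr = attrs[0]  # a real CART would pick the *best* attribute here
    node = {"attr": attr, "children": {}}
    for value in set(r[attr] for r in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[attr] == value]
        sub_rows, sub_labels = zip(*sub)
        node["children"][value] = build(list(sub_rows), list(sub_labels),
                                        attrs[1:])
    return node

rows = [{"experience": "high"}, {"experience": "low"},
        {"experience": "high"}]
labels = ["hire", "reject", "hire"]
tree = build(rows, labels, ["experience"])
print(tree)  # splits on "experience"; each child is already a pure leaf
```

Each recursive call repeats the two steps from the list: split on an attribute, then recurse until every branch ends in a leaf.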
Classification and regression trees (CART)
Let us take an example. Imagine we are given the task of classifying job candidates on the basis of some pre-defined attributes, to ensure that only deserving candidates are selected at the end of the process. All we need is a decision tree that applies the right classification criteria; the results will depend on how the classification is done.
Classification, as we all know, involves two steps. The first step is learning: a model is built on the sample (training) data set. The second step is prediction: the model trained in the first step is used to predict the response for new data.
Now, there are certain situations in which the target variable is a real number, or decisions are made on continuous data. You may be asked to predict the price of an item based on the cost of labour, or to decide the salary of a candidate based on their previous salary, skill set, experience, and other relevant information.
In these situations, the target value is a real value drawn from a continuous range. We use the regression version of a decision tree to solve these problems: the tree considers the observed features of an object, trains a model on them, and produces a continuous output as its prediction.
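A minimal sketch of the regression case, with invented salary figures: a single split on years of experience, where each leaf predicts the mean of its training values:

```python
def fit_stump(xs, ys, threshold):
    """One-split regression tree: each leaf stores the mean of its targets."""
    left = [y for x, y in zip(xs, ys) if x < threshold]
    right = [y for x, y in zip(xs, ys) if x >= threshold]
    return (sum(left) / len(left), sum(right) / len(right))

def predict(stump, threshold, x):
    left_mean, right_mean = stump
    return left_mean if x < threshold else right_mean

experience = [1, 2, 3, 8, 9, 10]       # years of experience
salary = [40, 42, 44, 80, 85, 90]      # continuous target (invented)
stump = fit_stump(experience, salary, threshold=5)

print(predict(stump, 5, 2))   # mean of the low-experience leaf -> 42.0
print(predict(stump, 5, 9))   # mean of the high-experience leaf -> 85.0
```

The output is continuous rather than a class label, which is exactly the difference between the regression and classification versions of the tree.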
Let us now talk about a few similarities and differences between classification and regression trees. Decision trees are used as classification models when the target variable is categorical. The value assigned to a terminal node is the mode of the training observations that fall in that section. When a new observation reaches that section of the tree, we predict that mode value for it.
On the other hand, decision trees are used as regression models when the target variable is continuous. The value assigned at the same point is the mean of the observations in that section.
There are a few similarities too. Both tree types use a recursive binary approach, dividing the space of independent variables into definite, non-overlapping regions. In both, division starts at the top of the tree, where all observations lie in a single region, and each split sends observations down one of two branches. This division continues until a fully grown tree results.
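The two kinds of leaf value just described can be demonstrated directly with Python's standard library: the mode of a classification leaf's labels versus the mean of a regression leaf's values (toy data assumed):

```python
from statistics import mode, mean

leaf_labels = ["spam", "spam", "ham", "spam"]  # classification leaf
leaf_values = [40.0, 42.0, 44.0]               # regression leaf

print(mode(leaf_labels))  # classification predicts the mode -> "spam"
print(mean(leaf_values))  # regression predicts the mean -> 42.0
```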
How to learn a CART model?
There are a few important things you need to do to create a CART model. These include choosing the input variables and the split points in such a way that the tree is properly constructed. A greedy algorithm that minimises a cost function is used to choose both the input variables and the split points.
Construction of the tree is terminated with the help of a stopping criterion, which is defined in advance. The stopping criterion could be, for example, a minimum number of training instances assigned to each leaf node of the tree.
1. Greedy algorithm: The input space has to be split correctly to build a binary tree, and recursive binary splitting is the greedy algorithm used for this purpose. It is a numerical procedure in which the different values are lined up, several candidate split points are tried and scored with a cost function, and the split point with the minimum cost is chosen. All input variables and all possible split points are evaluated this way.
2. Tree pruning: The stopping criterion already improves the performance of your decision tree; to make it even better, you can prune the tree after learning. The number of splits a decision tree has says a lot about its complexity, and simpler trees are preferred: they don't overfit the data, and they are easier to interpret.
The best way to prune a tree is to look at each leaf node and find out how removing it would impact the tree. A leaf node is removed when doing so produces a drop in the cost function. When no removal improves performance any further, you stop. Common pruning methods include reduced-error pruning and cost-complexity (weakest-link) pruning.
3. Stopping criterion: The greedy splitting method we talked about earlier needs a condition telling it when to stop. A common criterion is a minimum number of instances assigned to each leaf node: if a split would leave a node with fewer instances than this minimum, the split doesn't happen, and that node becomes a final (leaf) node.
For example, let's say the predefined stopping criterion is five instances. This number also says a lot about how exactly the tree fits the training data: a tree that is too precise will overfit, which means poor performance on new data.
How to avoid overfitting in a decision tree?
Most decision trees are prone to overfitting. We can keep growing a tree until it classifies the training data perfectly, or until no attributes are left to split on. Such a tree suits the training data set but won't work well on the test data set. You can follow either of the two approaches below to avoid this situation.
You can either prune the tree if it has grown too large, or stop its growth before it reaches the point of overfitting. In most cases, growth is controlled by a predefined limit on the depth, the number of layers, and other properties the tree can have. The data set is divided into a training set and a test set; trees of various maximum depths are grown on the training set and evaluated against the test set, and the depth that performs best is kept. You can also use cross-validation along with this approach.
When you choose to prune the tree, you test the pruned versions against the original. If a pruned tree does better than the original on the test data set, you remove those leaves, and you keep pruning as long as this remains the case.
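The prune-and-compare loop can be sketched as follows. The two candidate models, the tree shapes, and the test data are all invented for illustration; the pruned version is simply the original with one subtree collapsed into a leaf:

```python
def accuracy(predict, X_test, y_test):
    """Fraction of test points the model classifies correctly."""
    hits = sum(predict(x) == y for x, y in zip(X_test, y_test))
    return hits / len(y_test)

# Original tree: two splits. Pruned tree: the inner split collapsed.
original = lambda x: "yes" if x > 5 else ("yes" if x > 4 else "no")
pruned = lambda x: "yes" if x > 5 else "no"

X_test = [2, 4.5, 6, 8]
y_test = ["no", "no", "yes", "yes"]

# Keep the pruned tree only if it does at least as well on the test set.
model = original
if accuracy(pruned, X_test, y_test) >= accuracy(original, X_test, y_test):
    model = pruned

print(accuracy(model, X_test, y_test))  # the pruned tree wins here -> 1.0
```

In this toy case the inner split was fitting noise, so removing it actually improves test accuracy, which is exactly the situation pruning is designed to catch.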
Advantages of the decision trees approach
- It can be used with continuous as well as categorical data.
- It can deliver multiple outputs
- Its results are easy to interpret, and you can quantify and trust the reliability of the trees
- With this method, you can explore data, identify important variables, find relationships between variables that strengthen the target variable, and build new features in far less time
- It is easy to understand and explain to others
- It is helpful during data cleaning. Compared to other methods, it doesn't take much time, because missing values and outliers have little impact on it beyond a certain point
- The efficiency and performance of decision trees are not affected by non-linear relationships between features
- It doesn't take much time to prepare data, as it doesn't need missing-value replacement, data normalization, and so on
- It is a non-parametric approach, making no assumptions about the structure of the classifier or the distribution of the space
Disadvantages of decision trees
- Some users build decision trees that are too complex, even for their own liking. Such trees don't generalize the data as well as simpler trees do.
- Biased trees are often created when certain classes dominate, which is why it is very important to balance the sample data before it is used
- Sometimes these trees are not very stable: small variations in the data can produce a completely different tree. This problem is referred to as variance, and it can be dealt with by using bagging and boosting.
- You can't expect a greedy algorithm to return the globally best decision tree. To mitigate this problem, you can train multiple trees.
This blog discusses all the important things that a learner needs to know about decision trees. After reading this blog, you will have a better understanding of the concept, and you will be in a better position to implement it in real life.
If you’re interested to learn more about machine learning & AI, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.