
5 Types of Classification Algorithms in Machine Learning [2021]

Introduction

Machine learning is one of the most important topics in Artificial Intelligence. It is broadly divided into Supervised and Unsupervised Learning, which correspond to working with labelled and unlabelled data respectively. Within Supervised Learning, there are two further types of problems: Regression and Classification.

Classification is a supervised machine learning task where we take labelled data as input and predict which class each observation belongs to. If there are two classes, it is called Binary Classification; if there are more than two, it is called Multi-Class Classification. Real-world scenarios feature both types of Classification.

In this article we will investigate a few types of Classification Algorithms along with their pros and cons. There are many classification algorithms available, but let us focus on the following five:

  1. Logistic Regression
  2. K-Nearest Neighbors
  3. Decision Trees
  4. Random Forests
  5. Support Vector Machines

1. Logistic Regression

Even though the name suggests Regression, it is a Classification Algorithm. Logistic Regression is a statistical method in which one or more independent variables (features) determine an outcome, measured by a target variable that takes two or more classes. Its main goal is to find the best-fitting model to describe the relationship between the target variable and the independent variables.

Pros

1) Easy to implement, interpret and efficient to train; it makes no assumptions about the distribution of classes in feature space and is fast at classifying.

2) Can be used for Multi Class Classification.

3) It is less prone to over-fitting in low-dimensional datasets, though it can overfit in high-dimensional ones.

Cons

1) Overfits when there are fewer observations than features.

2) Requires the target variable to be discrete; it cannot directly model continuous outcomes.

3) Cannot solve non-linear problems without transforming the features, since its decision boundary is linear.

4) Struggles to learn complex patterns; neural networks usually outperform it on such tasks.
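As a brief illustration, here is a minimal scikit-learn sketch of logistic regression for binary classification. The dataset and parameter choices here are illustrative assumptions, not prescriptions from this article:

```python
# A minimal sketch of binary classification with logistic regression,
# using scikit-learn's built-in breast cancer dataset as an example.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# max_iter is raised because the default (100) may not converge
# on these unscaled features
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
```

In practice you would also scale the features and tune the regularization strength `C`, which controls how strongly the model is penalized for complexity.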

2. K-Nearest Neighbors

The K-nearest neighbors (KNN) algorithm uses 'feature similarity' to predict the class that a new data point falls into. The following steps explain how the algorithm works:

Step 1 − For implementing any algorithm in Machine Learning, we need a cleaned dataset ready for modelling. Let's assume that we already have a cleaned dataset which has been split into training and testing sets.

Step 2 − With the datasets ready, we need to choose the value of K (an integer), which tells us how many nearest data points to take into consideration. How to determine the value of K is covered in the later stages of the article.

Step 3 − This step is iterative and is applied for each test data point:

  1. Calculate the distance between the test point and each row of training data using one of the distance metrics: Euclidean, Manhattan, Minkowski or Hamming distance. Many data scientists tend to use the Euclidean distance; the significance of each metric is covered in the later stages of this article.
  2. Sort the training data by the computed distance.
  3. Choose the top K rows of the sorted data.
  4. Assign to the test point the most frequent class among those K rows.

Step 4 – End
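The steps above can be sketched from scratch in a few lines. The toy data and the helper name `knn_predict` are illustrative assumptions:

```python
# A from-scratch sketch of the KNN steps above: Euclidean distance,
# then a majority vote among the K nearest training points.
import math
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=3):
    # Step 3.1: distance from the test point to every training row
    distances = [
        (math.dist(x_test, x), label) for x, label in zip(X_train, y_train)
    ]
    # Steps 3.2-3.3: sort by distance and keep the top K rows
    distances.sort(key=lambda pair: pair[0])
    top_k_labels = [label for _, label in distances[:k]]
    # Step 3.4: assign the most frequent class among the K neighbors
    return Counter(top_k_labels).most_common(1)[0][0]

# Two well-separated toy clusters
X_train = [(1, 1), (1, 2), (2, 1), (6, 6), (7, 6), (6, 7)]
y_train = ["A", "A", "A", "B", "B", "B"]
print(knn_predict(X_train, y_train, (2, 2), k=3))  # → "A"
print(knn_predict(X_train, y_train, (6, 5), k=3))  # → "B"
```

Note that nothing is "trained" here: the algorithm simply stores the training data, which is why KNN is called a lazy learner.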

Pros

  1. Easy to use, understand and interpret.
  2. Quick calculation time.
  3. Makes no assumptions about the data.
  4. Good prediction accuracy on many problems.
  5. Versatile – can be used for both Classification and Regression problems.
  6. Can be used for Multi-Class problems as well.
  7. Only one main hyperparameter (K) to tweak during hyperparameter tuning.

Cons

  1. Computationally expensive and memory-hungry, as the algorithm stores all the training data.
  2. Gets slower as the number of variables increases.
  3. Very sensitive to irrelevant features.
  4. Suffers from the curse of dimensionality.
  5. Choosing the optimal value of K can be difficult.
  6. Performs poorly on class-imbalanced datasets.
  7. Missing values in the data also cause problems.


3. Decision Trees

Decision trees can be used for both Classification and Regression, as they handle both numerical and categorical data. A decision tree breaks the dataset down into smaller and smaller subsets (nodes) as the tree grows. The resulting tree has decision nodes and leaf nodes: a decision node has two or more branches, while a leaf node represents a final classification. The topmost node, which corresponds to the best predictor, is called the root node.
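A short scikit-learn sketch shows both the fit and the tree's interpretability. The Iris dataset and the `max_depth` value are illustrative assumptions:

```python
# A minimal sketch of a decision tree classifier on the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42
)

# max_depth limits tree growth, which helps against overfitting
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)

accuracy = tree.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
# The learned split rules can be printed as plain text,
# which is what makes trees easy to understand and visualize
print(export_text(tree, feature_names=data.feature_names))
```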

Pros

  1. Simple to understand.
  2. Easy to visualize.
  3. Requires little data preparation.
  4. Handles both numerical and categorical data.

Cons

  1. Prone to overfitting, so they sometimes do not generalize well.
  2. Unstable: small changes in the input data can produce a very different tree.

4. Random Forests

Random forests are an ensemble learning method that can be used for both classification and regression. The method constructs several decision trees and combines their outputs: the mean of the individual trees for Regression, or a majority vote for Classification. As the name suggests, a group of trees forms a forest.
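A minimal scikit-learn sketch, again on the Iris dataset as an illustrative assumption, also demonstrates the variable-importance output that random forests provide:

```python
# A minimal sketch of a random forest classifier, including
# the per-feature importance scores the model exposes.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42
)

# n_estimators is the number of trees whose votes are combined
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)

accuracy = forest.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
# Variable importances: one score per feature, summing to 1
for name, score in zip(data.feature_names, forest.feature_importances_):
    print(f"{name}: {score:.3f}")
```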

Pros

  1. Can handle large datasets.
  2. Will output the importance of variables.
  3. Can handle missing values.

Cons

  1. It is a black-box algorithm, so individual predictions are hard to interpret.
  2. Real-time prediction is slow, as every tree in the forest must be evaluated.

5. Support Vector Machines

A support vector machine represents the training data as points in space, separated into categories by a gap (the margin) that is as wide as possible. New data points are then mapped into that same space and classified according to which side of the separating boundary they fall on.
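A minimal scikit-learn sketch follows; the dataset, linear kernel and scaling step are illustrative assumptions. Scaling matters here because the margin is distance-based:

```python
# A minimal sketch of a linear support vector machine,
# with feature scaling since SVM margins are distance-based.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# probability is left at its default (False); enabling it triggers
# the slower cross-validated probability estimates
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
```

Swapping `kernel="linear"` for `kernel="rbf"` lets the same model learn non-linear boundaries.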

Pros

  1. Effective in high-dimensional spaces.
  2. Uses only a subset of the training points (the support vectors) in the decision function, which makes it memory-efficient.

Cons

  1. Does not provide probability estimates by default.
  2. Probability estimates can be obtained via cross-validation, but this is time-consuming.


Conclusion

In this article we have discussed five Classification algorithms, with brief definitions and their pros and cons. These are only a few of the available algorithms; there are other valuable ones such as Naïve Bayes, Neural Networks and Ordered Logistic Regression. No single algorithm works best for every problem, so the best practice is to try out a few and select the final model based on evaluation metrics.

If you’re interested to learn more about machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.
