
7 Most Used Machine Learning Algorithms in Python You Should Know About

Last updated:
4th Mar, 2021

Machine Learning is a branch of Artificial Intelligence (AI) that deals with computer algorithms that learn from data. These algorithms improve automatically with experience, refining their predictions as more data is fed into them.


Top Machine Learning Algorithms Used in Python

Below are some of the top machine learning algorithms used in Python, along with code snippets showing their implementation and visualizations of their classification boundaries.

1. Linear Regression

Linear regression is one of the most commonly used supervised machine learning techniques. As its name suggests, it models the relationship between two variables by fitting a linear equation to the observed data. This technique is used to estimate real continuous values such as total sales made or the cost of houses.


The line of best fit is also called the regression line. It is given by the following equation:

Y = a*X + b

where Y is the dependent variable, a is the slope, X is the independent variable, and b is the intercept. The coefficients a and b are derived by minimizing the sum of squared differences between the observed data points and the values predicted by the regression line.
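Before the scikit-learn example, it may help to see this minimization in action. The following is a minimal NumPy sketch (with made-up sample values, not the article's data) that computes a and b in closed form:

import numpy as np

# made-up sample points, for illustration only
X = np.array( [1.0, 2.0, 3.0, 4.0, 5.0] )
Y = np.array( [2.1, 4.3, 6.2, 8.4, 10.1] )

# least-squares estimates: a = cov(X, Y) / var(X), b = mean(Y) - a * mean(X)
a = np.sum( (X - X.mean()) * (Y - Y.mean()) ) / np.sum( (X - X.mean()) ** 2 )
b = Y.mean() - a * X.mean()
print( 'slope a = {:.3f}, intercept b = {:.3f}'.format( a, b ) )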

# synthetic dataset for simple regression
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression

X_R1, y_R1 = make_regression( n_samples = 100, n_features = 1, n_informative = 1, bias = 150.0, noise = 30, random_state = 0 )

plt.figure()
plt.title( 'Sample regression problem with one input variable' )
plt.scatter( X_R1, y_R1, marker = 'o', s = 50 )
plt.show()


from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X_train, X_test, y_train, y_test = train_test_split( X_R1, y_R1, random_state = 0 )
linreg = LinearRegression().fit( X_train, y_train )

print( 'linear model coeff (w): {}'.format( linreg.coef_ ) )
print( 'linear model intercept (b): {:.3f}'.format( linreg.intercept_ ) )
print( 'R-squared score (training): {:.3f}'.format( linreg.score( X_train, y_train ) ) )
print( 'R-squared score (test): {:.3f}'.format( linreg.score( X_test, y_test ) ) )

Output

linear model coeff (w): [ 45.71]
linear model intercept (b): 148.446
R-squared score (training): 0.679
R-squared score (test): 0.492

The following code will draw the fitted regression line on the plot of our data points.

plt.figure( figsize = ( 5, 4 ) )
plt.scatter( X_R1, y_R1, marker = 'o', s = 50, alpha = 0.8 )
plt.plot( X_R1, linreg.coef_ * X_R1 + linreg.intercept_, 'r-' )
plt.title( 'Least-squares linear regression' )
plt.xlabel( 'Feature value (x)' )
plt.ylabel( 'Target value (y)' )
plt.show()

Preparing a Common Dataset For Exploring Classification Techniques

The following dataset will be used to demonstrate the classification algorithms that are most commonly used in machine learning in Python.

The UCI Mushroom Data Set is stored in mushrooms.csv.

%matplotlib notebook
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

df = pd.read_csv( 'readonly/mushrooms.csv' )
df2 = pd.get_dummies( df )
df3 = df2.sample( frac = 0.08 )

X = df3.iloc[:, 2:]
y = df3.iloc[:, 1]

pca = PCA( n_components = 2 ).fit_transform( X )
X_train, X_test, y_train, y_test = train_test_split( pca, y, random_state = 0 )

plt.figure( dpi = 120 )
plt.scatter( pca[y.values == 0, 0], pca[y.values == 0, 1], alpha = 0.5, label = 'Edible', s = 2 )
plt.scatter( pca[y.values == 1, 0], pca[y.values == 1, 1], alpha = 0.5, label = 'Poisonous', s = 2 )
plt.legend()
plt.title( 'Mushroom Data Set\nFirst Two Principal Components' )
plt.xlabel( 'PC1' )
plt.ylabel( 'PC2' )
plt.gca().set_aspect( 'equal' )
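A useful sanity check here is how much of the original variance the two principal components retain. A small sketch (note that fit_transform above returned only the projected array, so we refit a PCA object to inspect it):

from sklearn.decomposition import PCA

# refit to get access to the fitted PCA attributes
pca_model = PCA( n_components = 2 ).fit( X )
print( 'variance explained by PC1 and PC2: {}'.format( pca_model.explained_variance_ratio_ ) )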

We will use the function defined below to get the decision boundaries of the different classifiers we’ll use on the mushroom dataset.

def plot_mushroom_boundary( X, y, fitted_model ):
    plt.figure( figsize = ( 9.8, 5 ), dpi = 100 )
    for i, plot_type in enumerate( ['Decision Boundary', 'Decision Probabilities'] ):
        plt.subplot( 1, 2, i + 1 )
        mesh_step_size = 0.01  # step size in the mesh
        x_min, x_max = X[:, 0].min() - .1, X[:, 0].max() + .1
        y_min, y_max = X[:, 1].min() - .1, X[:, 1].max() + .1
        xx, yy = np.meshgrid( np.arange( x_min, x_max, mesh_step_size ), np.arange( y_min, y_max, mesh_step_size ) )
        if i == 0:
            # predicted class labels over the mesh
            Z = fitted_model.predict( np.c_[xx.ravel(), yy.ravel()] )
        else:
            try:
                # predicted probability of the positive class over the mesh
                Z = fitted_model.predict_proba( np.c_[xx.ravel(), yy.ravel()] )[:, 1]
            except AttributeError:
                # some classifiers do not implement predict_proba
                plt.text( 0.4, 0.5, 'Probabilities Unavailable', horizontalalignment = 'center', verticalalignment = 'center', transform = plt.gca().transAxes, fontsize = 12 )
                plt.axis( 'off' )
                break
        Z = Z.reshape( xx.shape )
        plt.scatter( X[y.values == 0, 0], X[y.values == 0, 1], alpha = 0.4, label = 'Edible', s = 5 )
        plt.scatter( X[y.values == 1, 0], X[y.values == 1, 1], alpha = 0.4, label = 'Poisonous', s = 5 )
        plt.imshow( Z, interpolation = 'nearest', cmap = 'RdYlBu_r', alpha = 0.15, extent = ( x_min, x_max, y_min, y_max ), origin = 'lower' )
        plt.title( plot_type + '\n' + str( fitted_model ).split( '(' )[0] + ' Test Accuracy: ' + str( np.round( fitted_model.score( X, y ), 5 ) ) )
        plt.gca().set_aspect( 'equal' )
    plt.tight_layout()
    plt.subplots_adjust( top = 0.9, bottom = 0.08, wspace = 0.02 )

2. Logistic Regression

Unlike linear regression, logistic regression deals with the estimation of discrete values (0/1 binary values, true/false, yes/no). This technique is also called logit regression, because it predicts the probability of an event by fitting a logit function to the training data. Its output always lies between 0 and 1 (since it is estimating a probability).

The log odds of the outcome are modelled as a linear combination of the predictor variables as follows:

odds = p / (1 - p) = probability of the event occurring / probability of the event not occurring

ln( odds ) = ln( p / (1 - p) )

logit( p ) = ln( p / (1 - p) ) = b0 + b1*X1 + b2*X2 + b3*X3 + … + bk*Xk

where p is the probability of the presence of the characteristic of interest.
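As a quick illustration with hypothetical coefficients (not fitted to any data), the logit can be inverted with the sigmoid function to map a linear combination back to a probability between 0 and 1:

import numpy as np

def sigmoid( z ):
    # inverse of the logit: maps any real number to a probability in (0, 1)
    return 1.0 / ( 1.0 + np.exp( -z ) )

# hypothetical coefficients b0, b1 and a feature value X1
b0, b1, X1 = -1.5, 0.8, 2.0
z = b0 + b1 * X1   # the log odds
p = sigmoid( z )   # the predicted probability of the event
print( 'log odds = {:.2f}, probability = {:.3f}'.format( z, p ) )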

from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
model.fit( X_train, y_train )
plot_mushroom_boundary( X_test, y_test, model )


3. Decision Tree

This is a very popular algorithm that can handle both continuous and discrete variables. At every step, the data is split into two or more homogeneous sets based on some splitting attribute or condition.

from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier( max_depth = 3 )
model.fit( X_train, y_train )
plot_mushroom_boundary( X_test, y_test, model )
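If you want to inspect the actual splitting conditions the tree has learned, scikit-learn can print them as text. A short sketch, assuming the model fitted above (the feature names are the two principal components we trained on):

from sklearn.tree import export_text

# print the learned split conditions of the fitted tree
print( export_text( model, feature_names = ['PC1', 'PC2'] ) )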

4. SVM

SVM is short for Support Vector Machines. The basic idea here is to classify data points by separating them with hyperplanes. The goal is to find the hyperplane that has the maximum distance (or margin) to the data points of both classes or categories.

We choose the plane this way so that unknown points can be classified in the future with the highest confidence. SVMs are popular because they give high accuracy while consuming relatively little computational power. SVMs can also be used for regression problems.

from sklearn.svm import SVC

model = SVC( kernel = 'linear' )
model.fit( X_train, y_train )
plot_mushroom_boundary( X_test, y_test, model )
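For a linear kernel, the margin the SVM maximizes can be read off the fitted model: the weight vector of the hyperplane is stored in model.coef_, and the margin width is 2 / ||w||. A small sketch, assuming the model fitted above:

import numpy as np

w = model.coef_[0]  # weight vector of the separating hyperplane
print( 'margin width: {:.4f}'.format( 2.0 / np.linalg.norm( w ) ) )
print( 'support vectors per class: {}'.format( model.n_support_ ) )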


5. Naïve Bayes

As the name suggests, the Naïve Bayes algorithm is a supervised learning algorithm based on Bayes' Theorem. Bayes' Theorem uses conditional probabilities to give you the probability of an event based on some given knowledge:

P( A | B ) = P( B | A ) * P( A ) / P( B )

Where,

P(A | B): The conditional probability that event A occurs, given that event B has already occurred (also called the posterior probability).
P(A): Probability of event A.
P(B): Probability of event B.
P(B | A): The conditional probability that event B occurs, given that event A has already occurred.

Why is this algorithm called naïve, you ask? Because it assumes that all features are independent of each other. So each feature contributes separately to determining the class of a data point, with no dependencies among the features. Naïve Bayes is a popular choice for text categorization, and it works sufficiently well even with small amounts of training data.
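To see the theorem in action, here is a small numeric sketch with hypothetical probabilities (a classic diagnostic-test example, unrelated to the mushroom data):

p_a = 0.01              # P(A): prior probability of the condition (hypothetical)
p_b_given_a = 0.95      # P(B | A): positive test given the condition (hypothetical)
p_b_given_not_a = 0.05  # P(B | not A): false positive rate (hypothetical)

# total probability of a positive test, P(B)
p_b = p_b_given_a * p_a + p_b_given_not_a * ( 1 - p_a )

# Bayes' Theorem: P(A | B) = P(B | A) * P(A) / P(B)
print( 'P(A | B) = {:.3f}'.format( p_b_given_a * p_a / p_b ) )  # about 0.161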

from sklearn.naive_bayes import GaussianNB

model = GaussianNB()
model.fit( X_train, y_train )
plot_mushroom_boundary( X_test, y_test, model )

6. KNN

KNN stands for K-Nearest Neighbours. It is a very widely used supervised learning algorithm that classifies test data according to its similarity to previously classified training data. KNN does not build a model during training. Instead, it simply stores the dataset, and when it receives new data it classifies those points based on their similarity to the stored examples. It does so by calculating the Euclidean distance to a point's K nearest neighbours (set via n_neighbors); a minimal from-scratch sketch follows the scikit-learn snippet below.

from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier( n_neighbors = 20 )
model.fit( X_train, y_train )
plot_mushroom_boundary( X_test, y_test, model )
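To make the mechanism concrete, here is the promised from-scratch sketch of the same idea, using hypothetical arrays rather than the mushroom data (an illustration, not a replacement for scikit-learn's implementation):

import numpy as np
from collections import Counter

def knn_predict( X_train, y_train, x_new, k = 3 ):
    # Euclidean distance from the new point to every training point
    distances = np.linalg.norm( X_train - x_new, axis = 1 )
    # indices of the k closest training points, then a majority vote on their labels
    nearest = np.argsort( distances )[:k]
    return Counter( y_train[nearest] ).most_common( 1 )[0][0]

# hypothetical 2-D training data: class 0 near the origin, class 1 near (5, 5)
X_demo = np.array( [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]] )
y_demo = np.array( [0, 0, 0, 1, 1, 1] )
print( knn_predict( X_demo, y_demo, np.array( [4.5, 5.0] ) ) )  # prints 1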

7. Random Forest

Random forest is a simple yet versatile machine learning algorithm that uses a supervised learning technique. As you can guess from the name, a random forest consists of a large number of decision trees acting as an ensemble. Each decision tree predicts an output class for a data point, and the majority class is chosen as the model's final output. The idea is that many trees working on the same data will tend to be more accurate than any individual tree.

from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier()
model.fit( X_train, y_train )
plot_mushroom_boundary( X_test, y_test, model )
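You can observe the voting directly: the fitted model exposes its individual trees through the estimators_ attribute. A small sketch, assuming the model fitted above (for a 0/1-encoded target, averaging the tree votes and thresholding at 0.5 is equivalent to a majority vote):

import numpy as np

# each individual tree's predictions for the test points
tree_votes = np.array( [tree.predict( X_test ) for tree in model.estimators_] )

# the forest's answer is the majority vote across the trees
majority = ( tree_votes.mean( axis = 0 ) > 0.5 ).astype( int )
print( 'first 10 majority votes: {}'.format( majority[:10] ) )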

8. Multi-Layer Perceptron

Multi-Layer Perceptron (or MLP) is a fascinating algorithm that falls under the branch of deep learning. More specifically, it belongs to the class of feed-forward artificial neural networks (ANNs). An MLP forms a network of multiple perceptrons with at least three layers: an input layer, an output layer, and one or more hidden layers. MLPs are able to distinguish between data that are not linearly separable.


Each neuron in the hidden layers applies an activation function before passing its output on to the next layer. The backpropagation algorithm is used to tune the parameters and hence train the neural network. MLPs can be used for classification as well as simple regression problems.

from sklearn.neural_network import MLPClassifier

model = MLPClassifier()
model.fit( X_train, y_train )
plot_mushroom_boundary( X_test, y_test, model )
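The network architecture is controlled through the constructor parameters. A sketch of a slightly customized configuration (the parameter values here are illustrative, not tuned):

from sklearn.neural_network import MLPClassifier

# two hidden layers of 32 neurons each, ReLU activations,
# trained via backpropagation for up to 500 iterations
mlp = MLPClassifier( hidden_layer_sizes = (32, 32), activation = 'relu', max_iter = 500, random_state = 0 )
mlp.fit( X_train, y_train )
print( 'test accuracy: {:.3f}'.format( mlp.score( X_test, y_test ) ) )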


Conclusion


We can conclude that different machine learning algorithms yield different decision boundaries, and hence different accuracies, when classifying the same dataset.

There is no way to declare any one algorithm the best for all kinds of data in general. Machine learning requires rigorous trial and error across various algorithms to determine what works best for each dataset. The list of ML algorithms obviously doesn't end here: there is a vast sea of other techniques waiting to be explored in Python's Scikit-Learn library. Go ahead and train your datasets with all of them, and have fun!

If you're interested in learning more about decision trees and machine learning, check out IIIT-B & upGrad's Executive PG Programme in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects, and job assistance with top firms.


Pavan Vadapalli

Director of Engineering @ upGrad. Motivated to leverage technology to solve problems. Seasoned leader for startups and fast-moving orgs. Working on solving problems of scale and long-term technology strategy.

Frequently Asked Questions (FAQs)

1. What are the prime assumptions of linear regression?

There are four essential assumptions for linear regression: linearity, homoscedasticity, independence, and normality. Linearity means that the relationship between the independent variable (X) and the mean of the dependent variable (Y) is assumed to be linear. Homoscedasticity means that the variance of the residuals is assumed to be constant across all values of X. Independence means that the observations in the input data are assumed to be independent of each other. Normality means that, for any fixed value of X, the residuals are assumed to be normally distributed.

2. What are the differences between a Decision Tree and a Random Forest?

A decision tree carries out its decision-making process using a tree-like structure that represents the possible outcomes of specific choices. A random forest uses a bundle of such decision trees to analyze the data. This requires more computation, but it helps prevent overfitting and gives more accurate results. A single decision tree, by contrast, is prone to overfitting and can give less accurate results. A decision tree is easy to interpret because it requires fewer computations, whereas a random forest is harder to interpret due to its more complex analysis.

3. What are some standard libraries used for machine learning algorithms in Python?

Python has become the most popular language for machine learning due to the availability of a vast number of libraries and its easy syntax. There are many Python libraries for machine learning, such as NumPy, SciPy, Scikit-learn, Theano, TensorFlow, PyTorch, Matplotlib, Keras, and Pandas. Using functions from these libraries saves a lot of time that would otherwise be spent writing algorithms from scratch, making development less time-consuming and more efficient. These libraries support applications like matrix processing, optimization problems, data mining, statistical analysis, computations involving tensors, object detection, neural networks, and many more.
