
Gradient Descent in Logistic Regression [Explained for Beginners]

Last updated:
8th Jan, 2021

In this article, we will discuss the very popular Gradient Descent Algorithm in Logistic Regression. We will look at what Logistic Regression is, then gradually work our way to the equation for Logistic Regression, its Cost Function, and finally the Gradient Descent Algorithm.

What is Logistic Regression?

Logistic Regression is simply a classification algorithm used to predict discrete categories, such as predicting whether an email is ‘spam’ or ‘not spam’, or whether a given digit is a ‘9’ or ‘not 9’. Now, looking at the name, you might wonder: why is it called Regression?

The reason is that Logistic Regression was developed by tweaking a few elements of the basic Linear Regression algorithm used in regression problems.

Logistic Regression can also be applied to Multi-Class (more than two classes) classification problems, although it is generally recommended only for Binary Classification problems.


Sigmoid Function

Classification problems are not linear function problems. The output is limited to certain discrete values, e.g., 0 and 1 for a binary classification problem. It does not make sense for a linear function to predict output values greater than 1 or less than 0. So we need a proper function to represent our output values.

The Sigmoid Function solves our problem. Also known as the Logistic Function, it is an S-shaped function that maps any real number to the (0, 1) interval, making it very useful for transforming an arbitrary real-valued function into a classification-based function. A Sigmoid Function looks like this:

[Figure: the S-shaped curve of the Sigmoid Function]

Now the mathematical form of the sigmoid function for a parameter vector θ and input vector X is:

σ(z) = 1 / (1 + e^(−z)),      where z = θᵀX

σ(z) gives us the probability that the output is 1. As we all know, a probability ranges from 0 to 1, but this is not yet the output we want for our discrete (0 and 1 only) classification problem. So we compare the predicted probability with 0.5: if the probability is > 0.5, we predict y = 1; if the probability is < 0.5, we predict y = 0.
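As a minimal sketch, the sigmoid mapping and the 0.5 threshold look like this in code (the parameter and input values below are made up purely for illustration):

```python
import numpy as np

def sigmoid(z):
    """Map any real number into the (0, 1) interval."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameter vector theta and input vector x
theta = np.array([0.5, -0.25])
x = np.array([2.0, 1.0])

z = theta @ x                      # z = theta^T x = 0.75
prob = sigmoid(z)                  # probability that the output is 1 (≈ 0.68)
y_pred = 1 if prob > 0.5 else 0    # threshold at 0.5 → predict y = 1
```

Note that sigmoid(0) is exactly 0.5, so z = 0 is the decision boundary.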

Cost Function

Now that we have our discrete predictions, it is time to check whether our predictions are indeed correct or not. To do that, we have a Cost Function. Cost Function is merely the summation of all the errors made in the predictions across the entire dataset. Of course, we cannot use the Cost Function used in Linear Regression. So the new Cost Function for Logistic Regression is:

J(θ) = −(1/m) Σ [ y(i) log(σ(z(i))) + (1 − y(i)) log(1 − σ(z(i))) ],      where z(i) = θᵀx(i) and the sum runs over all m training examples

Don’t be afraid of the equation; it is quite simple. For each training example i, it calculates the error made in the prediction, and then adds up all the errors to define the Cost Function J(θ).

The two terms inside the bracket are actually for the two cases: y=0 and y=1. When y=0, the first term vanishes, and we are left with only the second term. Similarly, when y=1, the second term vanishes, and we are left with only the first term.
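To make the summation concrete, here is a sketch of J(θ) on a tiny made-up dataset (all numbers are illustrative). With θ = 0, every prediction is 0.5, so both terms contribute log(0.5) and the cost comes out to log 2 ≈ 0.693:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    """Cross-entropy cost J(theta), averaged over the m examples."""
    m = len(y)
    h = sigmoid(X @ theta)    # predicted probabilities for all examples
    return -(1.0 / m) * np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))

# Tiny invented dataset: 3 examples, 2 features
X = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5]])
y = np.array([1, 0, 1])

print(cost(np.zeros(2), X, y))    # log(2) ≈ 0.693 when every h = 0.5
```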

Gradient Descent Algorithm

We have successfully calculated our Cost Function. But we need to minimize the loss to make a good predicting algorithm. To do that, we have the Gradient Descent Algorithm.

[Figure: bowl-shaped plot of J(θ) against θ, with steps descending toward the minimum]

Here we have plotted a graph of J(θ) against θ. Our objective is to find the deepest point (global minimum) of this function, which is where J(θ) is minimum.

Two things are required to find the deepest point:

  • Derivative – to find the direction of the next step.
  • α (Learning Rate) – the magnitude of the next step.

The idea is that you first select any random point on the function. Then you compute the derivative of J(θ) w.r.t. θ. The gradient points uphill, so its negative points toward the local minimum. Now multiply that gradient by the Learning Rate. The Learning Rate has no fixed value and is chosen based on the problem.

Now, you subtract the result from θ to get the new θ.

This update of θ should be done simultaneously for every parameter θj.

Do these steps repeatedly until you reach the local or global minimum. By reaching the global minimum, you have achieved the lowest possible loss in your prediction.

Taking derivatives is simple; the basic calculus you did in high school is enough. The major issue is with the Learning Rate (α). Choosing a good learning rate is important and often difficult.

If you take a very small learning rate, each step will be too small, and hence you will take up a lot of time to reach the local minimum.

Now, if you take a very large learning rate value, you will overshoot the minimum and may never converge. There is no specific rule for the perfect learning rate.

You need to tweak it to prepare the best model.

The equation for Gradient Descent is:

Repeat until convergence:
    θj := θj − α · ∂J(θ)/∂θj      (simultaneously for every j)

So we can summarize the Gradient Descent Algorithm as:

  1. Start with random θ
  2. Loop until convergence:
    1. Compute Gradient ∂J(θ)/∂θ
    2. Update θ
  3. Return θ
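These steps can be sketched in code. The tiny dataset below is invented for illustration (a column of 1s for the intercept plus one feature), and the learning rate and iteration count are arbitrary choices, not recommendations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, n_iters=2000):
    """Batch gradient descent: all of theta is updated simultaneously each step."""
    m, n = X.shape
    theta = np.zeros(n)              # 1. start with an initial theta (here: zeros)
    for _ in range(n_iters):         # 2. loop until convergence
        h = sigmoid(X @ theta)       #    predictions for all m examples
        grad = (X.T @ (h - y)) / m   #    2a. compute gradient of J(theta)
        theta -= alpha * grad        #    2b. update theta
    return theta                     # 3. return theta

# Made-up linearly separable data: intercept column + one feature
X = np.array([[1.0, 0.5], [1.0, 2.0], [1.0, -1.0], [1.0, 3.0]])
y = np.array([0, 1, 0, 1])

theta = gradient_descent(X, y)
preds = (sigmoid(X @ theta) > 0.5).astype(int)   # recovers y on this toy data
```

Note how every step uses all m examples to compute the gradient; this is what the next section's variants relax.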

Stochastic Gradient Descent Algorithm

Now, the Gradient Descent Algorithm is a fine algorithm for minimizing the Cost Function, especially for small to medium data. But when we need to deal with bigger datasets, the Gradient Descent Algorithm turns out to be slow. The reason is simple: it must compute the gradient over every training example before it can make a single simultaneous update of the parameters.

So think about all those calculations! It’s massive, and hence there was a need for a slightly modified Gradient Descent Algorithm, namely – Stochastic Gradient Descent Algorithm (SGD).

The only difference between SGD and normal Gradient Descent is that, in SGD, we don’t process the entire training set at once. Instead, we compute the gradient of the cost function for just a single randomly chosen example at each iteration.

Doing so brings down the computation time by a huge margin, especially for large datasets. The path taken by SGD is haphazard and noisy (although the noise can sometimes help the algorithm escape a shallow local minimum and reach the global minimum).

But that is okay, since we do not have to worry about the path taken.

We only need to reach minimal loss at a faster time.

So we can summarize the Stochastic Gradient Descent Algorithm as:

  1. Loop until convergence:
    1. Pick a single data point ‘i’ at random
    2. Compute Gradient over that single point
    3. Update θ
  2. Return θ
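Here is a sketch of the same toy problem solved with SGD; note that each update uses just one randomly picked example (the dataset and hyperparameters are again invented for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd(X, y, alpha=0.1, n_iters=5000, seed=0):
    """Stochastic gradient descent: one random example per update."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        i = rng.integers(m)          # pick a single random example
        h = sigmoid(X[i] @ theta)
        grad = (h - y[i]) * X[i]     # gradient from that one example only
        theta -= alpha * grad        # noisy but cheap update
    return theta

# Same made-up separable data as before
X = np.array([[1.0, 0.5], [1.0, 2.0], [1.0, -1.0], [1.0, 3.0]])
y = np.array([0, 1, 0, 1])

theta = sgd(X, y)
preds = (sigmoid(X @ theta) > 0.5).astype(int)
```

Each step costs O(n) instead of O(m·n), which is the whole point for large m.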

Mini-Batch Gradient Descent Algorithm

Mini-Batch Gradient Descent is another slight modification of the Gradient Descent Algorithm. It is somewhat in between Normal Gradient Descent and Stochastic Gradient Descent.

Mini-Batch Gradient Descent is just taking a smaller batch of the entire dataset, and then minimizing the loss on it.

This process is more efficient than plain Stochastic Gradient Descent, since each update averages over several examples, while still being far cheaper per step than full-batch Gradient Descent. The batch size can, of course, be anything you want, but sizes from 1 up to a few hundred are common in practice, with 32 being a popular choice.

Hence batch size = 32 is the default in many frameworks.

So we can summarize the Mini-Batch Gradient Descent Algorithm as:

  1. Loop until convergence:
    1. Pick a batch of ‘b’ data points at random
    2. Compute Gradient over that batch
    3. Update θ
  2. Return θ
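And a sketch of the mini-batch variant, where each step averages the gradient over b randomly chosen points (b = 2 here purely because the toy dataset is tiny; all values are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minibatch_gd(X, y, b=2, alpha=0.1, n_iters=3000, seed=0):
    """Mini-batch gradient descent: gradient over b random examples per step."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        idx = rng.choice(m, size=b, replace=False)  # pick a batch of b points
        h = sigmoid(X[idx] @ theta)
        grad = (X[idx].T @ (h - y[idx])) / b        # average gradient over batch
        theta -= alpha * grad
    return theta

# Same made-up separable data as before
X = np.array([[1.0, 0.5], [1.0, 2.0], [1.0, -1.0], [1.0, 3.0]])
y = np.array([0, 1, 0, 1])

theta = minibatch_gd(X, y)
preds = (sigmoid(X @ theta) > 0.5).astype(int)
```

Setting b = m recovers full-batch Gradient Descent, and b = 1 recovers SGD, which is why mini-batch sits between the two.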


Conclusion


Now you have a theoretical understanding of Logistic Regression. You have learnt how to represent the logistic function mathematically, and you know how to measure prediction error using the Cost Function.

You also know how you can minimize this loss using the Gradient Descent Algorithm.

Finally, you know which variation of the Gradient Descent Algorithm to choose for your problem. upGrad provides a PG Diploma in Machine Learning and AI and a Master of Science in Machine Learning & AI that may guide you toward building a career. These courses explain the need for Machine Learning and the further steps to gather knowledge in this domain, covering concepts ranging from gradient descent algorithms to Neural Networks.


Pavan Vadapalli

Blog Author
Director of Engineering @ upGrad. Motivated to leverage technology to solve problems. Seasoned leader for startups and fast moving orgs. Working on solving problems of scale and long term technology strategy.

Frequently Asked Questions (FAQs)

1. What is a gradient descent algorithm?

Gradient descent is an optimization algorithm for finding the minimum of a function. Suppose you want to find the minimum of a function f(x). Gradient descent repeats three steps: (1) start from some point on the function, (2) compute the gradient ∇f(x) at that point, (3) take a step in the direction opposite to the gradient, since the gradient points uphill. The way to think about this is that the algorithm finds the slope of the function at a point and then moves against that slope, descending toward a minimum.

2. What is a sigmoid function?

The sigmoid function, or sigmoid curve, is a type of non-linear mathematical function very similar in shape to the letter S (hence the name). It is used in operations research, statistics and other disciplines to model certain forms of real-valued growth. It is also used in a wide range of applications in computer science and engineering, especially in areas related to neural networks and artificial intelligence, where sigmoid functions are widely used as activation functions.

3. What is the Stochastic Gradient Descent Algorithm?

Stochastic Gradient Descent is one of the popular variations of the classic Gradient Descent algorithm for finding a local minimum of a function. Instead of computing the gradient over the entire dataset, the algorithm picks a single random training example at each step, computes the gradient for that example alone, and updates the parameters accordingly. The objective is that, by continuously repeating this process, the algorithm converges to a global or local minimum of the function.

