In this article, we will discuss the popular Gradient Descent Algorithm in Logistic Regression. We will look at what Logistic Regression is, then gradually work our way to the equation for Logistic Regression, its Cost Function, and finally the Gradient Descent Algorithm.
What is Logistic Regression?
Logistic Regression is a classification algorithm used to predict discrete categories, such as predicting whether an email is ‘spam’ or ‘not spam’, or whether a given digit is a ‘9’ or ‘not 9’. Looking at the name, you might wonder: why is it called Regression?
The reason is that Logistic Regression was developed by tweaking a few elements of the basic Linear Regression algorithm used in regression problems.
Logistic Regression can also be applied to multi-class (more than two classes) classification problems, although it is recommended to use this algorithm mainly for binary classification problems.
Sigmoid Function
Classification problems are not linear function problems. The output is limited to certain discrete values, e.g., 0 and 1 for a binary classification problem. It does not make sense for a linear function to predict output values greater than 1 or less than 0. So we need a proper function to represent our output values.
The Sigmoid Function solves our problem. Also known as the Logistic Function, it is an S-shaped function that maps any real-valued number into the (0, 1) interval, making it very useful for turning the output of a linear model into a probability suitable for classification. A Sigmoid Function looks like this:
(Figure: the S-shaped curve of the Sigmoid Function)
Now, the mathematical form of the sigmoid function for a parameter vector θ and an input vector x is:

\sigma(z) = \frac{1}{1 + e^{-z}}, \quad \text{where } z = \theta^T x
σ(z) gives us the probability that the output is 1. As we know, a probability value ranges from 0 to 1. However, this is not yet the output we want for our discrete (0 and 1 only) classification problem, so we compare the predicted probability with 0.5: if the probability is > 0.5, we predict y = 1; similarly, if the probability is < 0.5, we predict y = 0.
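As a minimal sketch of how this might look in code (the function names sigmoid and predict below are illustrative, not taken from any particular library):

```python
import numpy as np

def sigmoid(z):
    """Map any real-valued number (or array) into the (0, 1) interval."""
    return 1.0 / (1.0 + np.exp(-z))

def predict(theta, X, threshold=0.5):
    """Turn predicted probabilities into hard 0/1 class labels."""
    probabilities = sigmoid(X @ theta)   # h_theta(x) = sigmoid(theta^T x) for each row of X
    return (probabilities >= threshold).astype(int)
```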
Cost Function
Now that we have our discrete predictions, it is time to check whether they are indeed correct. To do that, we have a Cost Function. The Cost Function is simply the sum of all the errors made in the predictions across the entire dataset. Of course, we cannot use the Cost Function used in Linear Regression. So the new Cost Function for Logistic Regression is:
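J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \Big[\, y^{(i)} \log h_\theta\big(x^{(i)}\big) + \big(1 - y^{(i)}\big) \log\big(1 - h_\theta(x^{(i)})\big) \Big]

where m is the number of training examples and h_\theta(x) = \sigma(\theta^T x) is the predicted probability (this is the standard binary cross-entropy, or log-loss, form).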
Don’t be afraid of the equation; it is very simple. For each training example i, it calculates the error made in the prediction, and then adds up all these errors to define our Cost Function J(θ).
The two terms inside the bracket are actually for the two cases: y=0 and y=1. When y=0, the first term vanishes, and we are left with only the second term. Similarly, when y=1, the second term vanishes, and we are left with only the first term.
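To make the formula concrete, here is a small NumPy sketch of the same cost function, reusing the sigmoid helper from the earlier snippet (the name compute_cost is illustrative):

```python
def compute_cost(theta, X, y):
    """Binary cross-entropy (log loss), averaged over all m training examples."""
    m = len(y)
    h = sigmoid(X @ theta)              # predicted probabilities h_theta(x)
    h = np.clip(h, 1e-15, 1 - 1e-15)    # avoid taking log(0)
    return -(1.0 / m) * np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))
```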
Gradient Descent Algorithm
We have successfully defined our Cost Function. But we need to minimize this loss to build a good predictive model. To do that, we have the Gradient Descent Algorithm.
Here we have plotted a graph between J(θ) and θ. Our objective is to find the deepest point (global minimum) of this function, i.e., the point where J(θ) is minimum.
Two things are required to find the deepest point:
- Derivative – to find the direction of the next step.
- α (Learning Rate) – magnitude of the next step
The idea is that you first select any random point on the function. Then you compute the derivative of J(θ) with respect to θ. The gradient points in the direction of steepest increase, so moving against it takes you towards the local minimum. Now multiply the resulting gradient by the Learning Rate α. The Learning Rate has no fixed value and is chosen based on the problem.
Now, subtract this result from θ to get the new θ.
This update of θ should be done simultaneously for every parameter θ_j.
Do these steps repeatedly until you reach the local or global minimum. By reaching the global minimum, you have achieved the lowest possible loss in your prediction.
Taking derivatives is simple; the basic calculus you did in high school is enough. The major issue is with the Learning Rate (α). Choosing a good learning rate is important and often difficult.
If you take a very small learning rate, each step will be too small, and hence it will take a lot of time to reach the minimum.
On the other hand, if you take a huge learning rate, you will overshoot the minimum and may never converge. There is no specific rule for the perfect learning rate.
You need to tweak it to prepare the best model.
The equation for Gradient Descent is:
Repeat until convergence:
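\theta_j := \theta_j - \alpha \, \frac{\partial J(\theta)}{\partial \theta_j} \qquad \text{(updating every } \theta_j \text{ simultaneously)}

For the logistic regression Cost Function above, this partial derivative works out to \frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \big( h_\theta(x^{(i)}) - y^{(i)} \big) x_j^{(i)}.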
So we can summarize the Gradient Descent Algorithm as:
- Start with random θ
- Loop until convergence:
- Compute Gradient
- Update θ
- Return θ
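Putting the pieces together, here is a minimal NumPy sketch of batch gradient descent for logistic regression; the names gradient_descent, learning_rate, and num_iterations are illustrative, and the learning rate and iteration count would need tuning for a real problem:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, learning_rate=0.1, num_iterations=1000):
    """Batch gradient descent: each update uses the full training set."""
    m, n = X.shape
    theta = np.zeros(n)                      # start with theta = 0 (random values also work)
    for _ in range(num_iterations):
        h = sigmoid(X @ theta)               # predictions for all m examples
        gradient = (X.T @ (h - y)) / m       # gradient of J(theta) over the whole dataset
        theta -= learning_rate * gradient    # simultaneous update of every theta_j
    return theta
```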
Stochastic Gradient Descent Algorithm
Now, the Gradient Descent Algorithm works well for minimizing the Cost Function, especially on small to medium datasets. But when we need to deal with bigger datasets, it turns out to be computationally slow. The reason is simple: every single parameter update requires computing the gradient over every training example in the dataset.
So think about all those calculations! They are massive, and hence there was a need for a slightly modified Gradient Descent Algorithm, namely the Stochastic Gradient Descent Algorithm (SGD).
The only difference between SGD and normal Gradient Descent is that, in SGD, we don’t deal with the entire training set at once. Instead, at each iteration we compute the gradient of the cost function for just a single randomly chosen example.
Doing so brings down the computation time per update by a huge margin, especially for large datasets. The path taken by SGD is haphazard and noisy (although the noise may sometimes help the algorithm escape a local minimum and reach the global minimum).
But that is okay, since we do not have to worry about the path taken; we only need to reach a minimal loss in less time.
So we can summarize the Stochastic Gradient Descent Algorithm as:
- Loop until convergence:
- Pick single data point ‘i’
- Compute Gradient over that single point
- Update θ
- Return θ
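A hedged sketch of the stochastic variant, reusing numpy and the sigmoid helper from the batch snippet above; the function name stochastic_gradient_descent is illustrative:

```python
def stochastic_gradient_descent(X, y, learning_rate=0.1, num_iterations=1000):
    """SGD: each update uses the gradient from a single random training example."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iterations):
        i = np.random.randint(m)             # pick a single data point 'i' at random
        h_i = sigmoid(X[i] @ theta)          # prediction for that one example
        gradient = (h_i - y[i]) * X[i]       # gradient estimated from that single point
        theta -= learning_rate * gradient
    return theta
```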
Mini-Batch Gradient Descent Algorithm
Mini-Batch Gradient Descent is another slight modification of the Gradient Descent Algorithm. It is somewhat in between Normal Gradient Descent and Stochastic Gradient Descent.
In Mini-Batch Gradient Descent, each update is computed on a small batch of the dataset, rather than on a single example or the entire dataset, and the loss is minimized batch by batch.
This process is often more efficient than both of the above Gradient Descent variants. The batch size can, of course, be anything you want.
But research and common practice suggest keeping it roughly within 1 to 100, with 32 being a popular choice. Hence, batch size = 32 is the default in many frameworks.
- Loop until convergence:
- Pick a batch of ‘b’ data points
- Compute Gradient over that batch
- Update θ
- Return θ
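For completeness, here is a minimal NumPy sketch of the mini-batch variant, again reusing the sigmoid helper from earlier; the name mini_batch_gradient_descent and the parameters batch_size and num_epochs are illustrative:

```python
def mini_batch_gradient_descent(X, y, batch_size=32, learning_rate=0.1, num_epochs=100):
    """Mini-batch gradient descent: each update uses a small batch of examples."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_epochs):
        indices = np.random.permutation(m)               # shuffle the data each epoch
        for start in range(0, m, batch_size):
            batch = indices[start:start + batch_size]    # pick a batch of 'b' data points
            h = sigmoid(X[batch] @ theta)
            gradient = (X[batch].T @ (h - y[batch])) / len(batch)
            theta -= learning_rate * gradient            # update theta from that batch
    return theta
```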
Conclusion
Now you have a theoretical understanding of Logistic Regression. You have learnt how to represent the logistic function mathematically, and you know how to measure the prediction error using the Cost Function.
You also know how you can minimize this loss using the Gradient Descent Algorithm.
Finally, you know which variation of the Gradient Descent Algorithm you should choose for your problem. upGrad provides a PG Diploma in Machine Learning and AI and a Master of Science in Machine Learning & AI that may guide you toward building a career. These courses explain the need for Machine Learning and the further steps to gather knowledge in this domain, covering varied concepts ranging from gradient descent algorithms to Neural Networks.
What is a gradient descent algorithm?
Gradient descent is an optimization algorithm for finding the minimum of a function. Suppose you want to find a minimum of a function f(x). Gradient descent involves three steps, repeated until convergence: (1) pick a starting point x, (2) compute the gradient ∇f(x) at that point, and (3) take a small step in the direction opposite to the gradient. The way to think about this is that the algorithm finds the slope of the function at a point and then moves in the direction opposite to the slope, i.e., downhill.
What is the sigmoid function?
The sigmoid function, or sigmoid curve, is a non-linear mathematical function whose shape closely resembles the letter S (hence the name). It is used in operations research, statistics, and other disciplines to model certain forms of real-valued growth. It is also used in a wide range of applications in computer science and engineering, especially in areas related to neural networks and artificial intelligence, where sigmoid functions commonly serve as activation functions in artificial neural networks.
What is Stochastic Gradient Descent Algorithm?
Stochastic Gradient Descent is one of the popular variations of the classic Gradient Descent algorithm for finding the minima of a function. Instead of using the entire dataset, the algorithm estimates the gradient from a single randomly chosen training example (or a small subset) at each step and moves in the direction opposite to that estimate. The objective is that, by continuously repeating this process, the algorithm will converge to a global or local minimum of the function.