When it comes to machine learning, and classification in particular, logistic regression is perhaps the most straightforward and most widely used algorithm. Since logistic regression is easy to understand and implement, it is perfect for beginners and for people just starting their machine learning or data science journey.
Although the name logistic regression might suggest an algorithm used for regression, the truth is far from it. Because of its nuances, logistic regression is better suited to classifying instances into well-defined classes than to performing regression tasks.
In a nutshell, this algorithm takes the output of linear regression and applies an activation function to it before giving us the result. The activation function logistic regression uses is the sigmoid function (also known as the logistic function). Because of the sigmoid function's properties, instead of producing continuous values, the model gives a number in the range of zero to one. After setting a threshold value, turning the output of logistic regression into a classification becomes a breeze.
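To make this concrete, here is a minimal sketch of the idea; the weights, bias, and input instance below are made up purely for illustration:

```python
import numpy as np

def sigmoid(z):
    """Squash any real-valued score into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative (made-up) weights, bias, and a single input instance
w = np.array([0.8, -0.4])
b = 0.1
x = np.array([2.0, 1.0])

z = np.dot(w, x) + b   # the linear-regression-style raw score
p = sigmoid(z)         # a probability strictly between 0 and 1

# A threshold (commonly 0.5) turns the probability into a class label
label = 1 if p >= 0.5 else 0
```

The linear part alone would output any real number; the sigmoid wrapper is what clamps it into (0, 1) so that a threshold can read it as a class decision.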
We all know how the field of data science and machine learning is evolving. More opportunities are being created daily. So, in this competitive cut-throat world, making sure you have the right knowledge is key to ensuring a good placement in the company of your dreams. To aid you in this endeavor of yours, we have prepared a list of logistic regression interview questions that should help you prepare for the journey to become a professional data scientist or a machine learning professional.
Logistic Regression Interview Questions & Answers
Q1. Answer using either TRUE or FALSE. Is logistic regression a type of supervised machine learning algorithm?
Ans. Yes, the answer to this question would be TRUE because, indeed, logistic regression is a supervised machine learning algorithm. The simple reason why lies in the way this algorithm works. To get output from logistic regression, you will have to feed it with data first.
You will have to provide the instances and the correct labeling of these instances for it to be able to learn from them and make accurate predictions. A supervised machine learning algorithm would need both a target variable (Y) and the class instances or the variable used to provide input information (X) to be able to train and make predictions successfully.
Q2. Answer using either TRUE or FALSE. Is logistic regression mainly used for classification?
Ans. Yes, the answer to this question is TRUE. Indeed, logistic regression is primarily used for classification tasks rather than actual regression; for regression we use linear regression. Because of the similarity between the two names, it is easy to get confused, so do not make this mistake. In logistic regression, we apply the logistic function, which is nothing but the sigmoid activation function, and that is what makes it well suited to classification tasks.
Q3. Answer this question using TRUE or FALSE. Can a neural network be implemented that mimics the behavior of a logistic regression algorithm?
Ans. Yes, the answer would be TRUE. Neural networks are also known as universal approximators; they can be used to mimic almost any machine learning algorithm. To put things into perspective, if you are using the Keras API of TensorFlow 2.0, all you would have to do is add a single layer with a sigmoid activation function to a sequential model.
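As a sketch of the idea without pulling in TensorFlow, here is a single "neuron" with a sigmoid activation trained by plain gradient descent; its forward pass is exactly logistic regression (in Keras this would be a Sequential model containing one Dense(1, activation="sigmoid") layer). The toy data, learning rate, and epoch count are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OneNeuronNet:
    """A single dense unit with sigmoid activation: the same forward
    pass as logistic regression."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, X):
        return sigmoid(X @ self.w + self.b)

    def fit(self, X, y, epochs=2000):
        for _ in range(epochs):
            p = self.predict_proba(X)
            grad = (p - y) / len(y)      # gradient of the log-loss
            self.w -= self.lr * (X.T @ grad)
            self.b -= self.lr * grad.sum()
        return self

# Tiny, linearly separable toy data
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
net = OneNeuronNet(n_features=1).fit(X, y)
preds = (net.predict_proba(X) >= 0.5).astype(int)
```

The point is not the training loop itself but that the model's prediction, sigmoid of a weighted sum, is indistinguishable from logistic regression.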
Q4. Answer this question using either TRUE or FALSE. Can we use logistic regression to solve a multi-class classification problem?
Ans. The short answer would be TRUE. The long answer, however, requires a little more thought. A single binary logistic regression model cannot, by itself, separate more than two classes. You would need either a neural network with a softmax activation function or a more complex machine learning algorithm to successfully predict many classes of your input variable.
However, there is one way in which you can actually use logistic regression to solve a multi-class classification problem: the one-vs.-all approach. You train n classifiers (where n is the number of classes), each of them predicting just one class. So, in a three-class problem (say A, B, and C), you would train three classifiers: one to predict A vs. not-A, another to predict B vs. not-B, and a final one to predict C vs. not-C. You then combine the outputs of all three models to perform multi-class classification using nothing but logistic regression.
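The one-vs.-all scheme above can be sketched as follows; the one-feature toy data and the gradient-descent hyperparameters are made up for illustration (in practice a library such as scikit-learn automates this):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_binary(X, y, lr=0.1, epochs=3000):
    """Plain binary logistic regression trained with gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        grad = (sigmoid(X @ w + b) - y) / len(y)
        w -= lr * (X.T @ grad)
        b -= lr * grad.sum()
    return w, b

def one_vs_all(X, y, classes):
    """Train one 'class k vs. the rest' model per class."""
    return {k: fit_binary(X, (y == k).astype(float)) for k in classes}

def predict(models, X):
    """Pick, for each row, the class whose binary model is most confident."""
    classes = list(models)
    probs = np.vstack([sigmoid(X @ w + b) for w, b in models.values()])
    return np.array(classes)[probs.argmax(axis=0)]

# Three classes spread along a single feature
X = np.array([[0.0], [0.5], [2.0], [2.5], [4.0], [4.5]])
y = np.array(["A", "A", "B", "B", "C", "C"])
models = one_vs_all(X, y, ["A", "B", "C"])
preds = predict(models, X)
```

Each binary model only knows "my class vs. everything else"; the final prediction simply goes to whichever of the n models reports the highest probability.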
Q5. Choose one of the options from the list below. Which underlying method is used to fit the training data in logistic regression?
- Jaccard Distance
- Maximum Likelihood
- Least Square error
- None of the options which are mentioned above.
Ans. The answer is B, Maximum Likelihood. It is tempting to select option C, Least Square error, because that is the method used in linear regression. However, in logistic regression we do not use the least squares approximation to fit the training instances to the model; we use maximum likelihood estimation instead.
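The idea can be sketched numerically: maximizing the likelihood of the observed labels is the same as minimizing the negative log-likelihood (the log-loss). The label vector and the two sets of predicted probabilities below are made up for illustration:

```python
import math

def neg_log_likelihood(y_true, y_prob):
    """Negative log-likelihood of Bernoulli labels; maximum-likelihood
    fitting picks the coefficients that make this as small as possible."""
    return -sum(
        y * math.log(p) + (1 - y) * math.log(1 - p)
        for y, p in zip(y_true, y_prob)
    )

y_true = [1, 0, 1, 1]
confident = [0.9, 0.1, 0.8, 0.7]  # probabilities close to the labels
hedging   = [0.6, 0.4, 0.5, 0.5]  # probabilities stuck near 0.5

# The confident model assigns the observed labels a higher likelihood,
# i.e., a lower negative log-likelihood.
```

Fitting by maximum likelihood therefore pushes the predicted probabilities toward the observed labels, which is exactly what a classifier should do.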
Check out: Machine Learning Project Ideas
Q6. Choose one of the options from the list below. Which metric would we not be able to use to measure the correctness of a logistic regression model?
- The area under the receiver operating characteristics curve (or AUC-ROC score)
- Mean squared error (or MSE)
Ans. The correct option is Mean Squared Error (MSE). Since logistic regression is actually a classification algorithm rather than a basic regression algorithm, we cannot use mean squared error to determine the performance of a logistic regression model. The main reason lies in the kind of output we receive from the model and the inability to assign a meaningful numeric value to a class instance.
Q7. Choose one of the options from the list below. AIC happens to be an excellent metric for judging the performance of a logistic regression model, playing a role similar to the one R-squared plays for a linear regression algorithm. What is actually true about AIC?
- The model with a low AIC score is generally preferred.
- The model which has a huge AIC score is actually preferred.
- The choice of the model just from the basis of the AIC score highly depends on the situation.
- None of the options which are mentioned above.
Ans. The model with the lowest value of AIC is preferred, so the answer to the question would be option A. The main reason we choose the model with the lowest possible AIC is that the penalty term added to the criterion discourages overfitting. Yes, with the AIC, or Akaike Information Criterion, the lower the value, the better the fit.
In practice, we prefer models that are neither underfitted (the model is not complex enough to capture the intricacies present in the data, so it cannot generalize well) nor overfitted (the model has fitted the training data so closely that it has lost the ability to make more general predictions). So, we choose the model with a reasonably low AIC score to avoid both under- and overfitting.
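This trade-off can be sketched with the AIC formula, AIC = 2k − 2·ln(L̂), where k is the number of fitted parameters and L̂ is the maximized likelihood. The log-likelihood values and parameter counts below are made up for illustration:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*ln(L-hat).
    The 2k term penalises every extra parameter, discouraging overfitting."""
    return 2 * n_params - 2 * log_likelihood

# A second model buys a small likelihood gain with many more parameters
simple_model = aic(log_likelihood=-120.0, n_params=3)    # 246.0
bigger_model = aic(log_likelihood=-118.5, n_params=12)   # 261.0
# Lower AIC wins, so the simpler model is preferred here
```

Even though the bigger model fits the data slightly better, its parameter penalty outweighs the gain, which is exactly how AIC discourages overfitting.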
Q8. Answer using either TRUE or FALSE. Do we need to standardize the values in the feature columns before we feed the data into a logistic regression model for training?
Ans. No, we do not strictly need to standardize the values in the feature space used to train the logistic regression model, so the answer to this question would be FALSE. Standardizing the values mainly helps the optimization procedure (usually gradient descent) that is responsible for making the algorithm converge. Since this algorithm is relatively simple, scaling the features is not required for it to work, although it can still speed up training.
Q9. Choose one of the options from the list below. Which of these techniques do we use to perform variable selection?
- Ridge Regression
- LASSO regression
- None of the options which are mentioned
- Both LASSO and Ridge Regression
Ans. The answer to this question is B, LASSO regression. The reason is simple: the L1 penalty used in LASSO regression can shrink the coefficients of some features to exactly zero. A zero coefficient means the feature has no effect on the final outcome of the function, so those variables turn out to be less important than we thought, and in this way LASSO regression performs variable selection for us.
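The zeroing effect can be sketched with the soft-thresholding operator that an L1 penalty induces; the coefficient values and penalty strength below are made up for illustration:

```python
import numpy as np

def soft_threshold(coef, penalty):
    """The proximal step behind the L1 (LASSO) penalty: shrink every
    coefficient toward zero and set the small ones exactly to zero."""
    return np.sign(coef) * np.maximum(np.abs(coef) - penalty, 0.0)

coefs = np.array([2.5, -0.3, 0.05, -1.8, 0.2])   # unregularised fit
sparse = soft_threshold(coefs, penalty=0.5)
# ≈ [2.0, 0.0, 0.0, -1.3, 0.0]: only two features survive
```

An L2 (Ridge) penalty, by contrast, only shrinks coefficients toward zero without ever making them exactly zero, which is why it does not perform variable selection.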
Q10. Choose one of the options from the list below. Assume that you have a fair coin in your possession with the aim to find out the odds of getting heads. What would be your calculated odds?
- Would the odds of getting heads be 0
- Would the odds of getting heads be 1
- Would the odds of getting heads be 0.5
- None of the options which are mentioned above.
Ans. To answer this question successfully, you need to understand the definition of odds. Odds are defined as the ratio of two probabilities: the probability of an event happening to the probability of it not happening. For a fair coin, the probability of heads and the probability of not-heads are the same, so the odds of getting heads are one, and the answer is option B.
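In code, the calculation is a one-liner:

```python
def odds(p):
    """Odds of an event: P(happening) / P(not happening)."""
    return p / (1 - p)

fair_coin = odds(0.5)   # 0.5 / 0.5 = 1.0
```

Note how odds differ from probability: a probability of 0.8, for instance, corresponds to odds of 4 to 1, not 0.8.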
Q11. Choose the correct answer from the options below. The logit function is defined as the log of the odds function. What is the range of the logit function over the domain [0, 1]?
- (-infinity, +infinity)
- (0, +infinity)
- (-infinity, 0)
- (0, 1)
Ans. The probability function takes the value passed to it and turns it into a probability, so its range is clamped between zero and one. The odds function then takes that probability and stretches the range to run from zero to infinity.
So, the effective input to the log function ranges from zero to infinity, and we know that over this domain the log function spans the entire real number line, from negative infinity to positive infinity. So, the answer to this question is option A.
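A quick numeric check of this reasoning (the sample probabilities are arbitrary):

```python
import math

def logit(p):
    """Log of the odds: log(p / (1 - p))."""
    return math.log(p / (1 - p))

# As p sweeps the domain (0, 1), the logit covers the whole real line:
values = [logit(p) for p in (0.001, 0.25, 0.5, 0.75, 0.999)]
# logit(0.5) = 0; p near 0 gives large negative values, p near 1 large positive
```

This is also why the inverse of the logit, the sigmoid, can map any real-valued linear score back into a probability.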
Q12. Choose the option which you think is TRUE from the list below:
- The error values in the case of Linear regression have to follow a normal distribution, but in the case of logistic regression, the values do not have to follow a standard normal distribution.
- The error values in the case of Logistic regression have to follow a normal distribution, but in the case of Linear regression, the values do not have to follow a standard normal distribution.
- The error values in the case of both Linear regression and Logistic regression have to follow a normal distribution.
- The error values in the case of both Linear regression and Logistic regression do not have to follow a normal distribution.
Ans. The only truthful statement among these is the first one, so the answer to the question becomes option A.
Q13. Choose the correct option(s) from the list below. Suppose you have applied a logistic regression model to some given data, and the accuracy you got is X on the training set and Y on the test set. Now you would like to add new features to the model. What, according to you, should happen?
- The Accuracy X, which we got in the training data, should increase.
- The Accuracy X, which we got from the training data, should decrease.
- The Accuracy Y, which we got from the test data, should decrease.
- The accuracy Y, which we got from the test data, should increase or remain the same.
Ans. Training accuracy depends on how well the model fits the data it has already seen and learned from. If we increase the number of features fed into the model, the training accuracy X will grow, because the model becomes more complex and fits the training data with the increased number of features more closely.
The testing accuracy Y, on the other hand, will increase only if the added feature is a genuinely significant one; otherwise, the model's accuracy on the test set will remain more or less the same. So, the answer to this question would be both options A and D.
Q14. Choose the right option from the following options regarding the one-vs.-all method in terms of logistic regression.
- We would need a total of n models to classify between n number of classes correctly.
- We would need an n-1 number of models to classify between n number of classes.
- We would need only one single model to classify between n number of classes successfully.
- None of the options which are mentioned above.
Ans. To classify between n different classes, we are going to need n models in a One vs. All approach.
Q15. Look at the graph below and answer the question by choosing one option from the listed options below. How many local minima do you see in the chart?
- There is just one local minimum in the graph.
- There are two local minima in this graph.
- There are three local minima in this graph.
- There are four local minima in this graph.
Ans. Since the slope of the graph becomes zero at four distinct points (where the curve is U-shaped), it is safe to say that the graph has four local minima, so the answer would be D.
Also Read: Linear Regression Vs. Logistic Regression
If you’re interested to learn more about machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.
Is logistic regression difficult to learn?
When it comes to data science, both logistic and linear regression are used extensively to solve different types of computational problems, and to work efficiently in the field you should understand and be comfortable with both kinds of regression models. As the name suggests, logistic regression relies on a slightly more advanced set of equations, so it is somewhat more difficult to learn than linear regression. However, if you have a basic understanding of how the math works, you can build on it to implement models using packages in R or Python.
How important is logistic regression in data science?
To become a successful data scientist, it is essential to understand the whole pipeline of acquiring and processing data, understanding the data and building a model, evaluating outcomes, and deploying the result, and logistic regression is invaluable for understanding this pipeline. When you understand logistic regression, you automatically develop a much better understanding of machine learning concepts. Moreover, you can sometimes solve fairly complicated problems using only logistic regression. It is a vital statistical tool, and statistics is an inseparable part of machine learning. And if you wish to study neural networks, knowing logistic regression will give you an excellent head start.
Is logistic regression actually useful?
In spite of its name, logistic regression is in reality a classification framework more than a regression one. It provides a simple and efficient algorithm for solving binary classification problems in machine learning. It is easy to implement and achieves excellent performance when the classes are linearly separable. However, when the decision boundaries are non-linear, logistic regression tends to underperform, and more complex algorithms such as neural networks can be more efficient and powerful.