
Logistic Regression in R: Equation Derivation [With Example]

Last updated: 28th Jul, 2020 · Read Time: 11 Mins

In this article, we’ll discuss one of the most common yet challenging concepts in machine learning: logistic regression. You’ll learn what logistic regression is and see the derivation of the logistic regression equation in this detailed article. 

We’ve also shared an example of logistic regression in R to help you understand the concept more easily. However, make sure you know all the ideas reasonably well before you work on the example. It also helps to be familiar with linear regression, because the two concepts are closely linked.

What is Logistic Regression?

Logistic regression predicts a binary outcome according to a set of independent variables. It is a classification algorithm that predicts the probability of an event’s occurrence using a logit function and fitting data to it. Logistic regression is different from linear regression as it can predict the likelihood of a result that can only have two values. Using linear regression is not suitable when you have a binary variable because:

  • The linear regression would predict values outside the required range of 0 to 1
  • The regression would not distribute the two outcome classes sensibly around a single fitted line

Logistic regression doesn’t produce a straight line as linear regression does. It produces an S-shaped logistic curve whose values always lie between 0 and 1. 


Check out: R Project Ideas

Logistic Regression Equation Derivation

We can derive the logistic regression equation from the linear regression equation. Logistic regression falls under the class of GLM algorithms (Generalized Linear Models). Nelder and Wedderburn introduced this framework in 1972 as a way of extending linear regression to problems it could not previously solve. They proposed a family of related models, with logistic regression as a special case. 

We know that the equation of a generalized linear model is the following:

g(E(y)) = β₀ + β₁x₁

g() stands for the link function, E(y) stands for the expectation of the target variable, and the RHS (right-hand side) is the linear predictor. The link function ‘links’ the expectation of y with the linear predictor. 

Suppose we have data of 100 clients, and we need to predict whether a client will buy a specific product or not. As we have a categorical outcome variable, we must use logistic regression. 

We’ll start with a linear regression equation:

g(y) = β₀ + β(income) — (1)

Here, we’ve kept the independent variable as ‘income’ for ease of understanding. 

Our focus is on the probability of the resultant dependent variable (will the customer buy or not?). As we’ve already discussed, g() is our link function, and it is based on the Probability of Success (p) and Probability of Failure (1-p). p should have the following qualities:

  • p should always be positive 
  • p should always be less than or equal to 1

Now, we’ll work in terms of p and derive our logistic regression equation. 

As probability is always positive, we’ll express the linear equation in exponential form and get the following result:

p = exp(β₀ + β(income)) = e^(β₀ + β(income)) — (2)

We’ll have to divide p by a number greater than p to make the probability less than 1:

p = exp(β₀ + β(income)) / (exp(β₀ + β(income)) + 1) = e^(β₀ + β(income)) / (e^(β₀ + β(income)) + 1) — (3)

By using eq. (1), (2), and (3), we can define p as:

p = e^y / (1 + e^y) — (4)

Here, p is the probability of success, so 1-p must be the probability of failure:

q = 1 – p = 1 – (e^y / (1 + e^y)) = 1 / (1 + e^y) — (5)

Let’s now divide (4) by (5):

p / (1 – p) = e^y

If we take log on both sides, we get the following:

log(p / (1 – p)) = y

This is the link function. When we substitute the value of y we had established previously, we get:

log(p / (1 – p)) = β₀ + β(income)

And there we have it: the logistic regression equation. Since it gives the probability of an outcome, its value always lies between 0 and 1. 
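The derivation above can be checked numerically in R with a few lines. This is a minimal sketch: the `logistic` helper below is our own illustrative function implementing eq. (4), not part of any package (base R’s built-in `plogis()` computes the same thing).

```r
# The logistic (sigmoid) function from eq. (4): it maps any
# linear-predictor value y onto the open interval (0, 1).
logistic <- function(y) exp(y) / (1 + exp(y))

y <- c(-5, 0, 2.5)    # example linear-predictor values
p <- logistic(y)
p                     # all values lie strictly between 0 and 1
log(p / (1 - p))      # recovers y, confirming the log-odds link function
```

The last line is the link function log(p / (1 – p)) applied to the probabilities, and it returns the original y values, which is exactly what the derivation promises.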

Read About: 9 Interesting Linear Regression Project Ideas & Topics For Beginners

Example of Logistic Regression in R

In our example of logistic regression in R, we’re using admissions data from UCLA (University of California, Los Angeles). Here, we have to create a model that predicts a student’s chances of getting admitted from the data we have. We have four variables: admit, GRE score, GPA, and the rank of the student’s undergraduate college. 

df <- read.csv("")
str(df)

## 'data.frame': 400 obs. of 4 variables:
## $ admit: int 0 1 1 1 0 1 1 0 1 0 …
## $ gre  : int 380 660 800 640 520 760 560 400 540 700 …
## $ gpa  : num 3.61 3.67 4 3.19 2.93 3 2.98 3.08 3.39 3.92 …
## $ rank : int 3 3 1 4 4 2 1 2 3 2 …


All variables are numeric or integer. Let’s check for missing values:

sum(is.na(df))

## [1] 0

There are no null values. We also find that there are more rejections than acceptances, because the mean of the admit variable is smaller than 0.5.

You should also check that admits are distributed reasonably across every category of rank. If a rank has only a handful of admits (or rejects), you may not want to use that rank in your analysis. 


xtabs(~ admit + rank, data = df)

##      rank
## admit  1  2  3  4
##     0 28 97 93 55
##     1 33 54 28 12

Let’s fit our model now:

df$rank <- as.factor(df$rank)
logit <- glm(admit ~ gre + gpa + rank, data = df, family = "binomial")
summary(logit)



## Call:
## glm(formula = admit ~ gre + gpa + rank, family = "binomial",
##     data = df)
##
## Deviance Residuals:
##     Min      1Q  Median      3Q     Max
## -1.6268 -0.8662 -0.6388  1.1490  2.0790
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.989979   1.139951  -3.500 0.000465 ***
## gre          0.002264   0.001094   2.070 0.038465 *
## gpa          0.804038   0.331819   2.423 0.015388 *
## rank2       -0.675443   0.316490  -2.134 0.032829 *
## rank3       -1.340204   0.345306  -3.881 0.000104 ***
## rank4       -1.551464   0.417832  -3.713 0.000205 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 499.98 on 399 degrees of freedom
## Residual deviance: 458.52 on 394 degrees of freedom
## AIC: 470.52
##
## Number of Fisher Scoring iterations: 4

You must’ve noticed that we converted the rank variable from integer to factor before fitting the model. Make sure that you do the same. 


Final Result:

Suppose a student has a GPA of 3.8, a GRE score of 790, and studied at a rank-1 college. Let’s use our model to estimate their chances of getting admitted:

x <- data.frame(gre = 790, gpa = 3.8, rank = as.factor(1))
predict(logit, x)

## 1
## 0.85426

Note that predict() returns the log-odds (the linear predictor) by default, not a probability. To get a probability, pass type = "response":

predict(logit, x, type = "response")

## 1
## 0.70146

Our model predicts that the student has about a 70% chance of getting admitted. 

Also Read: Machine Learning Project Ideas

Final Thoughts

That’s it for this article. We hope you found it helpful. If you have any questions or thoughts on logistic regression and its related topics, please share them in the comment section below. 

If you are curious to learn more about R and data science, check out IIIT-B & upGrad’s Executive PG Program in Data Science, which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship and 1-on-1 sessions with industry experts, 400+ hours of learning, and job assistance with top firms.

Regularization Techniques in Logistic Regression

L1 and L2 Regularization:

  • Regularization methods like L1 and L2 regularization are crucial in logistic regression in R.
  • These techniques address issues like overfitting and improve the model’s generalization capabilities.
  • Implement regularization in R using packages like glmnet or caret.
  • Penalty terms are introduced to the logistic regression equation, controlling the impact of each independent variable and enhancing the model’s predictive performance.
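As a sketch of how this looks in practice (assuming the glmnet package is installed and the `df` admissions data frame and formula from the example above), an L1-regularized logistic regression could be fitted like this:

```r
library(glmnet)

# glmnet needs a numeric design matrix; drop the intercept column,
# since glmnet adds its own intercept.
X <- model.matrix(admit ~ gre + gpa + rank, data = df)[, -1]
y <- df$admit

# alpha = 1 selects the L1 (lasso) penalty; alpha = 0 would give L2 (ridge).
# cv.glmnet chooses the penalty strength lambda by cross-validation.
cv_fit <- cv.glmnet(X, y, family = "binomial", alpha = 1)

coef(cv_fit, s = "lambda.min")   # coefficients at the best lambda
```

With the lasso penalty, some coefficients may be shrunk exactly to zero, which is why L1 regularization doubles as a feature-selection tool.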

Model Evaluation and Performance Metrics: To assess the accuracy and reliability of a logistic regression model, it’s essential to evaluate its performance. In R, you can utilize various performance metrics like accuracy, precision, recall, F1-score, and area under the ROC curve (AUC-ROC). These metrics provide insights into the model’s ability to correctly classify instances and quantify the trade-offs between true positives and false positives. Techniques such as cross-validation and train-test splits help assess the model’s robustness and prevent overfitting.
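A base-R sketch of some of these metrics, assuming the `logit` model and `df` data frame fitted in the example above and a 0.5 probability threshold:

```r
# fitted() on a binomial glm returns predicted probabilities;
# threshold them at 0.5 to get hard class labels.
pred <- ifelse(fitted(logit) > 0.5, 1, 0)

cm <- table(Predicted = pred, Actual = df$admit)   # confusion matrix

accuracy  <- sum(diag(cm)) / sum(cm)
precision <- cm["1", "1"] / sum(cm["1", ])   # TP / (TP + FP)
recall    <- cm["1", "1"] / sum(cm[, "1"])   # TP / (TP + FN)
f1        <- 2 * precision * recall / (precision + recall)

c(accuracy = accuracy, precision = precision, recall = recall, f1 = f1)
```

For AUC-ROC, packages such as pROC provide an `roc()` function that takes the true labels and the predicted probabilities directly.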

Dealing with Imbalanced Data

Imbalanced datasets, where one class has significantly fewer instances than the other, are prevalent in real-world scenarios. Handling imbalanced data is crucial to prevent biased predictions and ensure reliable model performance. In logistic regression in R programming, where the goal is to accurately classify instances into binary outcomes, imbalanced data can pose challenges and lead to skewed results.

R provides several effective techniques to address the issue of imbalanced datasets and improve the model’s performance in such scenarios. These techniques include oversampling, undersampling, and the synthetic minority oversampling technique (SMOTE). Let’s explore each technique in more detail:

  1. Oversampling: Oversampling involves increasing the number of instances in the minority class to match the majority class. This technique aims to balance the dataset by creating synthetic or replicated samples of the minority class. By increasing the representation of the minority class, oversampling helps the model capture the patterns and characteristics of both classes more accurately.
  2. Undersampling: Undersampling, on the other hand, involves reducing the number of instances in the majority class to achieve a balanced dataset. This technique randomly removes instances from the majority class, eliminating the class imbalance. Undersampling can be a useful approach when the majority class has a large number of redundant or similar instances, and reducing their quantity does not significantly impact the overall information contained in the dataset with logit in R.
  3. Synthetic Minority Oversampling Technique (SMOTE): SMOTE is a popular technique that generates synthetic instances of the minority class to balance the dataset. Instead of simply replicating instances, SMOTE creates synthetic samples by interpolating between existing instances of the minority class. By introducing synthetic examples, SMOTE diversifies the dataset and helps the model learn more robust decision boundaries.

By applying these techniques, you can address the class imbalance issue in logistic regression. This balancing act allows the model to train on a representative dataset and make accurate predictions for both classes. It is important to note that the choice of oversampling, undersampling, or SMOTE depends on the specific characteristics of the dataset and the problem at hand. Experimentation and evaluation of different techniques are crucial to find the most effective approach.
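Random oversampling of the minority class can be sketched in base R as follows (using the `df` admissions data from the example above; SMOTE itself is available in packages such as smotefamily):

```r
set.seed(42)  # for reproducible sampling

minority <- df[df$admit == 1, ]
majority <- df[df$admit == 0, ]

# Sample minority rows with replacement until both classes are equal in size.
extra <- minority[sample(nrow(minority),
                         nrow(majority) - nrow(minority),
                         replace = TRUE), ]

balanced <- rbind(majority, minority, extra)
table(balanced$admit)   # both classes now have the same count
```

Undersampling is the mirror image: sample `nrow(minority)` rows from `majority` without replacement instead of replicating minority rows.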

Multicollinearity and Feature Selection


  • Multicollinearity refers to a high correlation among independent variables in logistic regression.
  • It leads to unstable coefficient estimates and decreased interpretability.
  • Detect and mitigate multicollinearity in R using methods like variance inflation factor (VIF) analysis and correlation matrices.
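Assuming the car package is installed and the `logit` model from the example above, VIF values can be inspected as follows; values well above 5 (or 10, by some conventions) suggest problematic collinearity:

```r
library(car)

vif(logit)                   # variance inflation factor per predictor

cor(df[, c("gre", "gpa")])   # pairwise correlation of the numeric predictors
```

If a predictor shows a very high VIF, common remedies are dropping it, combining correlated predictors, or switching to a penalized model such as ridge regression.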

Feature Selection Techniques:

  • Feature selection helps identify the most influential variables and improves model efficiency.
  • R offers various feature selection techniques such as stepwise regression, lasso regression, and recursive feature elimination.
  • These techniques assist in selecting a subset of relevant features and improving model interpretability and performance.

Interpreting Logistic Regression Coefficients: Understanding the impact of each independent variable on the outcome is a fundamental aspect of logistic regression. The estimated coefficients offer insights into the direction and magnitude of the relationships between predictors and the log odds of the binary outcome. By exponentiating the coefficients, they can be interpreted as odds ratios, indicating how the odds of the outcome change with a unit increase in the predictor. R’s summary output of the logistic regression model provides these coefficients along with their standard errors, z-values, and p-values.
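For the admissions model fitted earlier, the odds ratios follow directly by exponentiating the coefficients:

```r
# Exponentiate coefficients to get odds ratios: the multiplicative
# change in the odds of admission per one-unit increase in a predictor.
exp(coef(logit))

# Wald confidence intervals, also on the odds-ratio scale.
exp(confint.default(logit))
```

An odds ratio above 1 (e.g. for gpa) means the predictor raises the odds of admission; one below 1 (e.g. for rank2 through rank4, relative to rank 1) lowers them.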


Rohit Sharma

Blog Author
Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.

Frequently Asked Questions (FAQs)

1. How are logistic regression and linear regression different from each other?

Linear regression's goal is to identify the best-fitting line, but logistic regression goes one step farther and fits the line values to the sigmoid curve. The main difference between these two methods is that logistic regression is applied when the dependent variable is binary. When the dependent variable is continuous and the regression line is linear, linear regression is used. While the ordinary least squares technique is used to estimate linear regression, the maximum likelihood estimation method is used to estimate logistic regression. The output of linear regression is continuous, but that of logistic regression has only a restricted number of discrete potential values.

2. When is the use of logistic regression helpful?

Logistic regression is used to predict a categorical dependent variable; the independent variables themselves may be measured on a continuous scale. Logistic regression analysis is useful for estimating the likelihood of an occurrence and determining the probability of membership in either of two classes. Only classification and probability outcomes may be predicted using logistic regression. It can be applied to a variety of classification problems such as spam detection, diabetes prediction, cancer diagnosis, and so on.

3. What are the limitations of using logistic regression?

1. Since logistic regression has a linear decision surface, it cannot address non-linear issues.

2. The logistic regression algorithm is sensitive to outliers.

3. As both scaling and normalization are key criteria of Logistic Regression, data preparation can be time consuming.

4. If a feature exists that completely separates two classes, the model can no longer be trained. This is termed as 'complete separation.'

5. If the number of observations is fewer than the number of features, logistic regression should not be applied as it may result in overfitting.

6. Another disadvantage is that each data point in logistic regression needs to be independent of all other data points. When observations are connected, the model tends to overestimate the relevance of those observations.
