Multiple Linear Regression in R [With Graphs & Examples]

As a data scientist, you are frequently asked to perform predictive analysis across many projects. Regression analysis is a statistical approach for establishing a relationship between a dependent variable and a set of independent variables. When that relationship is modelled with a straight line, the approach is called linear regression, which comes in two basic types: simple and multiple linear regression.

R is one of the most important languages for data science and analytics, which makes multiple linear regression in R especially worth knowing. It describes the scenario where a single response variable Y depends linearly on multiple predictor variables.

What is Linear Regression?

Linear regression models are used to show or predict the relationship between a dependent and an independent variable. When two or more independent variables are used in the regression analysis, the model is no longer a simple linear regression but a multiple regression model.

Simple linear regression is used to predict the value of one variable from another variable. In linear regression, a straight line represents the relationship between the two variables.

In multiple regression, there is a linear relationship between a dependent variable and two or more independent variables. The relationship can also be non-linear, in which case the dependent and independent variables do not follow a straight line.

Pictorial representation of Multiple linear regression model predictions

Both linear and non-linear regression track a response using two or more variables. Non-linear regression, however, is built on assumptions refined by trial and error and is comparatively difficult to execute.

What is Multiple Linear Regression?

Multiple linear regression is a statistical analysis technique used to predict a variable’s outcome based on two or more variables. It is an extension of simple linear regression and is also known as multiple regression. The variable to be predicted is the dependent variable, and the variables used to predict its value are known as independent or explanatory variables.

Multiple linear regression enables analysts to determine the variation of the model and each independent variable’s relative contribution to it. Multiple regression is of two types, linear and non-linear regression.

Multiple Regression Formula

The multiple regression with three predictor variables (x) predicting variable y is expressed as the following equation:

 y = z0 + z1*x1 + z2*x2 + z3*x3

The “z” values represent the regression weights, or beta coefficients. They measure the association between each predictor variable and the outcome.

  • y is the dependent or predicted variable
  • z0 is the y-intercept, i.e., the value of y when x1, x2, and x3 are all 0
  • z1, z2, and z3 are the regression coefficients, representing the change in y for a one-unit change in x1, x2, and x3, respectively.
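To make the formula concrete, here is a minimal R sketch that fits this three-predictor model on simulated data (the variable names x1, x2, x3 and the simulated values are purely illustrative):

# Simulate a small illustrative dataset with three predictors
set.seed(42)
df <- data.frame(x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100))
df$y <- 1 + 2*df$x1 - 0.5*df$x2 + 0.3*df$x3 + rnorm(100)

# Fit y = z0 + z1*x1 + z2*x2 + z3*x3 and read off the z values
fit <- lm(y ~ x1 + x2 + x3, data = df)
coef(fit)  # z0 (Intercept), z1, z2, z3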

Assumptions of Multiple Linear Regression

Now that we have covered the basics of multiple regression and its formula, let us look at the assumptions on which multiple linear regression is based, detailed below:

i. Relationship Between Dependent And Independent Variables

The dependent variable relates linearly to each independent variable. To check for linear relationships, create a scatterplot and inspect it for linearity. If the relationship in the scatterplot is non-linear, either perform a non-linear regression or transform the data using statistical software.
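For instance, a quick way to eyeball linearity in R is a scatterplot matrix (a minimal sketch, reusing the illustrative df data frame from the earlier example):

pairs(df)          # scatterplot matrix of all variable pairs
plot(df$x1, df$y)  # or a single predictor against the response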

ii. The Independent Variables Are Not Much Correlated

The data should not display multicollinearity, which happens when the independent variables are highly correlated with each other. Multicollinearity makes it difficult to identify which specific variable contributes to the variance in the dependent variable.
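A quick check in R is the correlation matrix of the predictors, or the variance inflation factor (VIF) from the car package (a sketch reusing the df and fit objects from above; car must be installed):

cor(df[, c("x1", "x2", "x3")])  # pairwise correlations between predictors
library(car)
vif(fit)  # VIF values well above 5-10 are commonly taken to suggest multicollinearity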

iii. The Residual Variance is Constant

Multiple linear regression assumes that the error of the residuals is similar at every point of the linear model. This is known as homoscedasticity. When the data analysis is done, the standardized residuals are plotted against the predicted values to determine whether the points are evenly distributed across all values of the independent variables.
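In R, a residuals-versus-fitted plot gives this check directly (a sketch using the fit object from above):

plot(fitted(fit), rstandard(fit),
     xlab = "Fitted values", ylab = "Standardized residuals")
abline(h = 0, lty = 2)  # points should scatter evenly around this line
# plot(fit, which = 1) produces a similar built-in diagnostic plot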

iv. Observation Independence

The observations should be independent of each other, and so should the residual values. The Durbin-Watson statistic works best for checking this.

The statistic ranges from 0 to 4: a value between 0 and 2 indicates positive autocorrelation, a value between 2 and 4 indicates negative autocorrelation, and the midpoint of 2 indicates no autocorrelation.
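The Durbin-Watson test is available in the lmtest package (a sketch using the fit object from above, assuming lmtest is installed; car::durbinWatsonTest is an alternative):

library(lmtest)
dwtest(fit)  # a statistic near 2 suggests no autocorrelation in the residuals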

v. Multivariate Normality

Multivariate normality holds when the residuals are normally distributed. To check this assumption, observe how the residual values are distributed. It can be tested using two methods:

  • a histogram with a superimposed normal curve, and
  • the normal probability (Q-Q) plot method.
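Both checks are easy in R (a sketch using the fit object from above):

hist(resid(fit), breaks = 20, main = "Histogram of residuals")  # should look bell-shaped
qqnorm(resid(fit))  # normal probability (Q-Q) plot
qqline(resid(fit))  # points should fall close to this reference line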

Instances Where Multiple Linear Regression is Applied

Multiple linear regression is a very important tool from an analyst’s point of view. Here are some examples where the concept is applicable:

i. As the value of the dependent variable is correlated with the independent variables, multiple regression can be used to predict the expected yield of a crop at given levels of rainfall, temperature, and fertilizer.

ii. Multiple linear regression analysis is also used to predict trends and future values. This is particularly useful for predicting, say, the price of gold six months from now.

iii. Consider an example where the relationship between the distance covered by an Uber driver and the driver’s age and number of years of driving experience is examined. In this regression, the dependent variable is the distance covered by the Uber driver, and the independent variables are the driver’s age and years of driving experience.

iv. Another example where multiple regression analysis is used is in finding the relation between the GPA of a class of students, the number of hours they study, and the students’ heights. The dependent variable in this regression is the GPA, and the independent variables are the number of study hours and the heights of the students.

v. The relation between the salary of a group of employees in an organization, their number of years of experience, and the employees’ age can be determined with a regression analysis. The dependent variable for this regression is the salary, and the independent variables are the experience and age of the employees.


Multiple Linear Regression in R

Multiple linear regression can be executed in many ways, but it is commonly done via statistical software. One of the most widely used is R, which is free, powerful, and easily available. We will first learn the steps to perform the regression with R, followed by an example for a clear understanding.

Steps to Perform Multiple Regression in R

  1. Data Collection: The data to be used in the prediction is collected.
  2. Data Capturing in R: The data is captured by importing a CSV file into R.
  3. Checking Data Linearity with R: It is important to make sure that a linear relationship exists between the dependent and the independent variables. This can be done using scatterplots in R.
  4. Applying Multiple Linear Regression in R: Code is used to fit the multiple linear regression and obtain a set of coefficients.
  5. Making Prediction with R: A predicted value is determined at the end (see the sketch after this list).
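Taken together, steps 2 to 5 might look like this in R (a minimal sketch; the file name data.csv and the column names y, x1, x2 are placeholders):

dat <- read.csv("data.csv")           # step 2: import the data
plot(dat$x1, dat$y)                   # step 3: check linearity visually
model <- lm(y ~ x1 + x2, data = dat)  # step 4: fit the multiple regression
predict(model, newdata = data.frame(x1 = 10, x2 = 5))  # step 5: make a prediction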

Multiple Regression Implementation in R

We will see how this works through a survey conducted by public health researchers at a number of places to gather data on the percentage of the population who smoke, who bike to work, and who have heart disease.

Step-by-Step Guide for Multiple Linear Regression in R:

i. Load the heart.data dataset (for example, with read.csv()) and run the following code:

heart.disease.lm <- lm(heart.disease ~ biking + smoking, data = heart.data)

This fits a linear model, stored as heart.disease.lm, that estimates the effect of the independent variables biking and smoking on the dependent variable heart.disease in the heart.data dataset, using lm() (the function for fitting a linear model).

ii. Interpreting Results

Use the summary() function to view the results of the model:

summary(heart.disease.lm)

This function puts the most important parameters obtained from the linear model into a table.
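As a rough, abridged sketch of the coefficients block, built only from the estimates discussed below (real summary() output also includes the call, the residual distribution, t values, significance codes, R-squared, and the F-statistic; values not given in this article are marked with …):

Coefficients:
              Estimate  Std. Error  Pr(>|t|)
(Intercept)   15.0      …           …
biking        -0.200    0.0014      <0.001
smoking        0.178    0.0035      <0.001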

From this table we can infer:

  • The formula of the model (‘Call’),
  • The residuals of the model (‘Residuals’). If the residuals are roughly centred around zero and have a similar spread on either side (median 0.03, and min and max around -2 and 2), then the model fits the homoscedasticity assumption.
  • The regression coefficients of the model (‘Coefficients’).

Row 1 of the coefficients table (Intercept): This is the y-intercept of the regression equation. It gives the estimated intercept to plug into the regression equation when predicting values of the dependent variable:

heart disease = 15 + (-0.2*biking) + (0.178*smoking) ± e

Some Terms Related To Multiple Regression

i. Estimate Column: This is the estimated effect, also called the regression coefficient. The estimates tell us that for every one percent increase in biking to work there is an associated 0.2 percent decrease in heart disease, and for every one percent increase in smoking there is an associated 0.178 percent increase in heart disease.

ii. Std. Error: It displays the standard error of the estimate, a number that shows how much variation there is around the estimate of the regression coefficient.

iii. t value: It displays the test statistic, a t-value from a two-sided t-test.

iv. Pr(>|t|): It is the p-value, which shows the probability of observing a test statistic at least as extreme as the estimated t-value if the true coefficient were zero.
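All four columns can be pulled out of the model programmatically (a sketch using the heart.disease.lm object fitted above):

coef(summary(heart.disease.lm))  # matrix of Estimate, Std. Error, t value, Pr(>|t|)
coef(summary(heart.disease.lm))["biking", "Estimate"]  # a single coefficient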

Reporting the Results

We should include the estimated effect (the regression coefficient), its standard error, and the p-value.

In the above example, significant relationships were found between the frequency of biking to work and heart disease, and between the frequency of smoking and heart disease, both with p < 0.001.

The heart disease frequency is decreased by 0.2% (or ± 0.0014) for every 1% increase in biking. The heart disease frequency is increased by 0.178% (or ± 0.0035) for every 1% increase in smoking.
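The ± values above are standard errors; if you prefer to report confidence intervals instead, R computes them directly (a sketch using the fitted model):

confint(heart.disease.lm, level = 0.95)  # 95% confidence intervals for each coefficient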

Graphical Representation of the Findings

The effects of multiple independent variables on the dependent variable can be shown in a graph, although only one independent variable can be plotted on the x-axis.

Multiple Linear Regression: Graphical Representation

Here, the predicted values of the dependent variable (heart disease) across the observed values for the percentage of people biking to work are plotted.

For the effect of smoking on the dependent variable, the predicted values are calculated keeping smoking constant at the minimum, mean, and maximum observed rates of smoking.
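One way to build such a plot is to generate predictions over a grid of biking values at three fixed smoking rates and draw them with ggplot2 (a sketch, assuming ggplot2 is installed and the heart.data and heart.disease.lm objects from above exist):

library(ggplot2)

# Grid of biking values at the min, mean, and max observed smoking rates
plotting.data <- expand.grid(
  biking  = seq(min(heart.data$biking), max(heart.data$biking), length.out = 30),
  smoking = c(min(heart.data$smoking), mean(heart.data$smoking), max(heart.data$smoking))
)
plotting.data$predicted.y <- predict(heart.disease.lm, newdata = plotting.data)

# Observed data as points, one predicted line per fixed smoking rate
ggplot(heart.data, aes(x = biking, y = heart.disease)) +
  geom_point() +
  geom_line(data = plotting.data, aes(y = predicted.y, group = smoking)) +
  labs(x = "Biking to work (%)", y = "Heart disease (%)")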


Final Words

This brings us to the end of this blog post. We have tried our best to explain the concept of multiple linear regression and how multiple regression in R is implemented to ease predictive analysis.

If you are keen to advance your data science journey and learn more concepts of R and many other languages to strengthen your career, join upGrad. We offer the PG Certification in Data Science, which is specially designed for working professionals and includes 300+ hours of learning with continual mentorship.
