In statistical analysis, regression models are used to describe relationships between variables. The relationship is established by fitting a line through the observed data. Regression models help explain the behavior of the dependent variable: they show how it changes as the independent variables change.
Multiple linear regression is one such technique; it estimates the relationship between a dependent variable and two or more independent variables. This article focuses on multiple linear regression and how it is carried out.
Multiple Linear Regression
Multiple linear regression (MLR) is a statistical technique used to predict the outcome of a response variable. Its goal is to model a linear relationship between the dependent variable and the independent variables. Because it involves more than one explanatory variable, multiple linear regression is a form of multivariate analysis.
The technique is typically used when you want to do the following:
- Understand how strong the relationship between the variables is, i.e., how the independent variables relate to the dependent variable.
- Predict the value of the dependent variable from known values of the independent variables.
Assumptions Considered in Multiple Linear Regression
Multiple linear regression rests on certain assumptions. The main assumptions for MLR are listed here:
1. Homogeneity of variance
Also known as homoscedasticity, this assumption means that the size of the prediction errors does not change significantly across the values of the independent variables: the error variance is the same throughout the MLR model. To check it, the analyst plots the standardized residuals against the predicted values and looks for an even spread of points, with no funnel shape or trend. A scatterplot is typically used for this.
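The residual check described above can be sketched as follows. The fitted values and residuals here are made-up illustrative numbers, not from a real dataset; in practice they would come from your fitted model, and you would scatter-plot `std_resid` against `fitted`:

```python
import numpy as np

# Illustrative fitted values and residuals from a hypothetical MLR fit
# (made-up numbers for demonstration only).
fitted = np.array([2.0, 3.5, 5.0, 6.5, 8.0])
residuals = np.array([0.3, -0.4, 0.1, 0.5, -0.5])

# Standardize the residuals; under homoscedasticity, a scatterplot of
# std_resid against fitted should show no funnel shape or trend.
std_resid = (residuals - residuals.mean()) / residuals.std(ddof=1)

print(std_resid.round(2))
```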
2. Independence of observations
The observations used in multiple linear regression should be collected through valid statistical sampling, meaning there are no hidden relationships among them. In practice, some variables may be correlated with others, so it is important to check for such correlations before building the regression model. When two variables are highly correlated, it is usually better to remove one of them from the model.
3. There is no correlation between the independent variables
In other words, there should be no multicollinearity in the data. If multicollinearity is present, the analyst will find it difficult to identify which variable contributes to the variance of the dependent variable. One of the most widely used methods for testing this assumption is the variance inflation factor (VIF).
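A minimal sketch of the VIF check, using the `variance_inflation_factor` function from statsmodels on synthetic data (the variables x1, x2, x3 are invented for illustration; x2 is deliberately made almost identical to x1 so its VIF is large):

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic design matrix: x2 is almost a copy of x1 (collinear),
# while x3 is independent noise.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.05, size=200)   # highly collinear with x1
x3 = rng.normal(size=200)
X = np.column_stack([np.ones(200), x1, x2, x3])  # include an intercept column

# VIF for each predictor (column 0 is the intercept, so skip it).
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
for name, v in zip(["x1", "x2", "x3"], vifs):
    print(f"{name}: VIF = {v:.1f}")
```

A VIF above 5 or 10 is commonly taken as a sign of problematic multicollinearity; here x1 and x2 exceed that threshold while x3 does not.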
4. Normality
This means that the data, more precisely the model's residuals, follow a normal distribution.
5. Linearity
When searching for the relationship between the variables, a straight line is fitted to the data: MLR assumes that a linear relationship exists between the independent variables and the dependent variable. One way to check for linearity is to create scatterplots and inspect them visually, which lets the user see whether the observations fall roughly along a line. If there is no linear relationship, the analyst has to reconsider the analysis. Statistical software such as SPSS can be used to perform MLR.
Mathematical Representation of Multiple Linear Regression
The mathematical form of a multiple linear regression model is shown in the equation below:
Y = β0 + β1X1 + β2X2 + … + βpXp + e
In the above equation,
- Y represents the output (dependent) variable,
- X1, …, Xp represent the input (independent) variables,
- β1, …, βp represent the coefficients associated with each input variable,
- β0 is the y-intercept, i.e., the value of Y when all the predictors are zero.
The term "e" at the end of the equation is the error term, which captures the variation in Y that the predictors do not explain.
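As a small numeric illustration of the equation with two predictors (the coefficient values below are made up for demonstration), suppose a fitted model has β0 = 2, β1 = 0.5, and β2 = 1.5:

```python
# Hypothetical fitted coefficients (illustrative values only).
b0, b1, b2 = 2.0, 0.5, 1.5

def predict(x1, x2):
    """Evaluate Y = b0 + b1*X1 + b2*X2 for one observation."""
    return b0 + b1 * x1 + b2 * x2

print(predict(4.0, 2.0))  # 2.0 + 0.5*4 + 1.5*2 = 7.0
```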
While finding the best-fit line, the MLR equation is used to calculate the following:
- The regression coefficients that produce the smallest error in the MLR equation.
- The t-statistic of the overall model.
- The p-value of the model.
Ordinary Least Squares
Multiple linear regression is usually fitted with the method of Ordinary Least Squares (OLS), which estimates the coefficients by minimizing the sum of squared errors; for this reason the fitted model is often called an OLS model. The Python programming language can be used to implement the method. Two packages that provide OLS in Python are:
1. Scikit-learn
Scikit-learn is a widely used Python package for machine learning. The LinearRegression class is imported from scikit-learn and the model is then fitted to the data. It is a straightforward method.
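A minimal scikit-learn sketch on toy data (the data is generated from y = 1 + 2·x1 + 3·x2 with no noise, so the fit should recover those coefficients):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data generated from y = 1 + 2*x1 + 3*x2 (no noise).
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3]], dtype=float)
y = 1 + 2 * X[:, 0] + 3 * X[:, 1]

# Fit the MLR model and recover intercept and coefficients.
model = LinearRegression().fit(X, y)
print(model.intercept_)             # ~1.0
print(model.coef_)                  # ~[2.0, 3.0]
print(model.predict([[2.0, 2.0]]))  # ~[11.0]
```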
2. Statsmodels
Statsmodels is another Python package that implements the OLS technique, and it also reports detailed statistical output for the fitted model.
Multiple Linear Regression Examples
A few examples of MLR are listed below:
- MLR can be used to predict crop yields, since it models the association between a dependent variable and several independent variables. Such a study can include factors like climate, rainfall, fertilizer level, and temperature.
- If a connection is to be established between the number of study hours and class GPA, MLR can be used. Here GPA is the dependent variable, while variables such as study hours are the explanatory variables.
- MLR can be used to determine executives' salaries in a company based on their experience and age. In this case, salary is the dependent variable, while age and experience are the independent variables.
Workflow of the MLR
The data must be prepared and analyzed before it goes into the regression model; it is typically checked for errors, outliers, missing values, and so on. The steps below show how to apply the multiple linear regression technique.
1. Choosing variables
MLR requires a dataset containing the predictor variables that are most related to the response variable; the aim is to extract the maximum information from a minimum number of variables. The variables can be selected using the following procedures:
- An automatic search procedure can be used; tools in R and Python packages can help decide the best variables for the MLR study.
- All-possible-regressions can be used to examine every subset of the independent variables.
- The value of R² can be used to compare candidate variables: those that yield a greater R² are considered a better fit. R² ranges between 0 and 1, where 0 means the independent variables cannot predict the dependent variable at all, and 1 means they predict it without error.
- Another criterion is the predicted sum of squares (PRESSp): a model with a smaller PRESSp is considered to have better predictive strength.
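The R² criterion mentioned above can be computed directly from observed and predicted values. The numbers below are made up for illustration:

```python
import numpy as np

# Illustrative observed values and the predictions of a hypothetical
# fitted model (made-up numbers).
y = np.array([3.0, 5.0, 7.0, 9.0])
y_hat = np.array([2.8, 5.1, 7.2, 8.9])

# R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
r2 = 1 - ss_res / ss_tot
print(round(r2, 4))  # 0.995
```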
2. Model refinement
The MLR model can be improved by examining the following criteria:
- The global F-test, which tests whether the independent variables significantly predict the outcome of the dependent variable.
- The adjusted R², which measures the variation explained by the model after adjusting for the number of parameters and the sample size. A larger value indicates that the variables fit the data better.
- The root mean square error (RMSE), which estimates the standard deviation of the random errors.
- The coefficient of variation: the MLR model is considered to give accurate predictions if its value is 10% or less.
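The last three criteria can be sketched on made-up numbers as follows (n, p, y, and y_hat are invented for illustration; note that RMSE is computed here with a plain 1/n average, while some texts divide by n - p - 1 instead):

```python
import numpy as np

# Illustrative diagnostics for a hypothetical MLR fit with
# n observations and p predictors (made-up numbers).
y = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0])
y_hat = np.array([3.2, 4.9, 7.1, 8.8, 11.2, 12.8])
n, p = len(y), 2

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Adjusted R^2 penalizes extra predictors.
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

# RMSE estimates the standard deviation of the random errors.
rmse = np.sqrt(ss_res / n)

# Coefficient of variation of the model, in percent.
cv = 100 * rmse / y.mean()

print(round(adj_r2, 4), round(rmse, 4), round(cv, 2))
```

Here the coefficient of variation comes out well under 10%, so by the rule of thumb above this hypothetical model would be considered accurate.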
3. Testing model assumptions
The assumptions listed earlier are tested on the fitted linear regression model; all of them should be satisfied.
4. Addressing the problems associated with the model
If some of the assumptions considered in the model are violated, steps should be taken to minimize the resulting problems.
5. Model validation
This is the last step in generating an MLR model, and an important one. After the model is generated, it needs to be validated; once validated, it can be used for multiple linear regression analysis.
Multiple linear regression is one of the most widely used techniques in research studies for establishing the relationship between variables, and it is also an important algorithm in machine learning. However, if you are new to regression analysis, it is better to first get familiar with regression models in general and with simple linear regression.