Assumptions of Linear Regression: 5 Assumptions With Examples

Regression is used to gauge and quantify cause-and-effect relationships. Regression analysis is a statistical technique used to understand the magnitude and direction of a possible causal relationship between an observed pattern and the variables assumed to impact that pattern.

For instance, if there is a 20% reduction in the price of a product, say, a moisturiser, more people are likely to buy it, and sales are likely to increase.

Here, the observed pattern is an increase in sales (also called the dependent variable). The variable assumed to impact sales is the price (also called the independent variable). 

What Is Linear Regression?

Linear regression is a statistical technique that models the magnitude and direction of the impact of the independent variables on the dependent variable. It is commonly used in predictive analysis.

Linear regression explains two important aspects of the variables, which are as follows:

  • Does the set of independent variables explain the dependent variable significantly?
  • Which variables are the most significant in explaining the dependent variable, and in which way do they impact it? The impact is usually determined by the magnitude and the sign of the beta coefficients in the equation, as sketched below.
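
For concreteness, here is a minimal sketch in Python (using statsmodels) of fitting a simple linear regression and reading off the beta coefficient. The price and sales figures are invented purely for illustration:

    # Fit sales ~ price and inspect the sign and size of the beta coefficient.
    # All numbers below are made up for illustration.
    import numpy as np
    import statsmodels.api as sm

    price = np.array([10.0, 9.5, 9.0, 8.5, 8.0, 7.5])   # independent variable
    sales = np.array([100, 110, 118, 130, 142, 155])     # dependent variable

    X = sm.add_constant(price)        # adds the intercept term
    model = sm.OLS(sales, X).fit()

    print(model.params)               # intercept and beta for price
    print(model.pvalues)              # significance of each coefficient

A negative, significant beta for price would support the moisturiser example above: as price falls, sales rise.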

Now, let’s look at the assumptions of linear regression, which are essential to understand before we run a linear regression model.

Read more: Linear Regression Model & How It Works?

Assumptions of Linear Regression

Linear relationship

One of the most important assumptions is that a linear relationship exists between the dependent and the independent variables. If you try to fit a linear model to a non-linear data set, the model won't capture the trend, resulting in a poor fit and inaccurate predictions.

How can you determine if the assumption is met?

The simple way to determine whether this assumption is met is to create a scatter plot of x vs. y. If the data points fall roughly on a straight line, there is a linear relationship between the dependent and the independent variables, and the assumption holds.
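
As a quick sketch (with synthetic data standing in for your own x and y):

    # Scatter-plot x vs. y and eyeball the shape; the data here is simulated
    # to be roughly linear, so a straight-line pattern should appear.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 100)
    y = 3 * x + 5 + rng.normal(0, 2, 100)

    plt.scatter(x, y)
    plt.xlabel("x (independent variable)")
    plt.ylabel("y (dependent variable)")
    plt.show()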

What should you do if this assumption is violated?

If a linear relationship doesn't exist between the dependent and the independent variables, apply a non-linear transformation such as logarithmic, exponential, square root, or reciprocal to the dependent variable, the independent variable, or both.
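
For example (a sketch only, reusing the y array from the scatter-plot sketch above; the right transformation depends on the shape of your data):

    # One common fix: log-transform y and re-check the scatter plot.
    # np.log1p computes log(1 + y), which is safe when y contains zeros.
    import numpy as np

    y_log = np.log1p(y)      # assumes y >= 0
    y_sqrt = np.sqrt(y)      # alternative: square root (y >= 0)
    y_recip = 1.0 / y        # alternative: reciprocal (y must be nonzero)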

No autocorrelation (independence of error terms)

The residuals (error terms) should be independent of each other. In other words, there should be no correlation between consecutive error terms, which matters especially for time series data. The presence of correlation in the error terms drastically reduces the accuracy of the model: if the error terms are correlated, the estimated standard errors tend to understate the true standard errors.

How to determine if the assumption is met?

Conduct a Durbin-Watson (DW) test. The statistic always falls between 0 and 4. DW = 2 means no autocorrelation; values between 0 and 2 indicate positive autocorrelation; values between 2 and 4 indicate negative autocorrelation. Another method is to plot the residuals against time and look for patterns in the residual values.
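
A minimal sketch of the DW check with statsmodels, on a made-up time-ordered series:

    # Fit a trend model on a simulated time series, then compute the
    # Durbin-Watson statistic on its residuals.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    t = np.arange(50)
    y = 0.5 * t + np.random.default_rng(0).normal(size=50)
    model = sm.OLS(y, sm.add_constant(t)).fit()

    dw = durbin_watson(model.resid)
    print(f"DW = {dw:.2f}")   # ~2: none; <2: positive; >2: negative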

What should you do if this assumption is violated?

If the assumption is violated, consider the following options:

  • For positive correlation, consider adding lags of the dependent variable, the independent variables, or both to the model (see the lag sketch after this list).
  • For negative correlation, check whether any of the variables has been over-differenced.
  • For seasonal correlation, consider adding a few seasonal variables to the model.
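
A sketch of adding a one-period lag with pandas; the column name is hypothetical:

    # Add the previous period's sales as a new predictor.
    import pandas as pd

    df = pd.DataFrame({"sales": [100, 110, 118, 130, 142, 155]})
    df["sales_lag1"] = df["sales"].shift(1)   # previous period's value
    df = df.dropna()                          # the first row has no lag
    print(df)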

No Multicollinearity

The independent variables shouldn't be correlated with each other. If multicollinearity exists between the independent variables, it becomes difficult to interpret the model: the individual effect of each independent variable on the dependent variable cannot be separated, so it is unclear which independent variables actually explain the dependent variable.

The standard errors tend to inflate with correlated variables, widening the confidence intervals and leading to imprecise estimates.

How to determine if the assumption is met?

Use a scatter plot to visualise the correlation between the variables. Another way is to compute the VIF (Variance Inflation Factor). VIF ≤ 4 suggests no multicollinearity, whereas VIF ≥ 10 indicates serious multicollinearity.
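
A sketch of the VIF computation with statsmodels, on two deliberately correlated predictors:

    # Compute a VIF per column of the design matrix; x2 is constructed to be
    # strongly correlated with x1, so both should show inflated VIFs.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(0)
    x1 = rng.normal(size=100)
    x2 = 0.9 * x1 + rng.normal(scale=0.3, size=100)
    X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2}))

    vif = pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
        index=X.columns,
    )
    print(vif)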

What should you do if this assumption is violated?

Reduce the correlation between variables by either transforming or combining the correlated variables.

Must Read: Types of Regression Models in ML

Homoscedasticity

Homoscedasticity means the residuals have constant variance at every level of x. The absence of this phenomenon is known as heteroscedasticity. Heteroscedasticity generally arises in the presence of outliers and extreme values.

How to determine if the assumption is met?

Create a scatter plot of residuals vs fitted values. If the data points are spread evenly without a prominent pattern, the residuals have constant variance (homoscedasticity). If a funnel-shaped pattern appears instead, the residuals are not spread equally, which indicates non-constant variance (heteroscedasticity).
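
A sketch of that plot, with heteroscedastic data simulated on purpose so the funnel shape is visible:

    # Residuals-vs-fitted plot on simulated data whose noise grows with x.
    import numpy as np
    import matplotlib.pyplot as plt
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = rng.uniform(1, 10, 200)
    y = 2 * x + rng.normal(scale=x)          # noise grows with x
    model = sm.OLS(y, sm.add_constant(x)).fit()

    plt.scatter(model.fittedvalues, model.resid, alpha=0.5)
    plt.axhline(0, linestyle="--", color="grey")
    plt.xlabel("Fitted values")
    plt.ylabel("Residuals")
    plt.show()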

What should you do if this assumption is violated?

  • Transform the dependent variable
  • Redefine the dependent variable
  • Use weighted regression (see the sketch after this list)
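
A minimal sketch of weighted least squares with statsmodels; the 1/x² weighting assumes the error variance grows with x² and is only one common heuristic, not the only choice:

    # Compare ordinary and weighted least squares on heteroscedastic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = rng.uniform(1, 10, 200)
    y = 2 * x + rng.normal(scale=x)                # noise grows with x
    X = sm.add_constant(x)

    ols = sm.OLS(y, X).fit()
    wls = sm.WLS(y, X, weights=1.0 / x**2).fit()   # down-weight noisy points
    print(ols.params, wls.params)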

Normal distribution of error terms

The last assumption to check for linear regression is that the error terms follow a normal distribution. If they don't, confidence intervals may become too wide or too narrow.

How to determine if the assumption is met?

Check the assumption using a Q-Q (Quantile-Quantile) plot. If the data points on the graph form a straight diagonal line, the assumption is met.

You can also check the error terms' normality using statistical tests like the Kolmogorov-Smirnov or Shapiro-Wilk test.
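
A sketch of both checks; `resid` is a placeholder array, and in practice you would pass your fitted model's residuals (e.g. model.resid from statsmodels):

    # Q-Q plot plus Shapiro-Wilk test on (placeholder) residuals.
    import numpy as np
    import matplotlib.pyplot as plt
    import statsmodels.api as sm
    from scipy import stats

    resid = np.random.default_rng(0).normal(size=200)   # stand-in residuals

    sm.qqplot(resid, line="s")     # points should hug the reference line
    plt.show()

    stat, p = stats.shapiro(resid)
    print(f"Shapiro-Wilk p = {p:.3f}")   # small p (< 0.05) suggests non-normality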

What should you do if this assumption is violated?

  • Verify if the outliers have an impact on the distribution. Make sure they are real values and not data-entry errors.
  • Apply a non-linear transformation, such as log, square root, or reciprocal, to the dependent variable, the independent variables, or both.

Conclusion

Leverage the true power of regression by applying the techniques discussed above to ensure the assumptions hold. When all the assumptions of linear regression are met, you can reliably interpret the independent variables' impact on the dependent variable.

The concept of linear regression is an indispensable element of data science and machine learning programs. 

If you’re interested in learning more about regression models and machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B alumni status, 5+ practical hands-on capstone projects, and job assistance with top firms.

Why is homoscedasticity required in linear regression?

Homoscedasticity means the residuals have a similar spread (variance) at every level of the fitted values. This assumption matters because parametric statistical tests are sensitive to unequal variances. Heteroscedasticity does not bias the coefficient estimates, but it does reduce their precision; with lower precision, the coefficient estimates are more likely to be far from the correct population value. To avoid this, homoscedasticity is a crucial assumption to verify.

What are the two types of multicollinearity in linear regression?

Data multicollinearity and structural multicollinearity are the two basic types. Structural multicollinearity arises when we create a model term out of other terms; in other words, it is a by-product of the model we specify rather than of the data itself. Data multicollinearity, on the other hand, is not an artefact of our model: it is present in the data itself, and it is more common in observational studies.

What are the drawbacks of using the t-test?

Paired-sample t-tests involve repeated measurements on the same subjects rather than comparisons across independent groups, which can lead to carry-over effects. The t-test is also unsuitable for multiple comparisons, since repeated testing inflates the type I error rate. When running a paired t-test on a set of samples, it can be difficult to reject the null hypothesis. Finally, obtaining subjects for the sample data is a time-consuming and costly aspect of the research process.
