
Assumptions of Linear Regression: 5 Assumptions With Examples

Last updated:
21st Dec, 2020

In statistical analysis, regression is a vital tool for uncovering cause-and-effect relationships, and I find it indispensable in my own work. At its core, regression analysis scrutinizes the interplay between observed patterns and the variables believed to influence them. By assessing the magnitude and direction of these relationships, it offers insights into the dynamics of various phenomena. For instance, if there is a 20% reduction in the price of a product, say, a moisturiser, people are likely to buy more of it, and sales are likely to increase.

Here, the observed pattern is an increase in sales (also called the dependent variable). The variable assumed to impact sales is the price (also called the independent variable). 

In our exploration of Assumptions of Linear Regression, we aim to shed light on five fundamental principles underlying this technique. Through examples, we’ll illustrate how these Assumptions of Linear Regression guide our understanding of causal relationships and shape the interpretation of regression results. By grasping these foundational concepts, analysts like me can wield regression with confidence, extracting meaningful insights from complex datasets.


What Is Linear Regression?

Linear regression is a statistical technique that models the magnitude and direction of an impact on the dependent variable explained by the independent variables. Linear regression is commonly used in predictive analysis.

Linear regression explains two important aspects of the variables, which are as follows:

  • Does the set of independent variables explain the dependent variable significantly?
  • Which variables are the most significant in explaining the dependent variable? In which way do they impact the dependent variable? The impact is usually determined by the magnitude and the sign of the beta coefficients in the equation.

Now, let’s look at the assumptions of linear regression, which are essential to understand before we run a linear regression model.

Read more: Linear Regression Model & How it Works?

Assumptions of Linear Regression

Linear relationship

One of the most important assumptions is that a linear relationship exists between the dependent and the independent variables. If you try to fit a linear model to a non-linear data set, the algorithm won't capture the trend, resulting in an inefficient model and inaccurate predictions.


How can you determine if the assumption is met?

The simplest way to check this assumption is to create a scatter plot of x vs. y. If the data points fall roughly along a straight line, a linear relationship exists between the dependent and the independent variables, and the assumption holds.

What should you do if this assumption is violated?

If a linear relationship doesn’t exist between the dependent and the independent variables, then apply a non-linear transformation such as logarithmic, exponential, square root, or reciprocal either to the dependent variable, independent variable, or both. 
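As a quick illustration, here is a minimal sketch in plain Python (the data is hypothetical) showing how a log transformation can recover linearity when the underlying trend is exponential; a correlation close to 1 on the transformed scale suggests a linear fit is now appropriate:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data with an exponential (non-linear) trend: y = e^x
x = [1, 2, 3, 4, 5, 6]
y = [math.exp(v) for v in x]

r_raw = pearson_r(x, y)                         # linearity is poor on the raw scale
r_log = pearson_r(x, [math.log(v) for v in y])  # log transform restores linearity
```

If `r_log` is far closer to 1 than `r_raw`, modelling `log(y)` instead of `y` is a reasonable next step.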

No auto-correlation or independence

The residuals (error terms) must be independent of each other. In other words, there should be no correlation between consecutive error terms in time series data. The presence of correlation in the error terms drastically reduces the accuracy of the model, and the estimated standard errors tend to understate the true standard errors.

How to determine if the assumption is met?

Conduct a Durbin-Watson (DW) test. The statistic always falls between 0 and 4. DW = 2 indicates no autocorrelation; values between 0 and 2 indicate positive autocorrelation; values between 2 and 4 indicate negative autocorrelation. Another method is to plot the residuals against time and look for patterns in the residual values.
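The Durbin-Watson statistic is simple enough to compute by hand. Here is a small sketch in plain Python with hypothetical residual series; note how alternating residuals push DW above 2, while slowly drifting residuals pull it below 2:

```python
def durbin_watson(residuals):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); values near 2 suggest no autocorrelation."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2 for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Hypothetical residuals that alternate sign (negative autocorrelation)
alternating = [1, -1, 1, -1, 1, -1]
dw_neg = durbin_watson(alternating)   # above 2

# Hypothetical residuals that drift slowly (positive autocorrelation)
drifting = [1, 1.1, 1.2, -1, -1.1, -1.2]
dw_pos = durbin_watson(drifting)      # below 2
```

In practice you would take the residuals from your fitted model rather than hand-written lists; libraries such as statsmodels also provide this statistic directly.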

What should you do if this assumption is violated?

If the assumption is violated, consider the following options:

  • For positive correlation, consider adding lags to the dependent or the independent or both variables.
  • For negative correlation, check that none of the variables is over-differenced.
  • For seasonal correlation, consider adding a few seasonal variables to the model.

No Multicollinearity

The independent variables shouldn’t be correlated. If multicollinearity exists between the independent variables, it is challenging to predict the outcome of the model. In essence, it is difficult to explain the relationship between the dependent and the independent variables. In other words, it is unclear which independent variables explain the dependent variable.

The standard errors tend to inflate when variables are correlated, widening the confidence intervals and leading to imprecise estimates.

How to determine if the assumption is met?

Use a scatter plot to visualise the correlation between the variables. Another way is to compute the VIF (Variance Inflation Factor). VIF ≤ 4 suggests no serious multicollinearity, whereas VIF ≥ 10 implies serious multicollinearity.
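For the special case of exactly two predictors, the VIF reduces to 1 / (1 − r²), where r is the correlation between them. The sketch below (hypothetical data, plain Python) uses that shortcut; the general case regresses each predictor on all the others:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def vif_two_predictors(x1, x2):
    """With two predictors, the R^2 of regressing one on the other is r^2,
    so VIF = 1 / (1 - r^2). A sketch; the general VIF needs a full regression."""
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r ** 2)

# Hypothetical predictors: x2 is almost exactly twice x1 (highly collinear)
x1 = [1, 2, 3, 4, 5]
x2 = [2.1, 3.9, 6.2, 8.0, 10.1]
high_vif = vif_two_predictors(x1, x2)   # far above 10: serious multicollinearity

x3 = [5, 1, 4, 2, 3]                    # unrelated ordering
low_vif = vif_two_predictors(x1, x3)    # comfortably below 4
```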

What should you do if this assumption is violated?

Reduce the correlation between variables by either transforming or combining the correlated variables.

Must Read: Types of Regression Models in ML

Homoscedasticity

Homoscedasticity means the residuals have constant variance at every level of x. The absence of this phenomenon is known as heteroscedasticity. Heteroscedasticity generally arises in the presence of outliers and extreme values.

How to determine if the assumption is met?

Create a scatter plot of residuals vs. fitted values. If the data points are spread evenly without a prominent pattern, the residuals have constant variance (homoscedasticity). If a funnel-shaped pattern appears instead, the residuals are not spread equally, which indicates non-constant variance (heteroscedasticity).
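A rough numerical stand-in for eyeballing the funnel shape is to sort the residuals by fitted value and compare the variance in the lower and upper halves, in the spirit of a Goldfeld-Quandt check. A minimal sketch with hypothetical residuals:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def variance_ratio(residuals, fitted):
    """Sort residuals by fitted value and compare residual variance in the
    upper half vs. the lower half. A ratio far from 1 hints at heteroscedasticity."""
    ordered = [r for _, r in sorted(zip(fitted, residuals))]
    half = len(ordered) // 2
    return variance(ordered[half:]) / variance(ordered[:half])

# Hypothetical residuals whose spread grows with the fitted value (funnel shape)
fitted = [1, 2, 3, 4, 5, 6, 7, 8]
residuals = [0.1, -0.1, 0.3, -0.4, 1.0, -1.2, 2.0, -2.5]
ratio = variance_ratio(residuals, fitted)   # well above 1
```

This is only a heuristic; formal tests such as Breusch-Pagan give a proper p-value.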

What should you do if this assumption is violated?

  • Transform the dependent variable
  • Redefine the dependent variable
  • Use weighted regression

Normal distribution of error terms

Another assumption that needs to be checked for linear regression is the normal distribution of the error terms. If the error terms don't follow a normal distribution, confidence intervals may become too wide or too narrow.

How to determine if the assumption is met?

Check the assumption using a Q-Q (Quantile-Quantile) plot. If the data points on the graph form a straight diagonal line, the assumption is met.

You can also check for the error terms’ normality using statistical tests like the Kolmogorov-Smirnov or Shapiro-Wilk test.
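As a quick first pass before running a formal test, you can compute the sample skewness of the residuals: a normal distribution is symmetric, so skewness near zero is consistent with normality, while a large value is a red flag. A minimal sketch with hypothetical residuals (a heuristic only, not a substitute for Shapiro-Wilk):

```python
import math

def skewness(xs):
    """Sample skewness: mean of cubed standardized deviations. Near 0 for symmetric data."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 3 for x in xs) / n

# Hypothetical residuals: roughly symmetric vs. strongly right-skewed
symmetric = [-2, -1, -0.5, 0, 0.5, 1, 2]
skewed = [0, 0, 0, 0, 1, 2, 10]

s_sym = skewness(symmetric)   # 0: consistent with normality
s_skw = skewness(skewed)      # large positive: normality is suspect
```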

What should you do if this assumption is violated?

  • Verify if the outliers have an impact on the distribution. Make sure they are real values and not data-entry errors.
  • Apply non-linear transformation in the form of log, square root, or reciprocal to the dependent, independent, or both variables.

Number of Observations Greater Than the Number of Predictors

The number of observations in the training data must always exceed the number of predictors for the model to perform well, and performance generally improves with more data. When there are more predictors than observations, the model becomes over-parameterised, which causes problems like overfitting.

Let’s say we are forecasting home values using characteristics like size, number of bedrooms, and demographics of the surrounding area. The assumption is satisfied if there are 1,000 houses (observations) and five predictor variables.

How to Determine if the Assumption is Met? 

To verify this assumption, count the number of observations (n) and the number of predictors (p). Ensure that n>p. 
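The check itself is a one-liner; the sketch below uses the hypothetical housing numbers from the example above:

```python
def check_observations_vs_predictors(n_observations, n_predictors):
    """Return True when there are more observations than predictors (n > p)."""
    return n_observations > n_predictors

# Hypothetical housing example: 1,000 houses, 5 predictors
ok = check_observations_vs_predictors(1000, 5)      # True: assumption satisfied
too_few = check_observations_vs_predictors(8, 20)   # False: over-parameterised
```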

What Should You Do if This Assumption is Violated? 

If the number of predictors exceeds the number of observations, consider reducing the number of predictors through feature selection techniques like forward selection, backward elimination, or dimensionality reduction methods such as principal component analysis (PCA). 

Each observation is unique 

Linear regression assumes that each observation in the dataset is independent and identically distributed (IID). This means that the error terms (ε) associated with each observation are not correlated with each other. 

Suppose in a study measuring the effects of a new drug on patients, each patient’s response to the drug is considered an independent observation. 

How to Determine if the Assumption is Met? 

To assess independence, examine whether there are any patterns or correlations in the residuals (errors) of the model. A plot of residuals against the predicted values should show random scattering around zero, indicating independence. 

What Should You Do if This Assumption is Violated? 

If the assumption of independence is violated, it suggests that there may be hidden patterns or dependencies in the data. Consider collecting more data, accounting for any inherent dependencies, or using techniques like time-series analysis for temporal data. 

Conclusion

Understanding the assumptions of linear regression is pivotal for anyone looking to harness the power of this fundamental statistical tool. These assumptions form the bedrock upon which reliable, valid, and interpretable linear regression analyses are built. By meticulously examining and ensuring the adherence to these assumptions with practical examples, researchers and data scientists can significantly improve the accuracy and applicability of their findings. It is through this rigorous adherence to the foundational principles of linear regression that one can unlock the full potential of their data, leading to insights that are both profound and actionable. Remember, the strength of any linear regression analysis lies not just in the application of the technique but in the careful consideration and validation of its underlying assumptions. As we continue to navigate through vast seas of data in various fields, the role of these assumptions will remain undeniably central to achieving clarity, precision, and effectiveness in our analytical endeavors.

If you’re interested in learning more about regression models and machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.

Profile

Pavan Vadapalli

Blog Author
Director of Engineering @ upGrad. Motivated to leverage technology to solve problems. Seasoned leader for startups and fast-moving orgs. Working on solving problems of scale and long-term technology strategy.

Frequently Asked Questions (FAQs)

1. Why is homoscedasticity required in linear regression?

Homoscedasticity means the residuals have a similar spread around the regression line at every level of the predictors. This is an important assumption because parametric statistical tests are sensitive to unequal variances. Heteroscedasticity does not bias the coefficient estimates, but it does reduce their precision; with lower precision, the estimates are more likely to be far from the true population values. To avoid this, homoscedasticity is a crucial assumption to verify.

2. What are the two types of multicollinearity in linear regression?

Data and structural multicollinearity are the two basic types. Structural multicollinearity arises when we create a model term out of other terms; in other words, it is a result of the model we specify rather than being present in the data itself. Data multicollinearity, by contrast, is present in the data itself rather than being an artefact of our model, and it is more common in observational studies.

3. What are the drawbacks of using the t-test for independent tests?

Paired sample t-tests repeat measurements on the same subjects rather than comparing differences across group designs, which can lead to carry-over effects. Because of inflated type I error, the t-test should not be used for multiple comparisons. When running a paired t-test on a set of samples, it can also be difficult to reject the null hypothesis. Finally, obtaining subjects for the sample data is a time-consuming and costly part of the research process.
