
How do I Verify the Assumptions of Linear Regression?

Last Updated on July 26, 2023 by Editorial Team

Author(s): Gowtham S R

Originally published on Towards AI.

What are the assumptions of linear regression, and how can we verify them with Python?

Photo from Unsplash uploaded by Thong Vo

Linear regression is a model that estimates the relationship between independent variables and a dependent variable using a straight line. However, in order to use a linear regression model, we have to verify a few assumptions.

The five main assumptions of linear regression are:

  1. A linear relationship between the dependent and independent variables.
  2. Little or no multicollinearity.
  3. Normality of residuals.
  4. Homoscedasticity.
  5. No autocorrelation of errors.

Let's understand each of the above assumptions in detail with the help of Python code.


Import the required libraries, and read the dataset.

Image by author
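The original post shows this step only as a code screenshot. Below is a runnable sketch of what it likely looked like; the actual dataset is not available, so a hypothetical stand-in with three features and a target is generated (the file name in the commented-out `read_csv` call is also hypothetical):

```python
import numpy as np
import pandas as pd

# The article's dataset appears only as a screenshot, so we build a
# hypothetical stand-in: three independent features and one target.
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame(rng.normal(size=(n, 3)),
                  columns=["feature_1", "feature_2", "feature_3"])
df["target"] = 3 * df["feature_1"] - 2 * df["feature_3"] + rng.normal(scale=0.5, size=n)

# With a real file, this step would typically be:
# df = pd.read_csv("your_dataset.csv")
print(df.head())
```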

Separate the dependent and independent features, and split the data into train and test sets as shown below.

Image by author
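The screenshot for this step is not reproduced here. A sketch of the likely code, assuming scikit-learn's `train_test_split` and the same hypothetical stand-in data (column names are assumptions):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data (the article's real dataset is unavailable).
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(200, 3)),
                  columns=["feature_1", "feature_2", "feature_3"])
df["target"] = 3 * df["feature_1"] - 2 * df["feature_3"] + rng.normal(scale=0.5, size=200)

# Independent features (X) and dependent feature (y).
X = df.drop(columns="target")
y = df["target"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)
```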

Create a linear regression model and calculate the residuals.

Image by author
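Again, the original code exists only as a screenshot. A minimal sketch, assuming scikit-learn's `LinearRegression`, with residuals computed on the test set:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data, as in the earlier steps.
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(200, 3)),
                  columns=["feature_1", "feature_2", "feature_3"])
df["target"] = 3 * df["feature_1"] - 2 * df["feature_3"] + rng.normal(scale=0.5, size=200)

X = df.drop(columns="target")
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LinearRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# Residual = actual value minus predicted value.
residuals = y_test - y_pred
print(residuals.describe())
```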

Let us verify the assumptions of linear regression for the above data.


1. Linear Relationship

To perform a linear regression, the first and foremost assumption is a linear relationship between the independent features and the dependent feature: as the value of X increases, the value of y should increase or decrease linearly. If there are multiple independent features, each of them should have a linear relationship with the dependent feature.

We can verify this assumption using a scatter plot as shown below.

Image by author
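The scatter plots appear as a screenshot in the original; here is a sketch of how they might be produced with matplotlib, using the hypothetical stand-in data from the earlier steps:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Hypothetical stand-in data (the article's real dataset is unavailable).
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(200, 3)),
                  columns=["feature_1", "feature_2", "feature_3"])
df["target"] = 3 * df["feature_1"] - 2 * df["feature_3"] + rng.normal(scale=0.5, size=200)

# One scatter plot per independent feature against the target.
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, col in zip(axes, ["feature_1", "feature_2", "feature_3"]):
    ax.scatter(df[col], df["target"], s=10)
    ax.set_xlabel(col)
    ax.set_ylabel("target")
fig.tight_layout()
fig.savefig("linearity_check.png")
```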

In the above scatter plots, we can see that features 1 and 3 have a clear linear relationship with the target, whereas feature 2 does not.

2. Multicollinearity

Multicollinearity is a scenario in which two or more of the independent features are highly correlated with each other. So, what is correlation? Correlation measures how strongly two variables are related to each other.

For example, suppose age and years_of_experience are two independent features in our dataset. It is highly likely that as age increases, years_of_experience also increases, so age and years_of_experience are highly positively correlated.

If we have age and years_left_to_retire as independent features, then as age increases, the years_left_to_retire decreases. So, here we say that the two features are highly negatively correlated.

If either of the above scenarios holds (a strong positive or a strong negative correlation between independent features), we say that there is multicollinearity.

We can verify whether there is any multicollinearity in our data using a correlation matrix or the variance inflation factor (VIF), as shown in the below figure.

Image by author
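The correlation matrix and VIF appear as screenshots in the original. A sketch on the hypothetical stand-in data: the VIF is computed manually with NumPy here to keep the snippet dependency-light, though in practice statsmodels' `variance_inflation_factor` is commonly used:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the article's independent features.
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.normal(size=(200, 3)),
                 columns=["feature_1", "feature_2", "feature_3"])

# Correlation matrix of the independent features.
print(X.corr().round(2))

# VIF for feature i is 1 / (1 - R^2), where R^2 comes from regressing
# feature i on all the other features. A VIF above ~5 signals trouble.
def vif(X: pd.DataFrame) -> pd.Series:
    out = {}
    for col in X.columns:
        others = X.drop(columns=col).to_numpy()
        A = np.column_stack([np.ones(len(X)), others])  # add an intercept
        target = X[col].to_numpy()
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        resid = target - A @ beta
        r2 = 1 - resid.var() / target.var()
        out[col] = 1.0 / (1.0 - r2)
    return pd.Series(out)

print(vif(X))
```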

From the above VIF and correlation matrix, we can say that there is no multicollinearity in our dataset.

If you are interested in understanding multicollinearity in detail, please read my blog on why multicollinearity is a problem.

Why multicollinearity is a problem?

What is multicollinearity? and why should we take care of multicollinearity before creating a machine learning model?

medium.com

3. Normality of Residuals

Residual = actual y value - predicted y value. A negative residual means that the predicted value is too high; similarly, a positive residual means that the predicted value is too low. The aim of a regression line is to minimize the sum of squared residuals.

The assumption says that if we plot the residuals, their distribution should be normal, or at least approximately normal.

We can verify this assumption with the help of the KDE plot and Q-Q plot, as shown below.

Image by author
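The KDE and Q-Q plots are screenshots in the original. A sketch using scipy, on hypothetical residuals standing in for the fitted model's; the Shapiro-Wilk test at the end is an addition not in the original post:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical residuals standing in for the article's model output.
rng = np.random.default_rng(42)
residuals = rng.normal(scale=0.5, size=160)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# KDE of the residuals: the curve should look roughly bell-shaped.
kde = stats.gaussian_kde(residuals)
xs = np.linspace(residuals.min(), residuals.max(), 200)
ax1.plot(xs, kde(xs))
ax1.set_title("KDE of residuals")

# Q-Q plot: points should hug the straight line if residuals are normal.
stats.probplot(residuals, dist="norm", plot=ax2)
fig.tight_layout()
fig.savefig("residual_normality.png")

# Optional formal check (not in the original post): Shapiro-Wilk test.
stat, p = stats.shapiro(residuals)
print(f"Shapiro-Wilk p-value: {p:.3f}")
```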

4. Homoscedasticity

Homo means "same" and scedasticity means "scatter" or "spread", so homoscedasticity means having the same scatter: the variance of the residuals, or error terms, in a regression model should be constant.

When we plot the residuals, the spread should be equal. We can check this by using a scatter plot, where the x-axis will have the predictions, and the y-axis will have the residuals, as shown in the below figure.

Image by author
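The residuals-versus-predictions plot is a screenshot in the original. A sketch with matplotlib, using hypothetical predictions and residuals as stand-ins:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Hypothetical predictions and residuals (stand-ins for the article's model).
rng = np.random.default_rng(42)
y_pred = rng.normal(size=160)
residuals = rng.normal(scale=0.5, size=160)

# Predictions on the x-axis, residuals on the y-axis. A homoscedastic model
# shows an even band around zero with no funnel shape.
fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(y_pred, residuals, s=10)
ax.axhline(0, color="red", linestyle="--")
ax.set_xlabel("predicted values")
ax.set_ylabel("residuals")
fig.savefig("homoscedasticity.png")
```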

The residuals are spread uniformly around zero, so the assumption of homoscedasticity holds.

5. No Autocorrelation of Errors

This assumption says that there should not be any relationship between successive residuals. It can be verified by plotting the residuals in order, as shown in the below figure; the plot should not show any particular pattern.

Image by author
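The residual-order plot is a screenshot in the original. A sketch on hypothetical residuals; the Durbin-Watson statistic at the end, computed by hand, is an addition not in the original post (values near 2 indicate no first-order autocorrelation):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Hypothetical residuals standing in for the article's model output.
rng = np.random.default_rng(42)
residuals = rng.normal(scale=0.5, size=160)

# Plot residuals in observation order: no visible pattern suggests
# no autocorrelation.
fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(residuals, marker="o", markersize=3, linewidth=0.5)
ax.axhline(0, color="red", linestyle="--")
ax.set_xlabel("observation order")
ax.set_ylabel("residual")
fig.savefig("autocorrelation.png")

# Durbin-Watson statistic: sum of squared successive differences over the
# sum of squared residuals; it ranges from 0 to 4, with ~2 meaning no
# first-order autocorrelation.
dw = np.sum(np.diff(residuals) ** 2) / np.sum(residuals ** 2)
print(f"Durbin-Watson: {dw:.2f}")
```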



Published via Towards AI
