When Linear Regression Fails: The Hidden Pitfalls Every Analyst Should Know

Last Updated on December 4, 2025 by Editorial Team

Author(s): Siddharth Mahato

Originally published on Towards AI.

“All models are wrong, but some are useful.” ~ George Box

Linear Regression, perhaps the oldest statistical model, is the perfect example of this truth.

Photo by Radek Kilijanek on Unsplash

Introduction: The Deceptive Simplicity of a Straight Line

Linear Regression is often our first encounter with machine learning: elegant, interpretable, and mathematically neat.
It’s the model that claims: “If I can draw a straight line through your data, I can predict the future.”

But the world, as we soon discover, rarely behaves in a linear way.

From economic trends to stock prices and marketing analytics, data bends, fluctuates, and interacts in ways a single linear equation can’t always capture. This article walks through where and why Linear Regression breaks down, and how you can detect and fix those cracks before your data misleads you.

1. The Foundation: How Linear Regression Sees the World

At its heart, Linear Regression assumes the relationship between dependent and independent variables follows:

Y = β₀ + β₁X₁ + β₂X₂ + … + βₙXₙ + ε

Where ε (epsilon) represents the “noise” — the part we can’t explain.
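As a quick illustration of the equation, here is a minimal sketch that generates made-up data obeying the model and recovers the β coefficients with ordinary least squares (all numbers here are invented for illustration):

```python
import numpy as np

# Made-up data following Y = b0 + b1*X1 + b2*X2 + noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
noise = rng.normal(scale=0.1, size=100)          # the epsilon term
y = 2.0 + 3.0 * X[:, 0] - 1.5 * X[:, 1] + noise

# Ordinary least squares: add an intercept column and solve for the betas
X_design = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print(beta)  # should land close to the true values [2.0, 3.0, -1.5]
```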

But for this equation to make sense, several assumptions must hold true:

  • Linearity of relationship — X and Y must be related through a straight line.
  • Independence of errors — Residuals should not influence each other.
    (Especially important in time-series data.)
  • Homoscedasticity — Error variance must remain constant.
    If residuals “fan out”, your estimates become unreliable.
  • Normality of residuals — Required for statistically valid inference.
  • No multicollinearity — Predictors must not be strongly correlated with each other.
    If they are, the coefficients become unstable and misleading.

Break even one, and your seemingly perfect R² might turn into a statistical illusion.

2. Multicollinearity: The Silent Model Destroyer

While non-linearity and heteroscedasticity get plenty of attention, multicollinearity often goes unnoticed, and it causes some of the worst and hardest-to-spot failures.

What is multicollinearity?
It occurs when predictors are strongly correlated, making it difficult for the model to determine which variable is responsible for changes in the target variable.

This results in:

  • unstable coefficients
  • inflated standard errors
  • contradictory signs (+/–) on coefficients
  • meaningless p-values
  • misleading business conclusions

To detect it, analysts use Variance Inflation Factor (VIF):

VIFᵢ = 1 / (1 − Rᵢ²), where Rᵢ² comes from regressing predictor i on all the other predictors.
  • VIF = 1 → ideal
  • VIF = 1–5 → acceptable
  • VIF > 5 → caution
  • VIF > 10 → severe multicollinearity

Let’s look at a real costing example where this becomes critical.

3. Case Study: When a Costing Model Collapsed Because of Multicollinearity

A manufacturing company wanted to understand what drives Total Production Cost.
They built a regression using:

  • Raw Material Cost
  • Labour Cost
  • Machine Hours
  • Overheads

Everything looked perfect on paper — high R², significant predictors, clean summary output.

But the analyst noticed strange symptoms:

  • Labour Cost sometimes appeared with a negative coefficient
  • Coefficients changed drastically with new data
  • p-values were highly unstable
  • Predictions deviated oddly at higher production levels

These are classic signs of multicollinearity.

Image (Mini Table For Case Study)

Labour Cost clearly had severe VIF inflation.

Using the formula:
Rearranging VIF = 1 / (1 − R²) gives R² = 1 − 1/VIF, so we can plug in Labour Cost’s VIF to recover its R².

Image (Computation of R-square)

The computation above implies that 88.9% of the variation in Labour Cost is explained by the other variables.

Root Cause: Labour Cost ≈ Machine Hours × Production Volume

In real manufacturing setups:

  • More production → more labour hours
  • More labour → higher labour cost
  • More machine hours → higher labour usage
  • Overheads also scale up with high production

The model wasn’t seeing four independent variables.
It was seeing multiple versions of the same underlying cost driver.

The result?
Unstable, misleading coefficients that could’ve easily led to wrong business decisions.

4. How the Analysts Fixed the Model

Several solutions are commonly used:

1. Remove the redundant variable
If Labour Cost is strongly explained by Machine Hours, drop one of them.

2. Combine variables
Create engineered features like:

Image (Formula for Labour Per Machine Hours)
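For instance, a ratio feature such as labour cost per machine hour can stand in for the two correlated columns; a sketch in pandas (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "LabourCost": [80, 96, 104, 120],
    "MachineHours": [5, 6, 6.5, 7],
})

# One engineered ratio replaces two highly correlated raw columns
df["LabourPerMachineHour"] = df["LabourCost"] / df["MachineHours"]
print(df)
```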

3. Use regularization (Ridge or Lasso)

These techniques penalize the inflated coefficients and thereby stabilize the model.
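A sketch of the regularization route using scikit-learn’s Ridge (the data and the penalty strength are made up for illustration, not tuned):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Two nearly identical predictors: OLS would split their effect erratically
rng = np.random.default_rng(4)
x1 = rng.normal(size=100)
x2 = x1 + 0.05 * rng.normal(size=100)
y = 3 * x1 + rng.normal(scale=0.5, size=100)
X = np.column_stack([x1, x2])

# The L2 penalty shrinks both coefficients toward a shared, stable value
ridge = Ridge(alpha=10.0).fit(X, y)
print(ridge.coef_)  # the two coefficients stay close; their sum is near 3
```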

After cleaning multicollinearity, the model became:

  • interpretable
  • stable across samples
  • statistically trustworthy
  • business-ready

This is mature regression practice: not just fitting a model, but making sure it reflects reality rather than producing fancy numbers.

5. The Analyst’s Diagnostic Toolkit

Before trusting any regression model, check:

  • Residual Plot → linearity + variance
  • VIF → multicollinearity
  • QQ Plot → residual normality
  • Durbin-Watson → autocorrelation
  • Cook’s Distance → outliers
  • Scatterplots → always help

Two minutes of diagnostics can save you hours of bad analysis.

Below is a simple Python code snippet for practical learning.
Step 1: Loading the required packages and creating a toy dataset —

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Toy production-cost data; note that LabourCost is exactly 0.8 * RawMaterial,
# so the predictors are perfectly collinear by construction
data = {
    'RawMaterial': [100, 120, 130, 150, 170, 190],
    'LabourCost': [80, 96, 104, 120, 136, 152],
    'MachineHours': [5, 6, 6.5, 7, 7.5, 8],
    'Overheads': [50, 55, 60, 65, 70, 75],
    'TotalCost': [260, 295, 315, 350, 385, 415],
}
df = pd.DataFrame(data)
print(df)

Step 2: Adding a constant and fitting the OLS regression model —

feature_cols = ['RawMaterial', 'LabourCost', 'MachineHours', 'Overheads']
X = df[feature_cols]
y = df['TotalCost']

X_const = sm.add_constant(X)
model = sm.OLS(y, X_const).fit()
print(model.summary())
Image (OLS Regression Summary)

Step 3: Computing VIF (Variance Inflation Factor)

vif_df = pd.DataFrame()
vif_df['Variable'] = feature_cols
# VIF for each predictor: regress it on the others and apply 1 / (1 - R^2)
vif_df['VIF'] = [
    variance_inflation_factor(X.values, i)
    for i in range(len(feature_cols))
]

print(vif_df)

Note: the code above only demonstrates how to compute VIF; we compute VIF to quantify the multicollinearity among the predictors.

Interpretations of the above OLS Regression Summary:
> R² = 1.000
→ indicates a perfect fit, which is unrealistic and itself suggests severe multicollinearity or an exact linear relationship in the data.

> Raw Material & Labour Cost show borderline significance (p ≈ 0.055), but their standard errors are inflated, which is another sign of multicollinearity.

> Machine Hours & Overheads have very high standard errors and are statistically insignificant → the model cannot isolate their individual contributions.

> Condition Number = 1.38e+17 → extremely high; confirms that the predictors are strongly correlated (multicollinearity problem).

> Coefficients look unstable (wide confidence intervals), meaning their interpretations cannot be trusted.

> Durbin–Watson ≈ 2.36 → no autocorrelation issues; the problem is not time-related.

> Residual normality metrics (Omnibus, JB) are unreliable due to very small sample size.
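The condition-number red flag can be reproduced directly from the toy dataset used in the snippet above: the ratio of the largest to the smallest singular value of the predictor matrix explodes because LabourCost is an exact multiple of RawMaterial.

```python
import numpy as np
import pandas as pd

# Same predictor columns as the earlier snippet
X = pd.DataFrame({
    'RawMaterial': [100, 120, 130, 150, 170, 190],
    'LabourCost': [80, 96, 104, 120, 136, 152],   # exactly 0.8 * RawMaterial
    'MachineHours': [5, 6, 6.5, 7, 7.5, 8],
    'Overheads': [50, 55, 60, 65, 70, 75],
}).to_numpy()

# Condition number = largest singular value / smallest singular value
s = np.linalg.svd(X, compute_uv=False)
print(s[0] / s[-1])  # astronomically large => near-exact linear dependence
```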

Key Takeaway:
The regression appears perfect on paper, but the extremely high condition number and unstable coefficients show that the model is misleading due to multicollinearity.

Conclusion

When the assumptions of Linear Regression are violated, the model stops being reliable. The model in this example appears statistically valid, yet it cannot be trusted to give practically valid results.
If an analysis reports a high R² and well-estimated coefficients, complete with all the relevant summary details, but never checks for multicollinearity, the analyst may be unaware of how badly it is distorting the relationships between the variables.
By evaluating Variance Inflation Factors, condition numbers, and residual patterns, analysts can avoid incorrect conclusions and ensure that their models represent reality rather than statistical artefacts.

Thank you for reading my article!!!
