The Normal Equation: The Calculus, the Algebra, and the Code
Author(s): Menzliskander
Originally published on Towards AI.
Introduction:
The normal equation is a closed-form solution used to solve linear regression problems. It allows us to directly compute the optimal parameters of the line (hyperplane) that best fits our data.
In this article, we'll derive the normal equation using both a calculus approach and a linear algebra approach, then implement it in Python. But first, let's recap linear regression.
Linear Regression:
Let's say we have n data points x¹, x², x³, …, xⁿ, where each point has k features, and each data point has a target value yᵢ.
The goal of linear regression is to find parameters θ₀, θ₁, θ₂, …, θₖ that relate each data point to its target value yᵢ.
So we're trying to solve this system of equations:
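$$
\begin{aligned}
y_1 &= \theta_0 + \theta_1 x^1_1 + \theta_2 x^1_2 + \dots + \theta_k x^1_k \\
y_2 &= \theta_0 + \theta_1 x^2_1 + \theta_2 x^2_2 + \dots + \theta_k x^2_k \\
&\;\;\vdots \\
y_n &= \theta_0 + \theta_1 x^n_1 + \theta_2 x^n_2 + \dots + \theta_k x^n_k
\end{aligned}
$$

where x^i_j denotes the j-th feature of data point xⁱ.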
Putting it all in matrix form, we get Xθ = y, with:
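$$
X = \begin{bmatrix}
1 & x^1_1 & x^1_2 & \cdots & x^1_k \\
1 & x^2_1 & x^2_2 & \cdots & x^2_k \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x^n_1 & x^n_2 & \cdots & x^n_k
\end{bmatrix},
\qquad
\theta = \begin{bmatrix}\theta_0 \\ \theta_1 \\ \vdots \\ \theta_k\end{bmatrix},
\qquad
y = \begin{bmatrix}y_1 \\ y_2 \\ \vdots \\ y_n\end{bmatrix}
$$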
Now the problem is that, in most cases, this system is not solvable: we can't fit a straight line exactly through the data.
This is where the normal equation steps in to find the best approximate solution. In practice, the normal equation finds the parameter vector θ that solves Xθ = ŷ, where ŷ is as close as possible to our original target values.
And here is the normal equation:
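$$
\theta = (X^T X)^{-1} X^T y
$$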
How did we get there? Well, there are two ways to explain it.
Calculus:
As we said earlier, we are trying to find the parameters θ so that our predictions ŷ = Xθ are as close as possible to our original y. So we want to minimize the distance between them, i.e., minimize ||y − ŷ||, which is the same as minimizing ||y − ŷ||².
Now all we have to do is solve this minimization problem. First, let's expand it:
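$$
\|y - X\theta\|^2 = (y - X\theta)^T(y - X\theta) = y^T y - 2\,\theta^T X^T y + \theta^T X^T X\,\theta
$$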
Note: Xθ and y are vectors, so we can change the order when we multiply: yᵀXθ = (Xθ)ᵀy = θᵀXᵀy, which is why the two cross terms combine into −2θᵀXᵀy.
Now, to find the minimum, we differentiate with respect to θ and set the result to 0:
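$$
\frac{\partial}{\partial\theta}\big(y^T y - 2\,\theta^T X^T y + \theta^T X^T X\,\theta\big) = -2\,X^T y + 2\,X^T X\,\theta = 0
\;\;\Longrightarrow\;\; X^T X\,\theta = X^T y
\;\;\Longrightarrow\;\; \theta = (X^T X)^{-1} X^T y
$$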
And that's how we arrive at the normal equation. Now there is another approach that will get us there.
Linear Algebra:
Again, our equation is Xθ = y. From matrix multiplication, we know that multiplying a matrix by a vector produces a linear combination of the matrix's columns weighted by the vector's components, so we can write it as:
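$$
X\theta \;=\; \theta_0\begin{bmatrix}1\\ \vdots\\ 1\end{bmatrix}
+ \theta_1\begin{bmatrix}x^1_1\\ \vdots\\ x^n_1\end{bmatrix}
+ \dots
+ \theta_k\begin{bmatrix}x^1_k\\ \vdots\\ x^n_k\end{bmatrix}
\;=\; y
$$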
So for this system to have a solution, y needs to be in the column space of X (denoted C(X)). Since that's usually not the case, we have to settle for the next best thing: solving it for the closest approximation of y in C(X).
And that's just the projection of y onto C(X)!
ŷ is the projection of y onto C(X), so we can write it as ŷ = Xθ.
The error e = y − ŷ is orthogonal to C(X), so Xᵀ multiplied by e is equal to 0.
Now, putting all this together:
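$$
X^T e = X^T(y - X\theta) = 0
\;\;\Longrightarrow\;\; X^T X\,\theta = X^T y
\;\;\Longrightarrow\;\; \theta = (X^T X)^{-1} X^T y
$$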
As we can see, we get the exact same result!
Code:
Now, implementing this in Python is fairly straightforward.
First, we'll create some data:
import numpy as np
import matplotlib.pyplot as plt
#generating 100 random feature values between 0 and 3
X=3*np.random.rand(100,1)
#generating the labels using the function y=2X+3+gaussian noise
Y=2*X+3+np.random.randn(100,1)
#displaying the data
plt.scatter(X,Y)
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Random data(y=2X+3+gaussian noise)')
plt.show()
#adding ones column for the bias term
X1 = np.c_[np.ones((100,1)),X]
#applying the normal equation:
theta = np.linalg.inv(X1.T.dot(X1)).dot(X1.T).dot(Y)
#we find that theta is equal to: array([[2.78609912],[2.03156946]])
#the actual function we used is y=3+2x+gaussian noise
#so our approximation is pretty good
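As a side note, the same solution can also be computed with NumPy's built-in least-squares solver, which avoids forming the explicit inverse and is usually more numerically stable. This is just a sketch reusing the X1 and Y defined above:

#equivalent solution via NumPy's least-squares solver (no explicit inverse)
theta_lstsq, residuals, rank, sv = np.linalg.lstsq(X1, Y, rcond=None)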
Now all that's left is to use our θ parameters to make predictions:
Y_predict=X1.dot(theta)
plt.plot(X,Y,"b.")
plt.plot(X,Y_predict,"r-",label="predictions")
plt.legend()
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Random data(y=2X+3+gaussian noise)')
plt.show()
Conclusion:
As we saw, the normal equation is straightforward and directly yields the optimal parameters. However, it is rarely used on large datasets because it involves inverting the XᵀX matrix, which is computationally expensive (roughly cubic in the number of features). That's why an iterative approach like gradient descent is usually preferred.
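For comparison, here is a minimal batch gradient descent sketch on the same data; the learning rate and iteration count below are illustrative choices, not tuned values:

#batch gradient descent sketch (illustrative hyperparameters)
eta = 0.1 #learning rate
n_iterations = 1000
m = len(X1) #number of data points
theta_gd = np.random.randn(2,1) #random initialization
for iteration in range(n_iterations):
    gradients = 2/m * X1.T.dot(X1.dot(theta_gd) - Y)
    theta_gd = theta_gd - eta * gradients
#theta_gd ends up close to the normal-equation solution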