
Gradient Descent Algorithm Explained

Last Updated on July 24, 2023 by Editorial Team

Author(s): Pratik Shukla

Originally published on Towards AI.

With Step-By-Step Mathematical Derivation

Source: Unsplash

Index:

  • Basics Of Gradient Descent.
  • Basic Rules Of Derivation.
  • Gradient Descent With One Variable.
  • Gradient Descent With Two Variables.
  • Gradient Descent For Mean Squared Error Function.

What is Gradient Descent?

Gradient Descent is an iterative optimization algorithm used in machine learning to find the parameter values that minimize a cost function. It takes into account a user-defined learning rate and the initial values of the parameters.

How does it work?

  • Start with initial parameter values.
  • Calculate the cost.
  • Update the values using the update function.
  • Repeat until we reach the minimized cost for our cost function (a minimal sketch of this loop follows below).
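
In Python, that loop might look like this (the cost function, gradient, starting value, and learning rate here are illustrative choices, not fixed by the algorithm itself):

```python
# A minimal gradient descent loop. `grad` is the derivative of the
# cost function; `theta` is the initial parameter value.
def gradient_descent(grad, theta, alpha=0.1, iterations=100):
    for _ in range(iterations):
        theta = theta - alpha * grad(theta)  # move against the gradient
    return theta

# Example: J(theta) = theta**2, whose derivative is 2 * theta.
theta_min = gradient_descent(grad=lambda t: 2 * t, theta=5.0)
print(theta_min)  # very close to 0, the minimizer of J
```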

Why do we need it?

Generally, we derive a formula that gives us the optimal values of our parameters directly. With this algorithm, however, the values are found iteratively by the algorithm itself. Interesting, isn’t it?

Formula:

Gradient Descent Formula
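
The formula image is not reproduced here; in standard notation, the gradient descent update rule is:

\[
\theta := \theta - \alpha \, \frac{\partial}{\partial \theta} J(\theta)
\]

where α is the learning rate and J(θ) is the cost function.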

Some Basic Rules For Derivation:

( A ) Scalar Multiple Rule:

Source: Image created by the author.

( B ) Sum Rule:

Source: Image created by the author.

( C ) Power Rule:

Source: Image created by the author.

( D ) Chain Rule:

Source: Image created by the author.
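
The rule images are not reproduced above; for reference, the four standard derivative rules are:

\[
\frac{d}{dx}\bigl[c \cdot f(x)\bigr] = c \cdot f'(x), \qquad
\frac{d}{dx}\bigl[f(x) + g(x)\bigr] = f'(x) + g'(x),
\]
\[
\frac{d}{dx}\bigl[x^n\bigr] = n \, x^{\,n-1}, \qquad
\frac{d}{dx}\, f\bigl(g(x)\bigr) = f'\bigl(g(x)\bigr) \cdot g'(x)
\]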

Let’s have a look at various examples to understand it better.

Gradient Descent Minimization - Single Variable:

We’re going to use gradient descent to find the θ that minimizes the cost. But let’s forget the Mean Squared Error (MSE) cost function for a moment and look at gradient descent in general.

Normally, we would find the best values of our parameters through some algebraic simplification and obtain a function that gives us the minimized cost. But here we’ll instead take some default or random values for our parameters and let our program run iteratively to find the minimized cost.

Let’s Explore It In-Depth:

Let’s take a very simple function to begin with: J(θ) = θ², and our goal is to find the value of θ which minimizes J(θ).

From our cost function, we can clearly see that it is minimized at θ = 0, but it won’t be so easy to draw such conclusions while working with more complex functions.

( A ) Cost function: We’ll try to minimize the value of this function.

Source: Image created by the author.

( B ) Goal: To minimize the cost function.

Source: Image created by the author.

( C ) Update Function: Initially, we take a random value for our parameter, which is not optimal. To make it optimal, we have to update it at each iteration; this function takes care of that.

Source: Image created by the author.

( D ) Learning rate: The descent speed, i.e., how large a step we take at each iteration.

Source: Image created by the author.

( E ) Updating Parameters:

Source: Image created by the author.
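
The formula images are not reproduced above, but the setup can be reconstructed from the table that follows (the learning rate α = 0.1 is implied by the 0.8·θ update mentioned below):

\[
J(\theta) = \theta^2, \qquad \min_{\theta} J(\theta), \qquad
\theta := \theta - \alpha \frac{dJ}{d\theta} = \theta - \alpha \cdot 2\theta = (1 - 2\alpha)\,\theta = 0.8\,\theta
\]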

( F ) Table Generation:

Here we are starting with θ = 5.

Keep in mind that here the update works out to θ = 0.8*θ, given our learning rate and cost function.

Source: Image created by the author.
Source: Image created by the author.
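
Since the table image is not reproduced, here are the first few iterations under that setup (θ starts at 5 and is multiplied by 0.8 each step):

Iteration    θ        J(θ) = θ²
0            5.000    25.000
1            4.000    16.000
2            3.200    10.240
3            2.560     6.554
4            2.048     4.194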

Here we can see that as θ decreases, the cost value also decreases. We just have to find its optimal value. To do that, we have to perform many iterations: the more iterations we run, the closer we get to the optimal value.

( G ) Graph: We can plot the graph of the above points.

Source: Image created by the author.

Cost Function Derivative:

Why does gradient descent use the derivative of the cost function? We want our cost function to be at its minimum, right? Minimizing the cost function simply gives us a lower error rate in predicting values. Ideally, we would set the derivative of the function to 0 and solve for the parameters. Here we do much the same thing, but we start from a random value and move toward the minimum iteratively.

The learning rate / ALPHA:

The learning rate gives us solid control over how large the steps we take are. Selecting the right learning rate is a critical task. If the learning rate is too high, you might overstep the minimum and diverge. For example, in the above example, if we take alpha = 2, then each iteration takes us further away from the minimum. So we use small alpha values. The only concern with a small learning rate is that we have to perform more iterations to reach the minimum cost value, which increases training time.
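
A quick numerical check of this trade-off, using the J(θ) = θ² example from above (the alpha values are illustrative):

```python
# For J(theta) = theta**2, the gradient is 2 * theta.
def run(alpha, theta=5.0, iterations=5):
    for _ in range(iterations):
        theta = theta - alpha * 2 * theta
    return theta

print(run(alpha=0.1))  # shrinks toward 0: 5.0 -> 4.0 -> 3.2 -> ...
print(run(alpha=2.0))  # diverges: 5.0 -> -15.0 -> 45.0 -> -135.0 -> ...
```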

Convergence / Stopping gradient descent:

Note that in the above example, gradient descent will never actually converge to the exact minimum of θ = 0. A full treatment of stopping criteria is beyond the scope of this article, but in practice, for assignments, we can simply run a fixed number of iterations, such as 100 or 1,000.
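
One common heuristic (just one option among several) is to stop once the parameter barely changes between iterations, keeping a fixed iteration cap as a fallback:

```python
def gradient_descent(grad, theta, alpha=0.1, tol=1e-8, max_iters=1000):
    for _ in range(max_iters):
        new_theta = theta - alpha * grad(theta)
        if abs(new_theta - theta) < tol:  # update became negligible
            break
        theta = new_theta
    return theta

print(gradient_descent(lambda t: 2 * t, theta=5.0))  # approximately 0
```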

Gradient Descent - Multiple Variables:

Our ultimate goal is to find the parameters of the MSE function, which involves multiple variables. So here we will first discuss a cost function that has two variables. Understanding this will help us greatly with the MSE cost function.

Let’s take this function:

Source: Image created by the author.

When there are multiple variables in the minimization objective, we have to define a separate update rule for each of them. And with more than one parameter in our cost function, we have to use partial derivatives. I have simplified the partial derivative process here. Let’s have a look.
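
In general form, with learning rate α, the per-parameter update rules are:

\[
\theta_1 := \theta_1 - \alpha \frac{\partial J(\theta_1, \theta_2)}{\partial \theta_1}, \qquad
\theta_2 := \theta_2 - \alpha \frac{\partial J(\theta_1, \theta_2)}{\partial \theta_2}
\]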

( A ) Cost Function:

Source: Image created by the author.

( B ) Goal:

Source: Image created by the author.

( C ) Update Rules:

Source: Image created by the author.

( D ) Derivatives:

Source: Image created by the author.
Source: Image created by the author.

( E ) Update Values:

Source: Image created by the author.

( F ) Learning Rate:

Source: Image created by the author.

( G ) Table:

Starting with θ1 = 1 and θ2 = 1, and then updating the values using the update functions.

Source: Image created by the author.

( H ) Graph:

Source: Image created by the author.

Here we can see that as the number of iterations increases, the cost value goes down.

Note that when implementing the program in Python, the parameters must not be overwritten until the new values of both θ1 and θ2 have been computed. We clearly don’t want the new value of θ1 to be used while computing the new value of θ2.
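
Here is a minimal sketch of such a simultaneous update (the cost function J(θ1, θ2) = θ1² + θ2² is a placeholder, not the function used above):

```python
alpha = 0.1
theta1, theta2 = 1.0, 1.0  # starting values, as in the table above

def d_theta1(t1, t2):  # partial derivative of J w.r.t. theta1
    return 2 * t1

def d_theta2(t1, t2):  # partial derivative of J w.r.t. theta2
    return 2 * t2

for _ in range(100):
    # Evaluate both partial derivatives at the OLD (theta1, theta2),
    # then assign together; tuple assignment does exactly this.
    theta1, theta2 = (theta1 - alpha * d_theta1(theta1, theta2),
                      theta2 - alpha * d_theta2(theta1, theta2))

print(theta1, theta2)  # both approach 0
```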

Gradient Descent For Mean Squared Error:

Now that we know how to perform gradient descent on an equation with multiple variables, we can return to looking at gradient descent on our MSE cost function.

Let’s get started!

( A ) Hypothesis function:

Source: Image created by the author.

( B ) Cost function:

Source: Image created by the author.
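
The images are not reproduced above; assuming the standard simple-linear-regression setup that the rest of the derivation suggests, with m training examples (x⁽ⁱ⁾, y⁽ⁱ⁾), they are:

\[
h_\theta(x) = \theta_0 + \theta_1 x, \qquad
J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} \bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^2
\]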

( C ) Find the partial derivative of J(θ0, θ1) w.r.t. θ1:

Source: Image created by the author.

( D ) Simplify a little:

Source: Image created by the author.

( E ) Define a variable u:

Source: Image created by the author.

( F ) Value of u:

Source: Image created by the author.
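
The definition is not shown, but in this kind of derivation u presumably stands for the residual, so that J becomes a sum of squared u terms and the chain rule applies:

\[
u = h_\theta(x^{(i)}) - y^{(i)}, \qquad
\frac{\partial J}{\partial \theta_1} = \frac{\partial J}{\partial u} \cdot \frac{\partial u}{\partial \theta_1},
\qquad \frac{\partial u}{\partial \theta_1} = x^{(i)}
\]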

( G ) Finding partial derivative:

Source: Image created by the author.
Source: Image created by the author.

( H ) Rewriting the equations:

Source: Image created by the author.

( I ) Merge all the calculated data:

Source: Image created by the author.

( J ) Repeat the same process for the derivative of J(θ0, θ1) w.r.t. θ0:

Source: Image created by the author.

( K ) Simplified calculations:

Source: Image created by the author.

( L ) Combine all calculated data:

Source: Image created by the author.

One Half Mean Squared Error:

We multiply our MSE cost function by 1/2 so that when we take the derivative, the 2s cancel out. Multiplying the cost function by a positive scalar does not affect the location of the minimum, so we can get away with this.

Final:

( A ) Cost Function: One Half Mean Squared Error:

Source: Image created by the author.

( B ) Goal:

Source: Image created by the author.

( C ) Update Rule:

Source: Image created by the author.

( D ) Derivatives:

Source: Image created by the author.
Source: Image created by the author.
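
With the one-half factor in place, the 2 from the power rule cancels, and (assuming the linear hypothesis above) the derivatives take their familiar form:

\[
\frac{\partial J}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^{m} \bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr), \qquad
\frac{\partial J}{\partial \theta_1} = \frac{1}{m} \sum_{i=1}^{m} \bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\, x^{(i)}
\]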

So, that’s it. We finally made it!

Conclusion:

We are going to use the same method in various machine learning algorithms and applications. At that point, though, we won’t go into this much depth; we’ll simply use the final formula. But it’s always good to know how it’s derived!

Final Formula:

Gradient Descent Formula
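
As a closing illustration, here is a minimal sketch of these update rules in Python (the synthetic data and hyperparameters are illustrative choices):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.05, iterations=1000):
    m = len(x)
    theta0, theta1 = 0.0, 0.0  # initial parameter values
    for _ in range(iterations):
        error = theta0 + theta1 * x - y  # h(x) - y for every sample
        grad0 = error.sum() / m          # dJ/d(theta0) for one-half MSE
        grad1 = (error * x).sum() / m    # dJ/d(theta1) for one-half MSE
        # Simultaneous update of both parameters.
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
    return theta0, theta1

# Illustrative data generated from y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(0, 0.5, size=100)

theta0, theta1 = gradient_descent(x, y)
print(theta0, theta1)  # should be close to 1 and 2
```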

Is the concept clear to you now? Please let me know in the responses. If you enjoyed this article, hit the clap icon.

If you have any additional confusions, feel free to contact me. [email protected]


Published via Towards AI
