The Gradient Descent Algorithm and its Variants

Last Updated on October 25, 2022 by Editorial Team


Image by Sara from Pixabay

Gradient Descent Algorithm with Code Examples in Python

Author(s): Pratik Shukla

"Educating the mind without educating the heart is no education at all." ― Aristotle

The Gradient Descent Series of Blogs:

  1. The Gradient Descent Algorithm
  2. Mathematical Intuition behind the Gradient Descent Algorithm
  3. The Gradient Descent Algorithm & its Variants (You are here!)

Table of contents:

  1. Introduction
  2. Batch Gradient Descent (BGD)
  3. Stochastic Gradient Descent (SGD)
  4. Mini-Batch Gradient Descent (MBGD)
  5. Graph Comparison
  6. End Notes
  7. Resources
  8. References

Introduction:

Drumroll, please: Welcome to the finale of the Gradient Descent series! In this blog, we will dive deeper into the gradient descent algorithm and discuss all of its fun flavors, along with their code examples in Python. We will also compare the algorithms based on the number of calculations each one performs. We're leaving no stone unturned today, so we encourage you to run the Google Colab notebooks as you read; seeing each algorithm in action will give you a much clearer understanding of the topic. Let's get into it!

Batch Gradient Descent:

Working of the Batch Gradient Descent (BGD) Algorithm

The Batch Gradient Descent (BGD) algorithm considers all the training examples in each iteration. If the dataset contains a large number of training examples and a large number of features, implementing the Batch Gradient Descent (BGD) algorithm becomes computationally expensive, so mind your budget! Let's take an example to understand this better.

Batch Gradient Descent (BGD):

Number of training examples per iteration = 1 million = 10⁶
Number of iterations = 1000 = 10³
Number of parameters to be trained = 10000 = 10⁴
Total computations = 10⁶ * 10³ * 10⁴ = 10¹³

Now, let's see how the Batch Gradient Descent (BGD) algorithm is implemented.

Step 1:

First, we download the data file from the GitHub repository.

Step 2:

Next, we import the libraries required to read, manipulate, and visualize the data.
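For reference, a minimal sketch of these first two steps might look like the following; the file URL and name are placeholders, since the exact repository path isn't reproduced here.

    import urllib.request

    import numpy as np               # matrix calculations
    import pandas as pd              # reading and manipulating the data
    import matplotlib.pyplot as plt  # visualizing the results

    # Placeholder URL: substitute the actual raw GitHub link to the data file.
    url = "https://raw.githubusercontent.com/<user>/<repo>/main/data.csv"
    urllib.request.urlretrieve(url, "data.csv")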

Step 3:

Next, we read the data file and print its first five rows.

Step 4:

Next, we divide the dataset into features and target variables.

Dimensions: X = (200, 3) & Y = (200, )

Step 5:

To perform matrix calculations in further steps, we need to reshape the target variable.

Dimensions: X = (200, 3) & Y = (200, 1)

Step 6:

Next, we normalize the dataset.

Dimensions: X = (200, 3) & Y = (200, 1)
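In code, steps 3 through 6 might look roughly like this, reusing the imports from step 2. The column layout (three feature columns followed by the target) and the min-max normalization are assumptions, since the original notebook isn't embedded here.

    df = pd.read_csv("data.csv")
    print(df.head())                     # step 3: first five rows

    # Step 4: split into features and target (assumes the target is the last column).
    X = df.iloc[:, :-1].values           # shape (200, 3)
    Y = df.iloc[:, -1].values            # shape (200,)

    # Step 5: reshape the target into a column vector for matrix math.
    Y = Y.reshape(-1, 1)                 # shape (200, 1)

    # Step 6: min-max normalize the features (one common choice of normalization).
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))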

Step 7:

Next, we set the initial values for the bias and weights matrices. We will use these values in the first iteration while performing forward propagation.

Dimensions: bias = (1, 1) & weights = (1, 3)

Step 8:

Next, we perform the forward propagation step. This step is based on the following formula.

Dimensions: predicted_value = (1, 1) + (200, 3) * (3, 1) = (1, 1) + (200, 1) = (200, 1)
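A minimal sketch of steps 7 and 8, with shapes matching the dimensions listed above; the function names are illustrative, not the notebook's own.

    def initialize(n_features):
        # Zero initialization; small random values would work just as well.
        bias = np.zeros((1, 1))              # shape (1, 1)
        weights = np.zeros((1, n_features))  # shape (1, 3)
        return bias, weights

    def forward_propagation(X, bias, weights):
        # predicted_value = bias + X @ weights.T
        # (1, 1) + (200, 3) @ (3, 1) -> (200, 1) via broadcasting
        return bias + X @ weights.T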

Step 9:

Next, we calculate the cost associated with our prediction. This step is based on the following formula.

Dimensions: cost = scalar value
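The cost here is the mean squared error. A sketch, assuming the common convention of halving the MSE to simplify the derivative (the original notebook's constant may differ):

    def cost_function(predicted_value, Y):
        m = Y.shape[0]  # number of training examples
        return np.sum((predicted_value - Y) ** 2) / (2 * m)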

Step 10:

Next, we update the parameter values of the weights and bias using the gradient descent algorithm. This step is based on the following formulas. Please note that we do not sum over the values of the weights, because our weight matrix is not a 1*1 matrix.

Dimensions: db = sum(200, 1) = (1, 1)

Dimensions: dw = (1, 200) * (200, 3) = (1, 3)

Dimensions: bias = (1, 1) & weights = (1, 3)
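A sketch of the update step consistent with the shapes above, assuming the halved-MSE cost from step 9:

    def update_parameters(X, Y, predicted_value, bias, weights, learning_rate):
        error = predicted_value - Y   # shape (200, 1)
        m = Y.shape[0]
        db = np.sum(error) / m        # bias gradient: sum over all examples
        dw = (error.T @ X) / m        # (1, 200) @ (200, 3) -> (1, 3)
        bias = bias - learning_rate * db
        weights = weights - learning_rate * dw
        return bias, weights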

Step 11:

Next, we use all the functions we just defined to run the gradient descent algorithm. We also create an empty list called cost_list to store the cost value of every iteration; this list will be put to use to plot a graph in further steps.
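Wiring the helper functions into a driver might look like this sketch:

    def gradient_descent(X, Y, iterations=200, learning_rate=0.01):
        bias, weights = initialize(X.shape[1])
        cost_list = []  # cost of every iteration, for plotting later
        for _ in range(iterations):
            predicted_value = forward_propagation(X, bias, weights)
            cost_list.append(cost_function(predicted_value, Y))
            bias, weights = update_parameters(
                X, Y, predicted_value, bias, weights, learning_rate)
        return bias, weights, cost_list

Calling bias, weights, cost_list = gradient_descent(X, Y, iterations=200, learning_rate=0.01) then reproduces the run described in step 12.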

Step 12:

Next, we call the function to get the final results. Please note that we run the entire code for 200 iterations, and here we have specified a learning rate of 0.01.

Step 13:

Next, we plot the graph of iterations vs. cost.

Step 14:

Next, we print the final weight values after all the iterations are done.

Step 15:

Next, we print the final bias value after all the iterations are done.

Step 16:

Next, we plot two graphs with different learning rates to see the effect of the learning rate on optimization. In the following graph, we can see that the run with the higher learning rate (0.01) converges faster than the run with the lower learning rate (0.001). As we learned in Part 1 of the Gradient Descent series, this is because the run with the lower learning rate takes smaller steps.
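Step 16's comparison can be reproduced with something like the following, reusing the driver function sketched above:

    # Run the same 200 iterations with two learning rates and compare.
    _, _, costs_fast = gradient_descent(X, Y, iterations=200, learning_rate=0.01)
    _, _, costs_slow = gradient_descent(X, Y, iterations=200, learning_rate=0.001)
    plt.plot(costs_fast, label="learning rate = 0.01")
    plt.plot(costs_slow, label="learning rate = 0.001")
    plt.xlabel("Iterations")
    plt.ylabel("Cost")
    plt.legend()
    plt.show()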

Step 17:

Let's put it all together.

Number of Calculations:

Now, let's count the number of calculations performed in the batch gradient descent algorithm.

Bias: (training examples) x (iterations) x (parameters) = 200 * 200 * 1 = 40000

Weights: (training examples) x (iterations) x (parameters) = 200 * 200 * 3 = 120000

Stochastic Gradient Descent

Working of the Stochastic Gradient Descent (SGD) Algorithm

In the batch gradient descent algorithm, we consider all the training examples in every iteration. If our dataset has a large number of training examples and/or features, it gets computationally expensive to calculate the parameter values. We know our machine learning algorithm will yield more accuracy if we provide it with more training examples, but as the size of the dataset increases, the computations associated with it also increase. Let's take an example to understand this better.

Batch Gradient Descent (BGD)

Number of training examples per iteration = 1 million = 10⁶
Number of iterations = 1000 = 10³
Number of parameters to be trained = 10000 = 10⁴
Total computations = 10⁶ * 10³ * 10⁴ = 10¹³

Now, that number does not give us excellent vibes! Using the Batch Gradient Descent algorithm here is clearly inefficient. To deal with this problem, we use the Stochastic Gradient Descent (SGD) algorithm. The word "stochastic" means random: instead of performing the calculations on all the training examples of a dataset, we take one random example and perform the calculations on that. Sounds interesting, doesn't it? We consider just one training example per iteration in the Stochastic Gradient Descent (SGD) algorithm. Let's see how effective Stochastic Gradient Descent is based on its calculations.

Stochastic Gradient Descent (SGD):

Number of training examples per iteration = 1
Number of iterations = 1000 = 10³
Number of parameters to be trained = 10000 = 10⁴
Total computations = 1 * 10³ * 10⁴ = 10⁷

Comparison with Batch Gradient Descent:

Total computations in BGD = 10¹³
Total computations in SGD = 10⁷
Evaluation: SGD is 10⁶ times faster than BGD in this example.

Note: Please be aware that the cost might not go down at every iteration, since we take just one random training example each time, so don't worry if it fluctuates. The cost will still gradually decrease as we perform more and more iterations.

Now, let's see how the Stochastic Gradient Descent (SGD) algorithm is implemented.

Step 1:

First, we download the data file from the GitHub repository.

Step 2:

Next, we import the libraries required to read, manipulate, and visualize the data.

Step 3:

Next, we read the data file and print its first five rows.

Step 4:

Next, we divide the dataset into features and target variables.

Dimensions: X = (200, 3) & Y = (200, )

Step 5:

To perform matrix calculations in further steps, we need to reshape the target variable.

Dimensions: X = (200, 3) & Y = (200, 1)

Step 6:

Next, we normalize the dataset.

Dimensions: X = (200, 3) & Y = (200, 1)

Step 7:

Next, we set the initial values for the bias and weights matrices. We will use these values in the first iteration while performing forward propagation.

Dimensions: bias = (1, 1) & weights = (1, 3)

Step 8:

Next, we perform the forward propagation step, this time on a single randomly chosen training example. This step is based on the following formula.

Dimensions: predicted_value = (1, 1) + (1, 3) * (3, 1) = (1, 1) + (1, 1) = (1, 1)

Step 9:

Next, we calculate the cost associated with our prediction. The formula used for this step is as follows. Because there is only one error value, we don't need to divide the cost function by the size of the dataset or sum up all the cost values.

Dimensions: cost = scalar value

Step 10:

Next, we update the parameter values of the weights and bias using the gradient descent algorithm. This step is based on the following formulas. Please note that we do not sum over the values of the weights, because our weight matrix is not a 1*1 matrix. Also, since we have only one training example in this case, we don't need to perform the summation over all the examples. The updated formula is given as follows.

Dimensions: db = (1, 1)

Dimensions: dw = (1, 1) * (1, 3) = (1, 3)

Dimensions: bias = (1, 1) & weights = (1, 3)
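For a single randomly chosen example x_i of shape (1, 3) with prediction predicted_value and label y_i (both (1, 1)), the update collapses to the following sketch; the variable names mirror the BGD section and are illustrative.

    error = predicted_value - y_i  # (1, 1): a single error value
    db = error                     # bias gradient, no summation needed
    dw = error @ x_i               # (1, 1) @ (1, 3) -> (1, 3)
    bias = bias - learning_rate * db
    weights = weights - learning_rate * dw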

Step 11:

Next, we use all the functions we just defined to run the stochastic gradient descent algorithm, again collecting each iteration's cost in a list called cost_list so we can plot it in further steps.

Step 12:

Next, we call the function to get the final results. Please note that we run the entire code for 200 iterations, and here we have specified a learning rate of 0.01.

Step 13:

Next, we print the final weight values after all the iterations are done.

Step 14:

Next, we print the final bias value after all the iterations are done.

Step 15:

Next, we plot the graph of iterations vs. cost.

Step 16:

Next, we plot two graphs with different learning rates to see the effect of the learning rate on optimization. In the following graph, we can see that the run with the higher learning rate (0.01) converges faster than the run with the lower learning rate (0.001). Again, this is because the run with the lower learning rate takes smaller steps.

Step 17:

Putting it all together.
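A compact sketch of the whole SGD loop, reusing the helper functions from the BGD section; drawing the example with np.random.randint is an assumption about how the notebook samples.

    def stochastic_gradient_descent(X, Y, iterations=200, learning_rate=0.01):
        bias, weights = initialize(X.shape[1])
        cost_list = []
        for _ in range(iterations):
            i = np.random.randint(X.shape[0])  # pick one random training example
            x_i, y_i = X[i:i + 1], Y[i:i + 1]  # shapes (1, 3) and (1, 1)
            predicted_value = forward_propagation(x_i, bias, weights)  # (1, 1)
            cost_list.append(cost_function(predicted_value, y_i))
            bias, weights = update_parameters(
                x_i, y_i, predicted_value, bias, weights, learning_rate)
        return bias, weights, cost_list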

Calculations:

Now, let's count the number of calculations performed in implementing the stochastic gradient descent algorithm.

Bias: (training examples) x (iterations) x (parameters) = 1 * 200 * 1 = 200

Weights: (training examples) x (iterations) x (parameters) = 1 * 200 * 3 = 600

Mini-Batch Gradient Descent Algorithm:

Working of the Mini-Batch Gradient Descent (MBGD) Algorithm

In the Batch Gradient Descent (BGD) algorithm, we consider all the training examples in every iteration, while in the Stochastic Gradient Descent (SGD) algorithm, we consider only one randomly chosen training example. Now, in the Mini-Batch Gradient Descent (MBGD) algorithm, we consider a random subset of training examples in each iteration. Since this is not as random as SGD, we get closer to the global minimum; however, MBGD is still susceptible to getting stuck in local minima. Let's take an example to understand this better.

Batch Gradient Descent (BGD):

Number of training examples per iteration = 1 million = 10⁶
Number of iterations = 1000 = 10³
Number of parameters to be trained = 10000 = 10⁴
Total computations = 10⁶ * 10³ * 10⁴ = 10¹³

Stochastic Gradient Descent (SGD):

Number of training examples per iteration = 1
Number of iterations = 1000 = 10³
Number of parameters to be trained = 10000 = 10⁴
Total computations = 1 * 10³ * 10⁴ = 10⁷

Mini-Batch Gradient Descent (MBGD):

Number of training examples per iteration = 100 = 10²
→ Here, we are considering 10² training examples out of 10⁶.
Number of iterations = 1000 = 10³
Number of parameters to be trained = 10000 = 10⁴
Total computations = 10² * 10³ * 10⁴ = 10⁹

Comparison with Batch Gradient Descent (BGD):

Total computations in BGD = 10¹³
Total computations in MBGD = 10⁹

Evaluation: MBGD is 10⁴ times faster than BGD in this example.

Comparison with Stochastic Gradient Descent (SGD):

Total computations in SGD = 10⁷
Total computations in MBGD = 10⁹

Evaluation: SGD is 10² times faster than MBGD in this example.

Comparison of BGD, SGD, and MBGD:

Total computations in BGD = 10¹³
Total computations in SGD = 10⁷
Total computations in MBGD = 10⁹

Evaluation (fastest to slowest): SGD > MBGD > BGD

Note: Please be aware that the cost might not go down at every iteration, since we take a random sample of the training examples each time. However, the cost will gradually decrease as we perform more and more iterations.

Now, let's see how the Mini-Batch Gradient Descent (MBGD) algorithm is implemented in practice.

Step 1:

First, we download the data file from the GitHub repository.

Step 2:

Next, we import the libraries required to read, manipulate, and visualize the data.

Step 3:

Next, we read the data file and print its first five rows.

Step 4:

Next, we divide the dataset into features and target variables.

Dimensions: X = (200, 3) & Y = (200, )

Step 5:

To perform matrix calculations in further steps, we need to reshape the target variable.

Dimensions: X = (200, 3) & Y = (200, 1)

Step 6:

Next, we normalize the dataset.

Dimensions: X = (200, 3) & Y = (200, 1)

Step 7:

Next, we set the initial values for the bias and weights matrices. We will use these values in the first iteration while performing forward propagation.

Dimensions: bias = (1, 1) & weights = (1, 3)

Step 8:

Next, we perform the forward propagation step on the current mini-batch. This step is based on the following formula.

Dimensions: predicted_value = (1, 1) + (20, 3) * (3, 1) = (1, 1) + (20, 1) = (20, 1)

Step 9:

Next, we calculate the cost associated with our prediction. This step is based on the following formula.

Dimensions: cost = scalar value

Step 10:

Next, we update the parameter values of the weights and bias using the gradient descent algorithm. This step is based on the following formulas. Please note that we do not sum over the values of the weights, because our weight matrix is not a 1*1 matrix.

Dimensions: db = sum(20, 1) = (1, 1)

Dimensions: dw = (1, 20) * (20, 3) = (1, 3)

Dimensions: bias = (1, 1) & weights = (1, 3)

Step 11:

Next, we use all the functions we just defined to run the gradient descent algorithm. We also create an empty list called cost_list to store the cost value of every iteration, which we will use to plot a graph in further steps.

Step 12:

Next, we call the function to get the final results. Please note that we run the entire code for 200 iterations, and here we have specified a learning rate of 0.01.

Step 13:

Next, we print the final weight values after all the iterations are done.

Step 14:

Next, we print the final bias value after all the iterations are done.

Step 15:

Next, we plot the graph of iterations vs. cost.

Step 16:

Next, we plot two graphs with different learning rates to see the effect of the learning rate on optimization. In the following graph, we can see that the run with the higher learning rate (0.01) converges faster than the run with the lower learning rate (0.001). The reason is that the run with the lower learning rate takes smaller steps.

Step 17:

Putting it all together.
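And a sketch of the mini-batch loop, again reusing the BGD helpers. The batch size of 20 matches the calculation counts below; sampling without replacement is an assumption.

    def mini_batch_gradient_descent(X, Y, iterations=200,
                                    batch_size=20, learning_rate=0.01):
        bias, weights = initialize(X.shape[1])
        cost_list = []
        for _ in range(iterations):
            # Draw a random mini-batch of training examples.
            idx = np.random.choice(X.shape[0], size=batch_size, replace=False)
            x_b, y_b = X[idx], Y[idx]  # shapes (20, 3) and (20, 1)
            predicted_value = forward_propagation(x_b, bias, weights)  # (20, 1)
            cost_list.append(cost_function(predicted_value, y_b))
            bias, weights = update_parameters(
                x_b, y_b, predicted_value, bias, weights, learning_rate)
        return bias, weights, cost_list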

Calculations:

Now, let's count the number of calculations performed in implementing the mini-batch gradient descent algorithm, with a batch size of 20.

Bias: (training examples) x (iterations) x (parameters) = 20 * 200 * 1 = 4000

Weights: (training examples) x (iterations) x (parameters) = 20 * 200 * 3 = 12000

Graph Comparison:

Comparison of the Batch, Stochastic, and Mini-Batch Gradient Descent Algorithms

End Notes:

And just like that, we're at the end of the Gradient Descent series! In this installment, we went deep into the code to look at how three of the major types of gradient descent algorithms perform next to each other, summed up by these handy notes:

1. Batch Gradient Descent
Accuracy → High
Time → More

2. Stochastic Gradient Descent
Accuracy → Low
Time → Less

3. Mini-Batch Gradient Descent
Accuracy → Moderate
Time → Moderate

We hope you enjoyed this series and learned something new, no matter your starting point or machine learning background. Knowing this essential algorithm and its variants will likely prove valuable as you continue on your AI journey and come to understand more about both the technical and grand aspects of this incredible technology. Keep an eye out for other blogs offering even more machine learning lessons, and stay curious!

Buy Pratik a Coffee!

Resources:

  1. Batch Gradient Descent – Google Colab, GitHub
  2. Stochastic Gradient Descent – Google Colab, GitHub
  3. Mini-Batch Gradient Descent – Google Colab, GitHub

Citation:

For attribution in academic contexts, please cite this work as:

Shukla, et al., "The Gradient Descent Algorithm & its Variants", Towards AI, 2022

BibTeX Citation:

@article{pratik_2022,
 title={The Gradient Descent Algorithm \& its Variants},
 url={https://towardsai.net/neural-networks-with-python},
 journal={Towards AI},
 publisher={Towards AI Co.},
 author={Shukla, Pratik},
 editor={Keegan, Lauren},
 year={2022},
 month={Oct}
}

References:

  1. Gradient descent – Wikipedia

