Exploring Activation Functions, Loss Functions, and Optimization Algorithms
Last Updated on September 18, 2024 by Editorial Team
Author(s): Ali
Originally published on Towards AI.
A beginner-friendly overview
Figure: Neural Network (source: author)
When building deep learning models, activation functions, loss functions, and optimization algorithms are crucial components that directly impact performance and accuracy.
Without the right choices, your model will likely produce unpredictable results or fail to learn at all.
Whether you are new to deep learning or have been practicing it for some time, this blog is for you.
In this blog, we will go through the important activation functions, loss functions, and optimization algorithms you will come across.
If you have already been practicing deep learning for a while, it can also serve as a quick reference for when to choose a particular function.
Please note that we won't be deep-diving into the mathematical equations; this is more of an overview. I will be posting deep dives soon.
As we know, deep learning models are made up of layers of perceptrons (neurons with learnable weights).
These weights are initialized randomly at the start. During training, the learning algorithm adjusts them iteratively.
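As a concrete illustration, here is a minimal sketch of one such layer in NumPy. The layer sizes, the ReLU activation, and names like `W` and `b` are illustrative assumptions, not something specified in this blog:

```python
import numpy as np

rng = np.random.default_rng(0)

# One perceptron layer: a weight matrix and a bias vector, initialized randomly.
n_inputs, n_outputs = 4, 3
W = rng.normal(scale=0.1, size=(n_inputs, n_outputs))  # random starting weights
b = np.zeros(n_outputs)                                # biases often start at zero

def layer(x):
    # Weighted sum of the inputs followed by a ReLU activation.
    return np.maximum(0.0, x @ W + b)

x = rng.normal(size=n_inputs)
print(layer(x))  # the layer's output for one input vector
```

Training then consists of repeatedly updating `W` and `b` so the network's outputs get closer to the targets.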
Figure: Neuron (source: author)
To learn these weights, there needs to be some signal of whether the model is going in the right direction.
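To make that signal concrete, here is a minimal sketch of gradient descent on a single linear neuron with a mean squared error loss. The synthetic data, learning rate, and variable names are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: inputs and targets generated by a known linear rule.
X = rng.normal(size=(32, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w

w = rng.normal(size=4)   # randomly initialized weights
lr = 0.1                 # learning rate

for step in range(100):
    pred = X @ w
    loss = np.mean((pred - y) ** 2)        # the signal: how wrong the model is
    grad = 2 * X.T @ (pred - y) / len(X)   # gradient of the loss w.r.t. w
    w -= lr * grad                         # nudge weights against the gradient

print(loss, w)  # the loss shrinks and w approaches true_w
```

The loss function provides the signal, and the optimization algorithm decides how to move the weights in response to it; that is exactly the division of labor this blog walks through.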