Parametric ReLU | SELU | Activation Functions Part 2

Last Updated on July 17, 2023 by Editorial Team

Author(s): Shubham Koli

Originally published on Towards AI.

What is Parametric ReLU?

Rectified Linear Unit (ReLU) is an activation function used in neural networks. It is a popular choice among developers and researchers because it tackles the vanishing gradient problem. A problem with ReLU is that it returns zero for any negative input. So, if a neuron consistently receives negative inputs, it always outputs zero, its gradient is zero, and it stops learning. Such a neuron is considered dead. Therefore, using ReLU may leave a significant portion of the neural network doing nothing.
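As a minimal added illustration of this behavior (the helper names here are ours, not from the original article), both the ReLU output and its gradient are zero for negative pre-activations, so no learning signal flows through such a neuron:

import numpy as np

def relu(z):
    # ReLU returns z for positive inputs and 0 otherwise
    return np.maximum(0.0, z)

def relu_grad(z):
    # derivative of ReLU: 1 for z > 0, 0 for z <= 0
    return (z > 0).astype(float)

z = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(relu(z))       # [0.  0.  0.  0.5 3. ]
print(relu_grad(z))  # [0. 0. 0. 1. 1.] -> no gradient for negative inputs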

Note: You can learn more about this behavior of ReLU here.

Researchers have proposed multiple solutions to this problem. Some of them are mentioned below:

  • Leaky ReLU
  • Parametric ReLU
  • ELU
  • SELU

In this article, we discuss Parametric ReLU.

Parametric ReLU

The mathematical representation of Parametric ReLU is as follows:

f(yi) = yi, if yi > 0
f(yi) = αi yi, if yi ≤ 0

Here, yi is the input to the activation function on the i-th channel, and αi is the slope of the negative part, which is learned during training. In the case of a CNN, i indexes the channels. Learning the parameter αi boosts the model's accuracy without additional computational overhead.

Note: When αi is equal to zero, the function f behaves like ReLU. Whereas, when αi is equal to a small number (such as 0.01), the function f behaves like Leaky ReLU.

The above equation can also be represented as follows:

f(yi) = max(0, yi) + αi min(0, yi)

Using Parametric ReLU does not burden the learning of the neural network, because the number of extra parameters to learn equals the number of channels, which is small compared to the number of weights the model already has to learn. Unlike Leaky ReLU, Parametric ReLU can give a considerable boost in a model's accuracy.

If the coefficient αi is shared across different channels, we can denote it with a single α:

f(yi) = max(0, yi) + α min(0, yi)
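To make the channel-wise and shared variants concrete, here is a small added NumPy sketch (the function name, shapes, and the 0.25 initial slope are illustrative assumptions, not from the original article): a per-channel PReLU applied to a batch of feature maps, where the number of learnable slopes equals the number of channels.

import numpy as np

def prelu_channelwise(y, alpha):
    # y: feature maps of shape (batch, channels, height, width)
    # alpha: one learnable slope per channel, shape (channels,)
    a = alpha.reshape(1, -1, 1, 1)   # broadcast over batch and spatial dims
    return np.maximum(0, y) + a * np.minimum(0, y)

y = np.random.randn(8, 3, 32, 32)    # a batch of 8 three-channel feature maps
alpha = np.full(3, 0.25)             # 3 channels -> only 3 extra parameters
out = prelu_channelwise(y, alpha)
print(out.shape, alpha.size)         # (8, 3, 32, 32) 3

With a shared coefficient, alpha collapses to a single scalar and the extra parameter count drops to one.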

Parametric ReLU vs. Leaky ReLU

In this section, we compare the performance of Parametric ReLU with that of Leaky ReLU.

Leaky ReLU vs Parametric ReLU

Here, we plot Leaky ReLU with α = 0.01 and Parametric ReLU with α = 0.05. In practice, this parameter is learned by the neural network and changes accordingly.
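Since the original comparison figure is an image, the following added sketch reproduces a similar plot with the stated slopes (α = 0.01 for Leaky ReLU, α = 0.05 for Parametric ReLU); the output file name is illustrative.

import numpy as np
import matplotlib.pyplot as plt

def leaky_relu(z, alpha=0.01):
    return np.maximum(0, z) + alpha * np.minimum(0, z)

def prelu(z, alpha=0.05):
    return np.maximum(0, z) + alpha * np.minimum(0, z)

z = np.linspace(-5.0, 5.0, 200)
plt.plot(z, leaky_relu(z), label="Leaky ReLU (α = 0.01)")
plt.plot(z, prelu(z), label="Parametric ReLU (α = 0.05)")
plt.title("Leaky ReLU vs. Parametric ReLU")
plt.xlabel("Input")
plt.ylabel("Output")
plt.legend()
plt.grid(True)
plt.savefig("leaky_vs_prelu.png")   # illustrative output path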

Implementation with Python

import numpy as np

def PReLU(z, alpha):
    # equivalent to max(0, z) + alpha * min(0, z) for 0 <= alpha < 1
    return np.maximum(alpha * z, z)
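A quick usage check (an added example, not part of the original snippet):

z = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(PReLU(z, 0.05))   # [-0.1   -0.025  0.     1.     3.   ]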

Advantages:

1. Increases the accuracy of the model and converges faster compared with models using Leaky ReLU or ReLU.

Disadvantages:

1. The initial value of the parameter α has to be chosen manually, by trial and error.

2. Different applications may require different values of α, and finding a suitable value is time-consuming.

3. For every negative input, the gradient remains the same irrespective of the magnitude. This implies that during backpropagation, learning occurs equally over the whole range of negative inputs, as the short check below illustrates.
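As a small added check (not from the original article), the derivative of PReLU with respect to a negative input is always α, whatever the input's magnitude:

import numpy as np

def prelu_grad(z, alpha):
    # derivative of max(0, z) + alpha * min(0, z) with respect to z
    return np.where(z > 0, 1.0, alpha)

print(prelu_grad(np.array([-0.1, -10.0, -1000.0]), 0.05))   # [0.05 0.05 0.05]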

What is SELU?

SELU is a self-normalizing activation function. It is a variant of ELU. The main advantage of SELU is that its self-normalizing behavior keeps the outputs standardized (close to zero mean and unit variance), which means there is no need to include Batch Normalization layers.

SELU(x) = λ x, if x > 0
SELU(x) = λ α (exp(x) − 1), if x ≤ 0

Where λ and α are constants with values:

λ ≈ 1.0507

α ≈ 1.6732

Implementation with Python

# Implementation of SELU in Python
import numpy as np
import matplotlib.pyplot as plt

# initializing the constants
scale = 1.0507   # λ (lambda is a reserved word in Python)
alpha = 1.6732   # α

def SELU(x):
    # scalar SELU: λ·x for x > 0, λ·α·(exp(x) − 1) otherwise
    if x > 0:
        return scale * x
    return scale * alpha * (np.exp(x) - 1)

x = np.linspace(-5.0, 5.0)
result = []
for i in x:
    result.append(SELU(i))

plt.plot(x, result)
plt.title("SELU activation function")
plt.xlabel("Input")
plt.ylabel("Output")
plt.grid(True)
plt.savefig('output/selu_plot.png')

What is normalization?

SELU is known to be a self-normalizing function, but what is normalization?

Normalization is a data preparation technique that involves changing the values of numeric columns in a dataset to a common scale. This is usually used when the attributes of the dataset have different ranges.

There are three types of normalization:

  1. Input normalization: One example is scaling the pixel values of grey-scale photographs (0–255) to values between zero and one.
  2. Batch normalization: Values are changed between each layer of the network so that their mean is zero and their standard deviation is one.
  3. Internal normalization: This is where SELU's magic happens. The key idea is that each layer preserves the previous layer's mean and variance, keeping the activations close to zero mean and unit variance, as the sketch below illustrates.
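To illustrate this self-normalizing behavior, here is a small added sketch (an illustrative addition, not from the original article): standardized random inputs are pushed through several dense layers with LeCun-normal weights, which is the initialization the SELU derivation assumes, and the mean and standard deviation of the activations stay close to 0 and 1.

import numpy as np

rng = np.random.default_rng(0)

def selu(x, scale=1.0507, alpha=1.6732):
    return np.where(x > 0, scale * x, scale * alpha * (np.exp(x) - 1))

# standardized input batch: 1024 samples, 256 features
x = rng.normal(0.0, 1.0, size=(1024, 256))

for layer in range(5):
    # LeCun-normal initialization: std = 1 / sqrt(fan_in)
    w = rng.normal(0.0, np.sqrt(1.0 / x.shape[1]), size=(x.shape[1], 256))
    x = selu(x @ w)
    print(f"layer {layer + 1}: mean = {x.mean():+.3f}, std = {x.std():.3f}")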

Advantages of SELU

  1. Like ReLU, SELU does not suffer from the vanishing gradient problem and hence can be used in deep neural networks.
  2. Compared to ReLUs, SELUs cannot die.
  3. SELUs learn faster and better than other activation functions without needing further processing. Moreover, other activation functions combined with batch normalization cannot compete with SELUs.

Disadvantages of SELU

  1. SELU is a relatively new activation function, so it is not yet widely used in practice; ReLU remains the preferred option.
  2. More research on architectures such as CNNs and RNNs using SELUs is needed for widespread industry use.

"Activation Functions" in Deep Learning Models. How to Choose? Sigmoid, tanh, Softmax, ReLU, Leaky ReLU explained (medium.com)

The Dying ReLU Problem, Causes and Solutions: Keep your neural network alive by understanding the downsides of ReLU (medium.com)

If you liked this blog, leave your thoughts and feedback in the comments section. See you again in the next interesting read!

😀 Happy Learning! 👏

Until Next Time, Take care!


Published via Towards AI
