
Deep Dive Into Neural Networks

Last Updated on March 7, 2021 by Editorial Team

Author(s): Amit Griyaghey

Deep Learning

Photo by Drew Graham on Unsplash

Apart from well-known applications like image and voice recognition, neural networks are used in many contexts to find complex patterns in very large data sets: for example, when an email engine suggests sentence completions, or when a machine translates one language to another. To solve such complex problems, we use artificial neural networks.

In this article, we will cover:

  • Introduction to neural nets
  • Purpose of using neural networks
  • Neural network architecture
  • Evaluation of neurons
  • Activation functions and a few common types
  • Backpropagation for estimating weights and biases

Introduction

An Artificial Neural Network (ANN) is often called a black-box technique because it can be hard to understand what it is doing. At its core, it is a sequence of mathematical calculations, best visualized as a network of connected nodes.

Source: Wikimedia Commons, licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

An ANN is vaguely inspired by the biological neural networks that constitute the human brain. The brain uses a network of interconnected cells called neurons to provide learning capabilities; likewise, an ANN uses a network of artificial neurons, or nodes, to solve challenging learning problems.

Source: Wikimedia Commons, licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

The human brain’s neural network, showing information flowing in and out through neurons

Artificial Neural Network

Why Learn Neural Networks?

  • Ability to learn: neural networks can perform their function on their own.
  • Ability to generalize: they can produce reasonable output for inputs they have not been explicitly taught to handle.
  • Adaptivity: they can easily be retrained to suit changing environmental conditions.

Neural Network Architecture

A neural network consists of three types of layers, shown in the figure below:

  • Input layer
  • Hidden layer
  • Output layer

In practice, neural networks have more than one input node and generally more than one output node. Between these two sits a spider web of connections between each layer of nodes; these middle layers are the hidden layers. When we build a neural network, the first thing to decide is how many hidden layers we want.
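The layered structure described above can be sketched in a few lines of plain Python. This is a minimal illustration, not a production implementation: the layer sizes, weights, and biases below are made-up values chosen only to show how data flows from input nodes through a hidden layer to an output node.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_w, hidden_b, out_w, out_b):
    """Forward pass through one hidden layer: input -> hidden -> output."""
    # Each hidden node sums its weighted inputs, adds its bias,
    # and applies an activation function (sigmoid here).
    hidden = []
    for w_row, b in zip(hidden_w, hidden_b):
        z = sum(w * xi for w, xi in zip(w_row, x)) + b
        hidden.append(sigmoid(z))
    # The output node repeats the same computation over the hidden activations.
    z = sum(w * h for w, h in zip(out_w, hidden)) + out_b
    return sigmoid(z)

# Two inputs, two hidden nodes, one output (illustrative weights).
y = forward([0.5, -1.0],
            hidden_w=[[0.1, 0.8], [0.4, -0.2]],
            hidden_b=[0.0, 0.0],
            out_w=[0.3, 0.9], out_b=0.1)
print(0.0 < y < 1.0)  # True: a sigmoid output always lies strictly between 0 and 1
```

Adding more hidden layers simply repeats the same sum-then-activate step once per layer.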

Source: Image by the Author

A neuron is an information-processing unit that is fundamental to the operation of a neural network. The neuron model has three basic elements:

  • Synaptic weights
  • A combination (summation) function
  • An activation function

An external bias can be applied to increase or lower the net input to the activation function.

Evaluation of a Neuron

Source: Image by the Author

  • w: weights
  • n: number of inputs
  • xi: inputs
  • f(x): activation function
  • y(x): output

This evaluation can be understood with a simple example. Say an input node holds a value x1. We pass this value from the input node to the hidden layer through the synapses of the network; each synapse, or connection, carries a synaptic weight w. The input value x1 is multiplied by its synaptic weight and fed into the hidden-layer node, where the activation function f(x) does its work. Now let’s understand what an activation function is.
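The evaluation just described is a one-liner in code: multiply each input by its weight, sum the products, add the bias, and pass the result through the activation function. The weights, bias, and inputs below are made-up values used only to trace the arithmetic.

```python
def neuron(x, w, b, f):
    """Evaluate one artificial neuron: y = f(sum_i(w_i * x_i) + b)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b  # weighted sum plus bias
    return f(z)

def relu(z):
    return max(z, 0.0)

y = neuron(x=[2.0, -1.0], w=[0.5, 0.25], b=0.1, f=relu)
# z = 2.0*0.5 + (-1.0)*0.25 + 0.1 = 0.85, so y = max(0.85, 0) = 0.85
print(y)  # 0.85
```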

Activation Function

The inputs arrive at a node as a linear combination, and we apply a function to that combination to produce the desired output; this function is called the activation function. When we build a neural network, we have to decide which activation function to use. Commonly used activation functions include:

  • ReLU (Rectified Linear Unit): returns zero for negative inputs and passes positive values through unchanged, using the function y = max(z, 0), where z is the weighted input received at the hidden-layer node.
Source: Image by the Author

  • Sigmoid function: the output values range between zero and one, evaluated by f(z) = 1 / (1 + e^(-z)).

Source: Image by the Author
  • Hyperbolic tangent function: also called the tanh function, a shifted and scaled version of the sigmoid with a wider range of outputs, from -1 to +1.
Source: Image by the Author
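The three activation functions above are small enough to implement directly, which makes their ranges easy to check. This sketch uses only the Python standard library:

```python
import math

def relu(z):
    # Zero for negative inputs, identity for positive inputs: max(z, 0).
    return max(z, 0.0)

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    # Shifted, scaled sigmoid with outputs in the open interval (-1, 1).
    return math.tanh(z)

print(relu(-3.0), relu(2.5))   # 0.0 2.5
print(sigmoid(0.0))            # 0.5
print(tanh(0.0))               # 0.0
```

Note the symmetry at z = 0: the sigmoid is centered at 0.5, while tanh is centered at 0, which is what makes tanh a "shifted version" of the sigmoid.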

Among these, ReLU is the most widely used activation function; its main role is to transform the weighted inputs into a useful output.

Backpropagation

All the neurons in a network may start with the same activation function, but different weights and biases on the connections flip and stretch that function into new shapes fitted to the data. Backpropagation estimates the weights and biases in the neural net in two steps:

  1. use the chain rule to calculate the derivative of the loss with respect to each weight and bias, and
  2. plug those derivatives into gradient descent to optimize the parameters.
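The two steps above can be demonstrated on the smallest possible network: a single sigmoid neuron with one input, trained with squared-error loss. The input, target, and learning rate below are made-up values for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One neuron, one input; squared-error loss L = (y - t)^2.
x, t = 1.5, 1.0   # training input and target (illustrative values)
w, b = 0.0, 0.0   # parameters to be estimated
lr = 0.5          # learning rate

for _ in range(200):
    z = w * x + b
    y = sigmoid(z)
    # Step 1 - chain rule: dL/dw = dL/dy * dy/dz * dz/dw (and dz/db = 1).
    dL_dy = 2.0 * (y - t)
    dy_dz = y * (1.0 - y)          # derivative of the sigmoid
    # Step 2 - gradient descent: move each parameter against its gradient.
    w -= lr * dL_dy * dy_dz * x
    b -= lr * dL_dy * dy_dz

print(abs(sigmoid(w * x + b) - t) < 0.1)  # True: the prediction approaches the target
```

In a multi-layer network, the chain rule is applied layer by layer from the output backwards, which is where the name backpropagation comes from.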

Uses of Neural Networks

  • Used in voice recognition
  • Facial recognition
  • Fraud detection
  • Sentiment Analysis
  • Image search in social media

Conclusion

This article covered the concepts of an artificial neural network, its common activation functions, its uses, and how it estimates its parameter values. It also briefly explained how it resembles the human brain’s neural system.

Thanks for reading!



Deep Dive Into Neural Networks was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
