26 Words About Neural Networks, Every AI-Savvy Leader Must Know

Last Updated on May 22, 2020 by Editorial Team

Author(s): Yannique Hecht

Artificial Intelligence

Think you can explain these? Put your knowledge to the test!

Source: NeedPix

[This is the 6th part of a series. Make sure you read about Search, Knowledge, Uncertainty, Optimization, and Machine Learning before continuing. The next topic is Language.]

Neural Networks

The best-performing AI applications have one thing in common: They are built around artificial neural networks. These human brain-inspired computing models gave rise to the recently popular deep learning techniques.

These two concepts are nothing new; in fact, they have been around for over 70 years [for more information, check out Jaspreet’s Concise History of Neural Networks].

Only recently have we been able to run such complex mathematical computations effectively, thanks to much-improved and cheaper computing power.

But what exactly is the difference between human and artificial neural networks?
And, can we make computers think like us?

To help you answer these questions, this article briefly defines and explains the main concepts and terms around the field of neural networks.

Neural Networks

Neural network: A biological network made up of actual biological neurons, as found in the brain

Neural Network

Neuron: A nerve cell that communicates with other cells via specialized connections

Artificial neural network: A computing system somewhat inspired by human neural networks, which ‘learns’ to perform tasks without being programmed with task-specific rules and where the connections between neurons are modeled as weights

Artificial Neural Network

Step function: A function that increases or decreases abruptly from one constant value to another, for example:

g(x) = 1 if x ≥ 0, else 0
Step Function

Logistic sigmoid: A mathematical function having a characteristic “S”-shaped curve or sigmoid curve, for example:

g(x) = e^x / (e^x + 1)
Logistic Sigmoid

Rectified linear unit (ReLU): An activation function, often applied in computer vision, speech recognition & deep neural nets, for example:

g(x) = max(0, x)
Rectified Linear Unit

[For more details, check out Danqing Liu’s Practical Guide to ReLU]
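The three activation functions above can be sketched in plain Python, directly from the formulas as given (a minimal illustration, not a library implementation):

```python
import math

def step(x):
    """Step function: jumps abruptly from 0 to 1 at x = 0."""
    return 1 if x >= 0 else 0

def sigmoid(x):
    """Logistic sigmoid: g(x) = e^x / (e^x + 1), an S-shaped curve in (0, 1)."""
    return math.exp(x) / (math.exp(x) + 1)

def relu(x):
    """Rectified linear unit: passes positive inputs through, zeroes negatives."""
    return max(0, x)
```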

Gradient descent: An algorithm for minimizing loss when training a neural network

Stochastic gradient descent: A variant of gradient descent that estimates the gradient from a single randomly chosen training example (or a small random subset) at each step, rather than from the full dataset

Mini-batch gradient descent: A variation of the gradient descent algorithm, splitting the training dataset into small batches, to calculate model error and update model coefficients
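To make these concrete, here is a minimal sketch of mini-batch gradient descent fitting a one-weight linear model on toy data. The shuffling supplies the stochastic element; the learning rate, epoch count, and batch size are illustrative choices, not prescribed values:

```python
import random

# Toy data generated from y = 2x, so the optimal weight is exactly 2.0
data = [(float(x), 2.0 * x) for x in range(1, 9)]

def minibatch_gd(samples, lr=0.01, epochs=200, batch_size=4):
    """Mini-batch gradient descent on a one-weight linear model y = w * x."""
    w = 0.0
    for _ in range(epochs):
        random.shuffle(samples)                       # stochastic element
        for i in range(0, len(samples), batch_size):  # split into small batches
            batch = samples[i:i + batch_size]
            # Gradient of the mean squared error with respect to w
            grad = sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad                            # step against the gradient
    return w
```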

Perceptron: A learning algorithm for supervised learning of binary classifiers, or: a single-layer neural network consisting only of input values, weights and biases, net sum, and an activation function
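The perceptron as defined above, input values combined with weights and a bias into a net sum and passed through a step activation, can be sketched as follows (the AND-gate weights in the test are hand-picked for illustration):

```python
def perceptron(inputs, weights, bias):
    """A single-layer perceptron: the net sum of weighted inputs plus a bias,
    passed through a step activation to produce a binary output."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if net >= 0 else 0
```

With weights [1.0, 1.0] and bias -1.5, the net sum crosses zero only when both inputs are 1, so this perceptron computes logical AND.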

Multilayer neural network: An artificial neural network with an input layer, an output layer, and at least one hidden layer

Multilayer Neural Network

Backpropagation: An algorithm for training neural networks with hidden layers
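At its core, backpropagation is the chain rule applied layer by layer, carrying the error from the loss at the output back to each weight. A minimal sketch for a tiny one-input, one-hidden-unit, one-output sigmoid network (illustrative only, not a general implementation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_backward(x, target, w1, w2):
    """One forward and backward pass through a 1-1-1 sigmoid network."""
    # Forward pass
    h = sigmoid(w1 * x)               # hidden activation
    y = sigmoid(w2 * h)               # output activation
    loss = 0.5 * (y - target) ** 2    # squared-error loss

    # Backward pass: chain rule, output layer first
    dy = (y - target) * y * (1 - y)   # error at the output's net input
    dw2 = dy * h                      # gradient for the output weight
    dh = dy * w2                      # error propagated back to the hidden unit
    dw1 = dh * h * (1 - h) * x        # gradient for the hidden weight
    return loss, dw1, dw2
```

The returned gradients can be checked against numerical finite differences, a standard sanity test for any backpropagation implementation.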

Deep neural network: A neural network with multiple hidden layers

Deep Neural Network

Dropout: Temporarily removing units, selected at random, from a neural network to prevent over-reliance on certain units

Dropout
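A minimal sketch of dropout as defined above. This version also rescales the surviving units by 1 / (1 - p), the common "inverted dropout" trick (an addition beyond the bare definition), so the expected activation stays unchanged during training:

```python
import random

def dropout(activations, p=0.5):
    """Zero each unit with probability p; scale survivors by 1 / (1 - p)
    so the expected value of each activation is unchanged."""
    return [0.0 if random.random() < p else a / (1 - p) for a in activations]
```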

Computer vision: Computational methods for analyzing and understanding digital images

TensorFlow: An open-source framework by Google to run machine learning, deep learning, and analytics tasks

Image convolution: Applying a filter that adds each pixel value of an image to its neighbors, weighted according to a kernel matrix
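A minimal sketch of a 2-D convolution without padding, using nested lists as a stand-in for image arrays. As in most deep learning libraries, the kernel is applied without flipping (strictly speaking, cross-correlation):

```python
def convolve(image, kernel):
    """Slide the kernel over the image and, at each position, sum the
    element-wise products of the kernel and the pixels it covers."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]
```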

Pooling: Reducing the size of input by sampling from regions in the input

Max-pooling: Pooling by choosing the maximum value in each region

Max-Pooling
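Max-pooling over non-overlapping regions can be sketched as follows (a simplification that assumes the input dimensions divide evenly by the region size):

```python
def max_pool(image, size=2):
    """Split the input into non-overlapping size-by-size regions and keep
    only the maximum value from each, shrinking the input."""
    return [
        [
            max(image[i + a][j + b] for a in range(size) for b in range(size))
            for j in range(0, len(image[0]), size)
        ]
        for i in range(0, len(image), size)
    ]
```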

Convolutional neural network: A neural network that uses convolution, usually for analyzing images

Convolutional Neural Network

Feed-forward neural network: A neural network that has connections only in one direction

Recurrent neural network: A neural network that generates output that feeds back into its own inputs

Recurrent Neural Network
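A minimal single-unit sketch of the recurrence: at each step, the new hidden state mixes the current input with the previous hidden state, which is exactly the output feeding back into the inputs. The weights here are illustrative values, not trained ones:

```python
import math

def rnn_step(x, h_prev, w_x, w_h, b):
    """One recurrent step: combine the current input with the previous
    hidden state and squash the result with tanh."""
    return math.tanh(w_x * x + w_h * h_prev + b)

def run_rnn(inputs, w_x=0.5, w_h=0.8, b=0.0):
    """Process a sequence, feeding each step's output back in as state."""
    h = 0.0
    for x in inputs:
        h = rnn_step(x, h, w_x, w_h, b)
    return h
```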

Now that you’re able to explain the most essential terms around neural networks, you’re ready to follow this rabbit hole further.

Complete your journey to becoming a fully-fledged AI-savvy leader by exploring the other remaining key topics, including Search, Knowledge, Uncertainty, Optimization, Machine Learning, and Language.

Like What You Read? Eager to Learn More?
Follow me on
Medium or LinkedIn.

About the author:
Yannique Hecht works at the intersection of strategy, customer insights, data, and innovation. While his career has been in the aviation, travel, finance, and technology industries, he is passionate about management. Yannique specializes in developing strategies for commercializing AI & machine learning products.


26 Words About Neural Networks, Every AI-Savvy Leader Must Know was originally published in Towards AI — Multidisciplinary Science Journal on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published via Towards AI
