Single Input Neuron — What is it?
Last Updated on July 25, 2023 by Editorial Team
Author(s): Sujeeth Kumaravel
Originally published on Towards AI.
A neuron is a system that takes an input, performs a computation on it, and produces an output. The following is the block diagram of a neuron that takes a single scalar value x as input and outputs another single scalar value y. The explanation of the computation relating the input to the output follows the diagram.
The input x is multiplied by a number w, called the weight, to give the product wx. The other input to the add block is 1 (one), which is multiplied by another number b, called the bias; that product is, of course, just b. These two products, wx and b, are then added to give wx + b.
The function f in the next block takes in wx + b and outputs f(wx + b). f is called the transfer function, also referred to as the activation function. There are multiple choices for this function depending on the task.
w and b are parameters of the neuron that can be adjusted according to a learning rule which adjusts the parameters to achieve a certain goal depending on the task at hand. f is a function that can be selected manually according to the task.
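The computation described so far can be sketched in a few lines of Python. The specific values and the choice of sigmoid as the transfer function are illustrative assumptions, not taken from the article:

```python
import math

def neuron(x, w, b, f):
    """Single-input neuron: multiply by the weight w, add the bias b,
    then pass the result through the transfer function f."""
    return f(w * x + b)

# The sigmoid is one common choice of transfer function.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical values chosen so the pre-activation wx + b is 0.
y = neuron(2.0, w=0.5, b=-1.0, f=sigmoid)  # sigmoid(0.5*2.0 - 1.0) = sigmoid(0.0)
print(y)  # 0.5
```

Swapping in a different f (for example, a ReLU or the identity) changes the output without touching the rest of the computation, which is why f is chosen per task while w and b are learned.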
One way of looking at the above computation is as a dot product between a parameter vector and an augmented input vector:

y = f([w, b] · [x, 1]) = f(wx + b)
So, the dot product between the parameter vector and input vector is taken, and a function is applied to it and given as output. The dot product between two vectors is the projection of one vector on the other. It can also be interpreted as the similarity between the two vectors.
The dot product between two vectors can also be thought of as the correlation between them, or as the degree to which one vector agrees with the other. So there are four readings of the dot product: the projection of one vector onto the other, the similarity between the two vectors, the correlation between them, and the amount one vector agrees with the other.
So, a neuron maps the similarity between the parameter vector and the input vector — equivalently, the amount the parameter vector agrees with the input vector — to the output through the transfer function.
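The dot-product view can be checked numerically: the dot product of the parameter vector [w, b] with the augmented input [x, 1] equals wx + b. The values below are hypothetical, chosen only for illustration:

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

w, b = 0.5, -1.0
x = 2.0

params = [w, b]      # parameter vector
inputs = [x, 1.0]    # input vector augmented with the constant 1

pre_activation = dot(params, inputs)
print(pre_activation == w * x + b)  # True: the two forms are identical
```

This is why the bias can be folded into the weight vector: appending a constant 1 to the input turns the affine map wx + b into a single dot product.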
In neural network terminology, the output y is called a representation of the input x. The output y is how the parameter vector and the function f look at x. In other words, y is x seen through the eyes of the parameter vector and f.
Let’s look more at the journey the input x takes to becoming the output y:
First, the input x is mapped from a 1-dimensional topological vector space to a 2-dimensional one. The 1-D space can be pictured as the number line, and the input x lives in this 1-D space. Before being fed into the add block of the neuron, the 1-D input x is mapped into the 2-D space, where it is represented as the vector [x, 1].
Then the dot product between the parameter vector [w, b] and the input vector [x, 1] is taken. This dot product again lives in the 1-D space, since it is a scalar. It is then mapped through the function f into the output y, also in the 1-D space.
Overall, the input x gets mapped from one location in the 1D space to another location y.
Thus a neuron is a filter through which the input x passes, and the filtered output is y.
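The journey described above can be traced step by step in Python. The identity transfer function and the specific numbers are illustrative assumptions:

```python
def journey(x, w, b, f):
    """Trace the input's journey through the neuron, step by step."""
    # Step 1: lift the 1-D input into the 2-D space by appending the constant 1.
    v = [x, 1.0]
    # Step 2: dot product with the parameter vector [w, b] -- a scalar, back in 1-D.
    z = w * v[0] + b * v[1]
    # Step 3: map the scalar through the transfer function to get the output y.
    return f(z)

# The identity transfer function keeps the arithmetic visible.
y = journey(3.0, w=2.0, b=1.0, f=lambda z: z)
print(y)  # 7.0
```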
The neuron is also a computational graph. A computational graph is one in which each node represents a computation and edges represent the direction of data flow.
Usually, the two computational nodes — the add block and the transfer-function block — are represented as a single node.
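The computational-graph view can be sketched as a minimal data structure. The node names and the ReLU choice of f here are illustrative assumptions:

```python
# A minimal computational-graph sketch: each node is a computation,
# each edge gives the direction of data flow.
graph = {
    "nodes": {
        "affine": lambda x, w, b: w * x + b,   # computes wx + b
        "transfer": lambda z: max(0.0, z),     # ReLU as an example choice of f
    },
    "edges": [("input", "affine"), ("affine", "transfer"), ("transfer", "output")],
}

# Data flows along the edges: input -> affine -> transfer -> output.
z = graph["nodes"]["affine"](2.0, 0.5, -1.0)
y = graph["nodes"]["transfer"](z)
print(y)  # 0.0, since ReLU clamps the pre-activation 0.0 at 0.0
```

Frameworks such as PyTorch and TensorFlow build on exactly this idea, representing entire networks as graphs of computational nodes.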
Multiple inputs can also be passed into a neuron. Neurons can be connected to form a network that can do powerful operations. More about this in later posts.
Signing off now!