Implement Your First Artificial Neuron From Scratch

Last Updated on November 14, 2021 by Editorial Team

Author(s): Satya Ganesh

Understand, implement, and visualize an artificial neuron from scratch using Python.

Photo by jesse orrico on Unsplash

Introduction

Warren McCulloch and Walter Pitts first proposed the artificial neuron in 1943. It is a highly simplified computational model whose behavior resembles that of the neurons in the human brain. Before we dig deeper into the concept of artificial neurons, let us take a look at the biological neuron.

image source: Wikipedia

Biological Neuron

A biological neuron receives input signals through its dendrites and sends them to the soma, or cell body, which is the processing unit of the neuron. The processed signal is then carried through the axon to other neurons. The junction where two neurons meet is called a synapse, and the strength of a synapse determines the strength of the signal carried to other neurons.

Artificial Neuron: McCulloch-Pitts Neuron (MP Neuron)

The McCulloch-Pitts Neuron model is also called a Threshold Logic Unit (TLU) or Linear Threshold Unit, because the output value of the neuron depends on a threshold value. This artificial neuron is inspired by the working of a biological neuron; it is structured and meant to behave similarly to one.

image by the author

Input Vector [x₁ x₂ x₃ … xₙ]: in an artificial neuron, the input vector acts as the inputs to the neuron; it behaves similarly to the dendrites in a biological neuron.

Function f(x): in an artificial neuron, the function f(x) is a summation function; it behaves similarly to the soma in a biological neuron.

A. Model of MP Neuron

We can define the model as an approximation of the true relationship between the dependent and independent variables.

[x₁ x₂ x₃ … xₙ]: inputs or attributes of the MP Neuron Model.

b: the threshold value, which is the only parameter in the MP Neuron Model.

The function g(x) performs a summation of all the inputs, and the function f(x) applies a threshold to the output returned by g(x). The value returned by f(x) is a boolean value: if the summation of the inputs is greater than or equal to the fixed threshold (b), the neuron is activated (it fires); otherwise, it is not activated.

Things you need to know about the MP Neuron

  1. It takes only binary data, i.e., the input vector X ∈ {0,1}.
  2. The task performed by the neuron is binary classification, i.e., Y ∈ {0,1}.

Geometrical Interpretation of the MP Neuron

For simplicity, let us consider a dataset with only two features, so the input vector looks like x = [x₁ x₂]. The neuron outputs 1 when x₁ + x₂ ≥ b, so the decision boundary is the line x₁ + x₂ = b, which can be rewritten as x₂ = -x₁ + b … (1)

Converting it into the form of a linear equation, i.e., y = m*x + c … (2)

Comparing equations (1) and (2), we find that:

The slope of the line is m = -1 (it is fixed for any dataset).

The y-intercept of the line is c = b (the only thing we can change to tune the model).

Note: all the points that lie on or above the line are classified as positive (1), and all the points that lie below the line are classified as negative (0).

When we plot the line x₂ = -x₁ + 1 (i.e., with b = 1):

image by the author

Summary:

  1. All the points above the line (green points) are classified as positive.
  2. All the points below the line (red points) are classified as negative.
  3. The MP Neuron model works only when the points are linearly separable.
  4. The slope of the line in the MP Neuron model is fixed at -1.
  5. The only thing we can change is the value of the y-intercept (b).

B. Loss function

Loss is the error the model incurs during its training phase; here it can be calculated using the mean squared error loss function.
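As a rough sketch (not necessarily the article's exact code), the squared error loss for binary predictions can be computed like this, where y_true and y_pred are arrays of 0/1 labels:

import numpy as np

def mse_loss(y_true, y_pred):
    # Mean of the squared differences between actual and predicted labels
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)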

C. Optimization Algorithm

Since the only parameter in the model is b, we need to choose the value of b that minimizes the loss.

We can choose the value of b using a brute-force approach, since b lies in the range [0, n], where n is the number of features in the data.

  1. The summation takes its minimum value, 0, when the feature vector looks like [x₁ x₂ x₃ … xₙ] = [0 0 0 … 0].
  2. The summation takes its maximum value, n, when the feature vector looks like [x₁ x₂ x₃ … xₙ] = [1 1 1 … 1].

Since the value of b lies in [0, n], we can compute the loss the model incurs for each candidate value of b and pick the best one.
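A minimal sketch of this brute-force search, assuming a binarized 0/1 feature matrix X, labels Y, and a hypothetical helper mp_neuron(x, b) that fires when the input sum reaches the threshold:

import numpy as np

def mp_neuron(x, b):
    # Fire (output 1) when the sum of the binary inputs reaches the threshold b
    return int(np.sum(x) >= b)

def find_best_threshold(X, Y):
    best_b, lowest_loss = 0, float('inf')
    for b in range(X.shape[1] + 1):              # b ranges from 0 to n
        y_pred = np.array([mp_neuron(x, b) for x in X])
        loss = np.sum((Y - y_pred) ** 2)         # squared error loss for this b
        if loss < lowest_loss:
            best_b, lowest_loss = b, loss
    return best_b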

D. Evaluation

We can evaluate the performance of the model using accuracy, i.e., the number of correct predictions divided by the total number of predictions.
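In code, this is just the fraction of correct predictions; for example, assuming y_true and y_pred hold the actual and predicted labels:

from sklearn.metrics import accuracy_score

# accuracy = number of correct predictions / total number of predictions
accuracy = accuracy_score(y_true, y_pred)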

E. Limitations of the MP Neuron Model

  • The model accepts data only in the form {0,1}; we cannot feed it real values.
  • It can only be used for binary classification.
  • It performs well only when the data is linearly separable.
  • The line equation has a fixed slope, so there is no flexibility in changing the slope of the line.
  • We cannot judge which feature is more important, and we cannot give priority to any feature.
  • The learning algorithm is not very sophisticated; we use a brute-force approach to find the threshold value.

Let's Code…

Data Requirements

We use the breast cancer dataset from the sklearn datasets package; our task is to predict whether a person has cancer or not based on the data provided.

A. Importing essential libraries
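The original code snippet is not reproduced here; a typical set of imports for this walkthrough would look like:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score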

B. LoadingΒ Data
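For example, loading the breast cancer dataset and wrapping the features in a DataFrame (a sketch, since the original snippet is not reproduced here):

# Load the dataset and separate features (X) from labels (Y)
breast_cancer = load_breast_cancer()
X = pd.DataFrame(breast_cancer.data, columns=breast_cancer.feature_names)
Y = breast_cancer.target   # 1 = benign, 0 = malignant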

C. Visualizing Data
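One way to produce the kind of plot described below (a sketch; the article's exact plotting code may differ):

# Plot every sample's feature values; the data is clearly real-valued, not binary
plt.figure(figsize=(12, 6))
plt.plot(X.T.values, '*')
plt.xticks(range(X.shape[1]), X.columns, rotation='vertical')
plt.show()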

image by the author (output of the above code snippet)

From the above plot, we infer that the data is not in binary form, but the MP Neuron model requires data to be in binary form, that is {0,1}. So let's convert this data into binary form. Before that, we need to split the data into train and test sets.

D. Splitting Data into train and test data
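A possible split, holding out a small test set (the exact ratio used in the article may differ):

# Stratify on Y so the train and test sets keep a similar class balance
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.1, stratify=Y, random_state=1)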

E. Binning data

Here we convert the data into zeros and ones to make it compatible with the MP Neuron model; we do this using the cut method from the pandas library.
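A sketch of that binning step, using pandas.cut to split every column into two bins labelled 0 and 1:

# Each feature is cut into two equal-width bins; low values map to 0, high values to 1
X_binarised_train = X_train.apply(pd.cut, bins=2, labels=[0, 1]).astype(int).values
X_binarised_test = X_test.apply(pd.cut, bins=2, labels=[0, 1]).astype(int).values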

image by the author (data after binning)

F. Defining the MP Neuron
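A compact sketch of the model as a small class; fit() performs the brute-force search over b described earlier (this may differ from the article's exact implementation):

class MPNeuron:
    def __init__(self):
        self.b = None

    def model(self, x):
        # Fire when the sum of the binary inputs reaches the threshold b
        return int(np.sum(x) >= self.b)

    def predict(self, X):
        return np.array([self.model(x) for x in X])

    def fit(self, X, Y):
        # Try every threshold from 0 to n and keep the most accurate one
        accuracies = {}
        for b in range(X.shape[1] + 1):
            self.b = b
            accuracies[b] = accuracy_score(Y, self.predict(X))
        self.b = max(accuracies, key=accuracies.get)
        return accuracies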

G. Evaluation
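Assuming the MPNeuron class and binarized data from the sketches above, training and evaluating on the training set could look like:

mp_neuron = MPNeuron()
accuracies = mp_neuron.fit(X_binarised_train, Y_train)
print('Best threshold b =', mp_neuron.b)
print('Training accuracy =', accuracies[mp_neuron.b])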

image by the author (output of the above code snippet)

On the training data, the performance of the model is pretty good, i.e., 84.6%. Let's visualize the performance of the model for different values of b.
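Continuing the same sketch, that plot could be produced like this:

# Plot training accuracy for every candidate value of b
plt.plot(list(accuracies.keys()), list(accuracies.values()))
plt.xlabel('threshold b')
plt.ylabel('training accuracy')
plt.show()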

image by the author

Performance on test data
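And, continuing the same sketch, on the held-out test set:

Y_test_pred = mp_neuron.predict(X_binarised_test)
print('Test accuracy =', accuracy_score(Y_test, Y_test_pred))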

image by the author

Conclusion

We built a very simplified computational model of a biological neuron and got 78% accuracy on the test data, which is not bad for such a simple model. I hope you learned something new from this article.


Thanks for reading 😃 Have a nice day
