Programming a Neural Network Tutorial: Vintage Style
Last Updated on December 11, 2023 by Editorial Team
Author(s): Adam Ross Nelson
Originally published on Towards AI.
Learning the underlying math of neural networks by studying our predecessors' ways
Among the first to describe neural networks were neurophysiologist Warren McCulloch and mathematician Walter Pitts, who proposed them in 1943 as a model for biological brains.
In 1959, Bernard Widrow and Marcian Hoff of Stanford adapted the idea to create MADALINE, the first neural network put into production to eliminate echoes in phone lines. It's still in use today! (Stanford History of Neural Networks).
Image Credit: Author's illustration created in Canva with images from Dall-E.
The code, tools, platforms, and related methods available to Widrow were nothing like those we have today. Now, a relatively simple neural network requires only a few lines of code.
As such, many, or perhaps most, tutorials focus on a much simpler approach. These simpler approaches leverage highly developed and well-engineered software libraries called packages.
Working with these packages is often called an abstracted approach because they hide much of the underlying complexity and mathematics of neural networks. As a result, users can focus more on designing and implementing the network's architecture rather than getting bogged down by the lower-level computational details.
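For instance, here is a minimal sketch of that abstracted style, assuming Keras as the package (the article itself does not name a specific library, and the XOR toy problem is an assumption for the demo):

```python
# A minimal sketch of the abstracted style, assuming Keras as the
# package. The article does not name a library; this is illustrative.
import numpy as np
from tensorflow import keras

# Tiny toy problem (an assumption for the demo): learn XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Three short calls stand in for weight initialization, forward
# propagation, backpropagation, and gradient descent.
model = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(4, activation="tanh"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.1),
              loss="binary_crossentropy")
model.fit(X, y, epochs=500, verbose=0)

print(model.predict(X, verbose=0).round())  # typically recovers [0, 1, 1, 0]
```

Notice that nothing in these lines exposes a derivative or a weight update; that is precisely the abstraction described above.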
But in the learning journey, that convenience presents a trade-off: those abstractions can make it more difficult to fully learn, understand, and appreciate the underlying math and other related principles. The short sketch below offers a taste of what stays hidden… Read the full blog for free on Medium.
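As a hedged illustration of the arithmetic those packages conceal (this is not code from the article itself), the Widrow-Hoff least-mean-squares rule that trained ADALINE, a close relative of the MADALINE mentioned above, fits in a few lines of plain NumPy:

```python
# A sketch of the hidden math: the Widrow-Hoff least-mean-squares (LMS)
# rule that trained ADALINE. Illustrative only; the toy task (logical
# AND) is an assumption, not taken from the article.
import numpy as np

rng = np.random.default_rng(0)

# Four samples with two inputs each, plus a constant bias column.
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)  # target: logical AND

w = rng.normal(scale=0.1, size=3)  # small random initial weights
lr = 0.1                           # learning rate (eta)

for _ in range(100):               # 100 passes over the data
    for x_i, y_i in zip(X, y):
        y_hat = x_i @ w                # ADALINE's output is linear
        w += lr * (y_i - y_hat) * x_i  # LMS: w <- w + eta * error * x

print((X @ w > 0.5).astype(int))   # -> [0 0 0 1]
```

Every package-based tutorial ultimately rests on updates like this one; an abstracted call to fit() runs a more elaborate version of the same loop, out of sight.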
Published via Towards AI