From Newton to Neural Networks: Why Hallucinations Remain Unsolvable
Author(s): MKWriteshere
Originally published on Towards AI.
The Mathematical Paradox at the Heart of AI's Greatest Challenge
When you marvel at a large language model's capabilities, remember that it rides on centuries-old mathematics.
Newton's discovery of the derivative laid the groundwork for backpropagation; the same principle guides every weight adjustment in a neural network today.
Place his candlelit study beside a glowing AI "brain" and the pairing reveals that hallucinations aren't a modern bug but an echo of this foundational technique.
These AI systems, designed to process and generate human-like text with remarkable fluency, are increasingly "hoist with their own petard": undone by the very mechanisms that make them powerful.
As OpenAI finds itself puzzled by rising hallucination rates in its newer models (o3 and o4), we're witnessing what many AI skeptics have long predicted: a fundamental limitation that may be inherent to the transformer architecture itself.
The connection between Newton's calculus and modern AI is more than historical trivia; it is the key to understanding why hallucinations persist as an unsolvable problem.
Neural networks fundamentally rely on optimization techniques that trace back to Newton's work on derivatives. Backpropagation, the algorithm that powers learning in these systems, is essentially the application of the chain rule of calculus to adjust weights and minimize error.
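To make that concrete, here is a minimal sketch in plain Python (my illustration, not code from the article): a single weight is fitted to one training example, with the gradient computed by hand via the chain rule and the weight nudged in the opposite direction. The function name, data, and learning rate are assumptions chosen purely for clarity.

```python
# Minimal sketch: backpropagation as the chain rule applied to one weight.
# The network (y = w * x), the data, and the learning rate are illustrative
# assumptions, not the author's code.

def train_single_weight(x, y_true, w=0.0, lr=0.1, steps=50):
    """Fit y = w * x to one example by gradient descent."""
    for _ in range(steps):
        # Forward pass
        y_pred = w * x
        loss = (y_pred - y_true) ** 2          # squared error

        # Backward pass: chain rule
        dloss_dy = 2 * (y_pred - y_true)       # dL/dy_pred
        dy_dw = x                              # dy_pred/dw
        dloss_dw = dloss_dy * dy_dw            # dL/dw = dL/dy_pred * dy_pred/dw

        # Update: step against the gradient to reduce the loss
        w -= lr * dloss_dw
    return w

# Example: learn that y = 2x from the single pair (x=3, y=6)
print(train_single_weight(x=3.0, y_true=6.0))  # converges toward 2.0
```

Every weight in a large network is adjusted by the same kind of derivative-driven step; backpropagation simply composes the chain rule across many layers and a vast number of parameters.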
This mathematical lineage reveals something profound: the…