AI Hallucinations
Author(s): Paul Ferguson, Ph.D.
Originally published on Towards AI.
Where Artificial Intelligence Meets Artificial Imagination
Image generated by Dall-E
In an age where AI can outperform humans in complex tasks, it's also spinning tales that would make Baron Munchausen blush. Large Language Models (LLMs), the crown jewels of artificial intelligence, are unintentionally becoming the world's most sophisticated liars. From courtrooms to hospitals, these digital savants are confidently sharing fiction that could rewrite case law or misdiagnose patients.
Welcome to the world of LLM hallucinations, where artificial intelligence meets artificial imagination, and the stakes couldn't be higher.
LLM hallucinations occur when an AI model generates text that is factually incorrect, nonsensical, or unrelated to the input prompt. These outputs often appear convincing and coherent, making them particularly problematic.
For example, an LLM might confidently state, "Abraham Lincoln invented the telephone in 1876 to communicate with troops during the Civil War."
This statement combines real historical figures and events but presents entirely false information as fact, showing how LLM hallucinations can blend truth and fiction in misleading ways.
Interestingly, what we call "hallucinations" are actually a fundamental aspect of how LLMs operate: these models don't retrieve stored information but generate text by predicting the most likely next word based on learned patterns.
In essence, LLMs are always "making stuff up": we just call it…
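To make that next-word-prediction point concrete, here is a minimal sketch of what a causal LLM actually computes: a probability for every candidate next token, ranked by how plausible it looks given the training data, not by whether it is true. It assumes the Hugging Face transformers library and the small GPT-2 checkpoint purely for illustration; the article does not name any particular model or toolkit.

```python
# A minimal sketch of next-token prediction (illustrative assumptions:
# the Hugging Face `transformers` library and the small GPT-2 model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Abraham Lincoln invented the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits shape: (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# Turn the scores at the final position into a probability
# distribution over every token in the vocabulary.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely continuations. The model ranks tokens by
# statistical plausibility; nothing here checks factual accuracy.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```

Whether the sampled continuation turns out to be accurate or a "hallucination," it is produced by exactly this mechanism: the model has no separate store of facts to consult, only a distribution over likely next words.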