AI Hallucinations: Why Large Language Models Make Up Information and How to Address It
Last Updated on February 3, 2025 by Editorial Team
Author(s): Rohan Rao
Originally published on Towards AI.
Photo by Julien Tromeur on Unsplash

I was going through a few basic topics in AI and suddenly came across "AI hallucinations". So let's talk about it today.
Just like humans have hallucinations, AI has hallucinations too! They are competing with us in every possible way.
It all comes down to how LLMs generate their output. When an LLM produces information that appears plausible but is factually incorrect or fabricated, presented as if it were true, we call it an "AI hallucination".
What exactly happens? AI hallucinations range from small factual inaccuracies to confident assertions that sound true but are not. Because they read just as fluently as correct answers, they are challenging to detect and manage.
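Since a single response gives you no signal about whether it is hallucinated, one lightweight heuristic is to ask the model the same question several times and check whether the answers agree. Here is a minimal sketch of that idea in Python; the generate() function is a hypothetical stand-in for whatever model or API you actually use, and this is an illustration of the concept, not a production detector.

```python
from collections import Counter

def generate(prompt: str) -> str:
    """Hypothetical LLM call. Replace with your model or API of choice."""
    raise NotImplementedError

def self_consistency_check(prompt: str, n_samples: int = 5) -> tuple[str, float]:
    """Sample the model n_samples times and return the most common answer
    plus the fraction of samples that agree with it."""
    answers = [generate(prompt).strip().lower() for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / n_samples

# Usage idea: if agreement is low (say, below 0.6), treat the answer as a
# possible hallucination and fall back to retrieval or a human review.
```

The design choice here is deliberate simplicity: disagreement across samples does not prove a hallucination, but high agreement on a fabricated fact is rarer than high agreement on a memorized true one, so it is a cheap first filter.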
As we all know, LLMs require data to learn and identify patterns, and not hundreds or thousands of examples but millions. Working with hundreds or thousands of examples is fine when you are just starting to learn, but not for large-scale research.
Since LLMs learn by identifying statistical patterns across enormous amounts of data, they don't truly understand what those patterns mean or whether they are accurate. It all comes down to numeric calculations.
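To make "numeric calculations" concrete: at each step the model turns a vector of scores (logits) into a probability distribution over possible next tokens and picks from it. The toy tokens and logit values below are invented purely for illustration; the point is that nothing in this arithmetic checks whether the most probable token is factually true.

```python
import numpy as np

# Toy illustration: next-token prediction is just probability arithmetic.
# The tokens and logit values here are made up for the example.
tokens = ["Paris", "Lyon", "Berlin", "Atlantis"]
logits = np.array([4.1, 2.3, 1.0, 0.5])   # raw scores from the model

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()

for token, p in zip(tokens, probs):
    print(f"{token:>8}: {p:.3f}")

# The model samples (or takes the argmax) from this distribution. Nothing in
# the calculation verifies factual correctness, which is why a fluent but
# wrong continuation can still be the "most likely" one.
```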
If training data contains inconsistencies, it is highly…
Published via Towards AI