Hallucination Has a Twin Brother You Probably Never Heard About
Author(s): Pawel Rzeszucinski, PhD
Originally published on Towards AI.
Introduction
Artificial Intelligence (AI) has increasingly become a part of our daily lives, offering tools that extend our capabilities and streamline our work. Among these, solutions like Copilot have emerged as supposedly invaluable assistants for increasing our productivity. Are they there yet? While doing research for my last article, titled "AI, Humans, and Loops", I stumbled upon an unexpected occurrence: a brief change of topic in the middle of an answer to my question. See below:
"Human oversight in AI systems can also help build trust by ensuring transparency in the system's operation. Unlike HITL, where a human can halt systems, HOTL allows humans to maintain meaningful control or interaction. The rise of nationalism in various regions within the empire, notably the Serbs, Greeks, and Bulgarians, seeking autonomy, significantly weakened the empire's control over its territories. This balance between leveraging the benefits of AI and ensuring alignment with human values and societal norms is a key aspect of the HOTL concept."
This event piqued my curiosity, leading me to explore the nuances of so-called digressions within the context of Large Language Models (LLMs) and how they differ from the more commonly discussed phenomenon of hallucinations.
In the next section, I'll delve into the specifics of how I discovered the digression while working on my original article, setting the stage for a deeper exploration of the differences between digressions and hallucinations in the world of LLMs.
Discovering the digression
Digressions, as I learned, are deviations from the main topic, offering additional, albeit sometimes unrelated, insights. They are different from hallucinations, which are outright inaccuracies or fabrications by the model. With the help of ChatGPT, I delved deeper into the nature of that confusing paragraph, understanding how and why these digressions happen.
While crafting my article on "AI, Humans, and Loops," I leaned heavily on Copilot as a research assistant. I've been using ChatGPT for over a year now, and having recently been given access to Copilot, I was keen to put it to the test. It was during one of these sessions, deep in the creative process, that I stumbled upon an unexpected shift in the narrative. As you can tell for yourself, this deviation wasn't just a slight veer off course; it was an exploration into a topic completely unrelated to the main subject. Initially, this discovery was met with surprise. The digression, nestled within a section meant to focus on AI, instead offered a glimpse into a geopolitical subject.
Curious about this occurrence, I began to question the nature of digressions in the realm of AI-generated content. How could a tool designed to assist in writing veer off so significantly from the intended path? This question led me to initiate a dialogue with ChatGPT, seeking to understand the mechanics behind such deviations and the difference between these digressions and the hallucinations often discussed in the context of language models.
Digressions vs. hallucinations
Digressions and hallucinations, though occasionally confused, signify distinct phenomena in the context of LLMs. A digression, as I discovered, is a departure from the main topic or argument, offering additional insights or exploring related themes. This can enrich the content but also lead to unintended diversions. In contrast, hallucinations refer to instances where the model generates false information, inaccuracies, or fabricates data not grounded in reality.
Digression: model momentarily diverges from the primary subject or question to explore or mention unrelated or only loosely related information, before eventually getting back on track.
Hallucination: model generates information that is factually incorrect or fabricates details that have no basis in the input provided or in reality.
Digressions? Ok… but why?
My interaction with ChatGPT shed light on the nature of digressions. Through a detailed discussion, I gained insights into how these deviations occur.
First, I was interested in learning how a discussion on AI could suddenly shift to nationalism and empires. While ChatGPT offered a couple of potential explanations, the one that appealed to me the most was the following:
Associative Connections: The model might draw associative links between concepts that are not directly relevant to the main topic. In this case, the mention of "control over territories" in the context of AI oversight could have inadvertently triggered associations with historical or geopolitical content in the model's training data, leading to the unexpected introduction of nationalism and empire control.
Makes sense, right? Since LLMs generate text by predicting the next most probable word in an autoregressive manner, it is quite possible that a couple of words and phrases with a rather universal meaning could derail the main chain of thought and start a tangential topic.
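To make that intuition concrete, here is a minimal sketch of greedy autoregressive decoding. It assumes the Hugging Face transformers library and the small gpt2 checkpoint (my choices for illustration, not what Copilot actually runs on), and the prompt is made up. Each step feeds everything generated so far back into the model and appends the single most probable next token, which is exactly the loop where a few ambiguous words can tip the continuation toward an unrelated topic.

```python
# Minimal sketch of greedy autoregressive decoding.
# Assumptions: `transformers` and `torch` are installed; "gpt2" is used
# purely for illustration; the prompt is a hypothetical example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Human oversight in AI systems helps maintain control over"
input_ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(20):                       # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()      # greedy: single most probable next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Nothing in this loop "knows" about the article's topic as a whole; it only sees the token sequence so far, which is why a phrase like "control over" can pull the continuation in an unintended direction.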
If so, however, how did the model come back on track and continue the answer in the original context of AI?
Contextual Cues: LLMs are trained on vast datasets and develop an ability to follow contextual cues. The paragraph likely contained enough contextual information about AI and human oversight before the digression that helped the model to return to the main topic. Keywords or phrases related to AI and oversight could have acted as anchors, guiding the model back to the original theme.
So even though it wandered off for a sentence or two, the general context of the previously generated response, as well as the overall subject in question, was strong enough to pull the chain of thought back onto the main track.
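One rough way to see those "anchors" at work is to score candidate continuations against the surrounding context. The sketch below, under the same assumptions as before (Hugging Face transformers, the gpt2 checkpoint, and example sentences I made up), compares the average log-probability of an on-topic and an off-topic continuation given an AI-oversight prompt. The on-topic one should typically score higher, and that statistical pull is what drags the generation back to the main theme.

```python
# Minimal sketch: how preceding context shifts continuation probabilities.
# Assumptions: `transformers` and `torch` installed; "gpt2" and all example
# sentences are hypothetical, chosen only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def continuation_logprob(context: str, continuation: str) -> float:
    """Average log-probability of `continuation` tokens given `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    full_ids = torch.cat([ctx_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(full_ids).logits                  # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for pos in range(ctx_ids.shape[1], full_ids.shape[1]):
        token_id = full_ids[0, pos]
        total += log_probs[0, pos - 1, token_id].item()  # logits at pos-1 predict token at pos
    return total / cont_ids.shape[1]

context = "Human oversight in AI systems can help build trust."
on_topic = " This keeps the model aligned with human values."
off_topic = " The Serbs and Greeks sought autonomy from the empire."

print("on-topic: ", continuation_logprob(context, on_topic))
print("off-topic:", continuation_logprob(context, off_topic))
```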
Summary
The journey from the initial discovery of a digression in my article to understanding its mechanics with ChatGPT's assistance has been enlightening. It showcased the nuanced and sometimes unpredictable nature of working with LLMs. While these tools offer incredible support in content creation, their propensity for digressions and hallucinations highlights their imperfections.
This adventure underscores a broader message: while LLMs are brilliant aids in the realm of writing and research, they are not flawless. Their use demands caution, critical thinking, and a discerning eye. As we navigate this partnership with AI, let us embrace its potential with awareness and responsibility, ensuring that our reliance on these advanced tools enhances, rather than undermines, the quality and integrity of our work.
One final remark: I've been using ChatGPT for many months now and never came across a digression. I've been testing Copilot for a week and stumbled upon this. Coincidence?