Navigating the Mind of GPT: How to Elicit Clarity and Avoid AI Hallucinations
Last Updated on November 5, 2023 by Editorial Team
Originally published on Towards AI.
When working with cutting-edge language models like GPT, we occasionally find ourselves stumbling upon “hallucinations.” A hallucination, in the context of a language model, is when the model generates information that isn’t accurate, is unsubstantiated, or is simply made up. Although GPT is trained on vast amounts of text and is very proficient at generating human-like responses, it isn’t infallible.
A challenge users often encounter is how to reduce these hallucinations without having access to the model’s inner workings or training methods. This blog post is here to help with that, offering a way to get more accurate and guarded responses from GPT solely through prompting.
Diving Into GPT’s Imaginary World!
Ever seen GPT spin a tale from thin air? Here's an example: GPT knows a lot about popular TV shows and other works of fiction, but it doesn't know every detail (such as every scene or episode). Toss in a quirky detail like a "giant hat," and voila: you've got GPT daydreaming. In the Community episode in question, the only hat Jeff wears is a graduation cap; there are no giant hats.
Understanding The Basics
> “Explain the C4 dataset to me, but if you’re unsure about any details, please say ‘I’m not sure.’”
Potential GPT Response:
> “The C4 dataset is a large dataset used in the training of models like GPT. It’s derived from web pages and is instrumental in providing diverse linguistic patterns for the model. However, I’m not sure about the exact number of web pages or the specific criteria used for their inclusion.”
In the example above, GPT generates a response that provides a general overview of the C4 dataset but admits uncertainty regarding specific details.
Crafting Careful Prompts
The trick to getting more accurate responses lies in the prompt’s phrasing. Here are some strategies to consider:
1. Be Specific: Asking vague questions will often lead to vague answers. Narrow down your inquiry.
2. Request Confirmation: Ask GPT to provide sources or admit when it’s unsure.
3. Limit the Scope: Instead of seeking a long answer, ask for brief responses and then follow up for clarity.
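All three strategies can be rolled into a single prompt template. Here is a minimal Python sketch; the function name and exact wording are illustrative, not anything GPT itself requires:

```python
def careful_prompt(topic: str, scope: str = "in two or three sentences") -> str:
    """Build a prompt that applies all three strategies at once:
    a specific topic, a request to confirm or cite, and a limited scope."""
    return (
        f"Explain {topic} {scope}. "
        "Cite a source where you can, and if you're unsure about any detail, "
        'say "I\'m not sure".'
    )
```

For example, `careful_prompt("the C4 dataset")` produces a short, scoped question with a built-in confirmation request, which you can then follow up on for clarity.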
“I’m not sure” — Your New Safety Net
Adding qualifiers like “if you’re not sure, say ‘I’m not sure’” can be an effective way to reduce hallucinations. It offers GPT a “way out” instead of pushing it to guess or fabricate an answer. For instance:
> “Tell me about the origin of the Eiffel Tower, and if you’re not certain about any facts, mention ‘I’m not sure.’”
Potential GPT Response:
> “The Eiffel Tower is an iron lattice tower located in Paris, built as the entrance arch to the 1889 World’s Fair. It was designed by the engineer Gustave Eiffel, although he wasn’t the primary architect. I’m not sure about the exact duration of its construction.”
The “I’m not sure” clause can act as a filter, signaling the model to be more cautious with its output.
While the "I'm not sure" method is handy, it's essential not to overdo it. Using it too often might yield overly cautious responses, where the model constantly defaults to saying, "I'm not sure" instead of providing potentially useful information. Balance is key.
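If you send many prompts, you can append this qualifier programmatically rather than typing it each time. A minimal sketch; the helper name and the clause wording are illustrative assumptions:

```python
def add_uncertainty_clause(prompt: str) -> str:
    """Append an 'I'm not sure' qualifier to a prompt, giving the model
    a way out instead of pushing it to guess or fabricate."""
    clause = 'If you\'re not certain about any facts, mention "I\'m not sure".'
    # Strip any trailing period before re-joining so we don't emit ".."
    return prompt.rstrip().rstrip(".") + ". " + clause
```

Calling it on the Eiffel Tower question above reproduces the prompt shown in the example.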
Additional Strategies to Enhance Prompt Clarity and Reliability
1. Ask Direct Questions: Ambiguous queries can lead to more instances of hallucination. When you ask direct questions, you offer less room for the model to wander.
Bad: “Tell me about datasets.”
Good: “Can you describe the C4 dataset and its key features?”
2. Encourage Fact-Checking: Ask GPT to cite its sources or state how confident it is in its response.
“Explain the C4 dataset and if possible, cite a source or let me know if you’re unsure.”
3. Multi-step Queries: Breaking down your main question into smaller parts can be effective. This way, if the model is uncertain about one segment, it can express it without affecting the other parts.
“Firstly, what is the C4 dataset? Secondly, what is its main purpose? If you are uncertain about any part, indicate which one.”
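Multi-step prompts like the one above follow a regular pattern, so they are easy to generate. A small sketch (the function name is hypothetical, and this version only supports up to five parts):

```python
def build_multistep_prompt(sub_questions):
    """Number each sub-question with an ordinal so the model can flag
    exactly which part it is uncertain about."""
    # Supports up to five sub-questions for this sketch.
    ordinals = ["Firstly", "Secondly", "Thirdly", "Fourthly", "Fifthly"]
    steps = [f"{ordinals[i]}, {q}" for i, q in enumerate(sub_questions)]
    steps.append("If you are uncertain about any part, indicate which one.")
    return " ".join(steps)
```

Passing in the two C4 sub-questions reproduces the example prompt verbatim.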
4. Ask for Confirmation: After getting a response, you can ask the model to confirm its accuracy.
User: “What is the C4 dataset?”
GPT: “The C4 dataset is a large-scale dataset used for training language models like GPT.”
User: “Are you certain about that information?”
GPT: “Yes, I am.”
5. Utilize Follow-up Questions: Don’t just rely on the initial response. Dive deeper with follow-up questions. If the model is consistent in its answers, it’s a good sign. If it starts to waver or shows inconsistency, it’s a hint that it might be uncertain.
User: “Tell me about the C4 dataset.”
GPT: “The C4 dataset is a collection of diverse web text used for training models.”
User: “What kind of web text? Can you be more specific?”
GPT: “I’m not sure about the specific types of web texts included.”
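One crude way to spot wavering across follow-ups is to compare how much two answers to the same (rephrased) question overlap. The sketch below uses simple Jaccard word overlap; it is a naive illustrative heuristic, not a real measure of model confidence:

```python
import re

def answer_consistency(answer_a: str, answer_b: str) -> float:
    """Jaccard word-overlap score between two answers to the same question.
    A low score across rephrased follow-ups can hint that the model is
    wavering, which may signal uncertainty or hallucination."""
    words_a = set(re.findall(r"\w+", answer_a.lower()))
    words_b = set(re.findall(r"\w+", answer_b.lower()))
    if not words_a and not words_b:
        return 1.0
    return len(words_a & words_b) / len(words_a | words_b)
```

Scores near 1.0 suggest consistent answers; scores near 0.0 suggest the model changed its story between follow-ups.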
Hallucinations can be a hurdle when interacting with models like GPT. However, with careful prompting, one can substantially reduce these occurrences. The beauty of GPT lies in its adaptability to prompts, giving users the power to guide the kind of responses they receive. By making your prompts more specific, requesting confirmatory details, and providing the model with a “way out” when it’s unsure, you can harness the best of what GPT has to offer while minimizing misinformation.
Remember, technology is only as effective as how we choose to use it. With the right approach, GPT can be a reliable and insightful tool in your information-gathering arsenal.