Why GPT-4 Data Isn’t As Reliable As You Think
Last Updated on April 22, 2024 by Editorial Team
Author(s): John Loewen, PhD
Originally published on Towards AI.
How GPT-4's "verified" data became my misinformation nightmare
DALL-E image: GPT-4 selling a fake bill of goods.
GPT-4 will "find" whatever you ask it to look for, including "research" on whatever statistical information you want.
It will find it and present it in whatever form you ask for: a paragraph, a chart, even a poem if you like.
However, as a heavy user over the past year, I have noticed an alarming glitch in how GPT-4 responds to specific statistical research queries.
It is not accurate: it substitutes "placeholder" data without telling you.
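If you want to see the failure mode for yourself, a query like the one below is enough. This is a minimal sketch using the OpenAI Python client; the model name and the prompt are illustrative stand-ins I chose for this example, not taken from my actual sessions.

```python
# Minimal sketch: asking GPT-4 for statistics via the OpenAI Python client.
# The model name and prompt are illustrative; any statistical research query
# of this shape tends to come back with confident figures and named "sources".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "What percentage of Python programmers use AI coding "
                "assistants? Cite the surveys your numbers come from."
            ),
        },
    ],
)

# The reply reads like verified research, but every figure and survey name
# in it needs to be checked against the actual source before you use it.
print(response.choices[0].message.content)
```

Nothing in the response flags the numbers as invented; they arrive in the same confident tone as real research.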
How do I know this? First-hand experience.
GPT-4 sold my own fake news back to me as fact, lifted from an AI-generated article I had previously published.
Let me explain what happened.
In the early days of GPT-4, while experimenting with the tool, I wrote an article about how GPT-4 was being heavily adopted by Python programmers to improve their coding workflow.
While writing the article, I prompted GPT-4 for some statistics, and it happily generated them for me, attributed to two sources:
- A Stack Overflow survey from March 2023
- A GitHub survey from April 2023
Being a new writer AND new to GPT-4, I thought to myself: this is awesome.
Not knowing much about the tool, and in a rush (yes,…