A Riddle That 99% Of Large Language Models Get Wrong
Author(s): Meng Li
Originally published on Towards AI.
Meng Li creates with DALL·E 3
I have delved deeply into numerous large language models, but each new model release brings a slew of tedious benchmark tests.
To be honest, these academic evaluations are nearly incomprehensible to the average user, akin to reading an arcane script.
I've always wondered: is there a simpler way to reveal a model's reasoning ability with just one question?
After countless trials and validations, I've finally found such an intriguing question, which acts like a riddle: "I hang 7 shirts out to dry in the Sun. After 5 hours, all shirts are dry. The next day I hang 14 shirts out to dry. The conditions are the same. How long will it take to dry 14 shirts? Take a deep breath and proceed step by step."
Next, I will use 6 large models to answer this question. Let's witness their performance together!
The answer is at the end of the article.
However, don't rush to the answer; first ponder it yourself and see if you can outsmart these large models.
Answer: 5 hours. The shirts dry in parallel under the Sun, so (assuming enough hanging space and the same conditions) doubling the number of shirts does not change the drying time.
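The trap models fall into, versus the correct reasoning, can be sketched in a few lines of Python (the function names here are my own, purely for illustration):

```python
def naive_drying_time(num_shirts: int, base_shirts: int = 7, base_hours: float = 5.0) -> float:
    """The trap: scale time linearly with shirt count, as if shirts dried one after another."""
    return base_hours * num_shirts / base_shirts

def actual_drying_time(num_shirts: int, base_hours: float = 5.0) -> float:
    """Shirts dry in parallel, so the count is irrelevant given enough hanging space."""
    return base_hours

print(naive_drying_time(14))   # 10.0 -- the wrong answer many models give
print(actual_drying_time(14))  # 5.0  -- the correct answer
```

A model that answers "10 hours" has pattern-matched to a proportionality problem instead of reasoning about what drying actually involves.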
Google recently released the Gemma open-source AI model.
For specific usage, you can refer to this article:
Recently, they released two…