
Have o1 Models Solved Human Reasoning?
Author(s): Nehdiii
Originally published on Towards AI.
OpenAI made waves in the AI community with the release of their o1 models. As the excitement settles, I feel it's the perfect time to share my thoughts on LLMs' reasoning abilities, especially as someone who has spent a significant portion of my research exploring their capabilities in compositional reasoning tasks. This also serves as an opportunity to address the many "Faith and Fate" questions and concerns I've been receiving over the past year, such as: Do LLMs truly reason? Have we achieved AGI? Can they really not solve simple arithmetic problems?
The buzz around the o1 models, code-named "strawberry," has been growing since August, fueled by rumors and media speculation. Last Thursday, Twitter lit up with OpenAI employees celebrating o1's performance boost on several reasoning tasks. The media further fueled the excitement with headlines claiming that "human-like reasoning" is essentially a solved problem in LLMs.
Without a doubt, o1 is exceptionally powerful and distinct from any other model. It's an incredible achievement by OpenAI to release these models, and it's astonishing to witness the significant jump in Elo scores on ChatBotArena compared to the incremental improvements from other major players. ChatBotArena continues to be the leading platform for…