Big Hype? Why OpenAI-o3 Only Excels in STEM and How Reasoning AI Is Built
Last Updated on December 25, 2024 by Editorial Team
Author(s): Don Lim
Originally published on Towards AI.
Does OpenAI-o3 Mean AGI Is Here? This Article Examines Why OpenAI-o3's Skills Don't Translate to True Intelligence Yet.
Photo by ilgmyzin on Unsplash
Some sources indicate that OpenAI's o3 model, released yesterday, demonstrates impressive capabilities in math, coding, and science, exceeding previous models in standardized tests and benchmarks. [1] However, strong performance in STEM (science, technology, engineering, and math) doesn't necessarily mean it is capable of true Artificial General Intelligence (AGI). AGI would require a broader range of cognitive abilities and understanding of the world, which o3 may not yet possess.
Bluntly speaking, many recent AI models are designed to excel in known benchmarking tests. OpenAI-o3 might not represent a fundamental leap like the transition from GPT-3 to GPT-4, which involved a significant change in the underlying architecture.
The jump in version number from 1 to 3 may give the impression of major progress. However, considering that o1 was released only about three months ago (OpenAI-o1-preview was released on September 12, 2024), o3 might be an incremental improvement largely resulting from training on a larger dataset, particularly one focused on chain-of-thought (explained in a later section) examples and solution examples related…
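To make the chain-of-thought idea concrete, here is a minimal Python sketch of how a chain-of-thought training example differs from a plain question-answer pair. The function name and format below are purely illustrative assumptions; OpenAI has not published the actual data format used to train o3.

```python
# Illustrative sketch only: a plain Q->A example versus a chain-of-thought
# (CoT) example that spells out the intermediate reasoning steps the model
# is trained to produce. This is NOT OpenAI's actual training format.

def build_cot_example(question: str, steps: list[str], answer: str) -> str:
    """Format a question with explicit, numbered intermediate reasoning steps."""
    reasoning = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps))
    return f"Q: {question}\n{reasoning}\nA: {answer}"

# A direct question-answer pair maps the problem straight to the answer.
direct = "Q: A train travels 120 km in 2 hours. What is its speed?\nA: 60 km/h"

# The CoT version of the same problem exposes each reasoning step,
# giving the model supervision on *how* to reach the answer.
cot = build_cot_example(
    "A train travels 120 km in 2 hours. What is its speed?",
    [
        "Speed is distance divided by time.",
        "Distance is 120 km and time is 2 hours.",
        "120 km / 2 h = 60 km/h.",
    ],
    "60 km/h",
)
print(cot)
```

Training heavily on such step-by-step examples can boost benchmark scores on math and coding problems without necessarily changing the underlying architecture, which is the incremental-improvement scenario described above.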
Published via Towards AI