
What Are World Models?
Author(s): Luhui Hu

More and more people ask me what world models are, including investors, AI enthusiasts, and AI scientists. As world models (WMs) gain traction across AI research and application domains, it's worth unpacking what they really are, why they matter, and how they differ from other dominant approaches such as vision-language-action (VLA) models. In this post, I'll break down how WMs work, what makes them powerful, and why they may be a foundational pillar for physical AI and artificial general intelligence (AGI).
🧠 What is a World Model?
A world model is a learned internal representation that simulates the dynamics of the real world. Unlike static perception models, world models are generative and predictive — they can simulate how the world might evolve over time, allowing intelligent agents to plan actions and reason before actually performing them.
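To make this concrete, here is a minimal sketch of the idea in PyTorch. It is not any particular published architecture; the `TinyWorldModel` name, the network sizes, and the toy dimensions are all illustrative assumptions. It simply shows the three pieces most world models share: an encoder that compresses an observation into a latent state, a dynamics model that predicts the next latent state given an action, and a decoder that maps the prediction back to observation space.

```python
# A minimal latent world-model sketch (illustrative only, not a specific
# published architecture): encode an observation into a latent state,
# predict the next latent given an action, decode back to observation space.
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=32, action_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + action_dim, 128),
                                      nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, obs_dim))

    def forward(self, obs, action):
        z = self.encoder(obs)                                    # current latent state
        z_next = self.dynamics(torch.cat([z, action], dim=-1))   # predicted next state
        return self.decoder(z_next)                              # predicted next observation

model = TinyWorldModel()
obs = torch.randn(1, 64)      # flattened observation (e.g., image features)
action = torch.randn(1, 4)    # agent action
predicted_next_obs = model(obs, action)
```

Because the model predicts observations it has never been shown, it can be queried "what happens if I do this?" without touching the real environment, which is exactly what planning needs.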
Some of the most advanced world model initiatives come from:
- Meta FAIR: Advocating predictive architectures based on self-supervised learning.
- World Labs (founded by Fei-Fei Li): Focused on spatial intelligence and 3D scene understanding.
- NVIDIA Cosmos: Builds large-scale generative world foundation models, such as Cosmos Predict, Cosmos Transfer, and Cosmos Reason, for simulating physical environments.
- ZhiCheng AI World Model: Focused on robotic physical intelligence.
These models differ in implementation but share a common goal: providing agents with an internalized understanding of their environment.
⚙️ Core Components and Mechanisms of World Models
World models are built upon several key components:
- Multimodal Input Processing: They take in video, images, sensor streams (e.g., LiDAR, IMU), and sometimes language to create a unified representation.
- Temporal Prediction: Models like Dreamer or Cosmos Predict learn to forecast future frames or states from historical data.
- Latent Representation Learning: Rather than operating on raw inputs, WMs use abstract state spaces learned through encoders and tokenizers.
- Self-Supervised Learning: Training is often done via objectives like next-step prediction, contrastive learning, or reconstruction (a minimal training-step sketch follows this list).
- Simulation and Reasoning: Once trained, WMs can simulate various what-if scenarios, essential for planning, safety, and adaptation.
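The sketch below shows how these components fit together in one self-supervised training step, under toy assumptions: the encoder, dynamics model, and decoder are single linear layers, and random tensors stand in for logged sensor streams. The key point is that the only supervision signal is the next observation itself.

```python
# Illustrative self-supervised training step (toy setup, assumed dimensions):
# the model learns to predict the next observation from the current one and
# the action; the future frame itself is the label, so no human annotation.
import torch
import torch.nn as nn

obs_dim, latent_dim, action_dim = 64, 32, 4
encoder  = nn.Linear(obs_dim, latent_dim)
dynamics = nn.Linear(latent_dim + action_dim, latent_dim)
decoder  = nn.Linear(latent_dim, obs_dim)
params = list(encoder.parameters()) + list(dynamics.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

def train_step(obs, action, next_obs):
    z = encoder(obs)                                        # latent state
    z_next = dynamics(torch.cat([z, action], dim=-1))       # predicted next latent
    pred_next_obs = decoder(z_next)                         # predicted next observation
    loss = nn.functional.mse_loss(pred_next_obs, next_obs)  # next-step reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Fake batch standing in for logged multimodal sensor data.
obs, action, next_obs = torch.randn(8, obs_dim), torch.randn(8, action_dim), torch.randn(8, obs_dim)
print(train_step(obs, action, next_obs))
```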
🔍 How World Models Work and Where They Apply
World models operate in three general phases:
- Data Intake: Multimodal sensory data is collected and tokenized into compact representations.
- World Learning: The model learns a mapping from current states and actions to future states (environment dynamics).
- Simulation & Planning: Inference involves simulating future outcomes and selecting optimal actions (a planning sketch follows the application list below).
These steps allow world models to power applications such as:
- Autonomous driving (e.g., predicting road scenarios)
- Robotics (e.g., manipulation, locomotion)
- Synthetic data generation (e.g., for training other AI models)
- Embodied reasoning (e.g., physical common sense)
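To make the third phase concrete, here is a hedged sketch of planning by simulation in a random-shooting style. It is illustrative only: the untrained linear layers stand in for a trained dynamics model and reward head, and the horizon and candidate counts are arbitrary. The point is the loop: roll many candidate action sequences through the learned model, score the imagined trajectories, and execute the first action of the best one.

```python
# Planning by simulation, random-shooting style (purely illustrative):
# imagine the outcome of many candidate action sequences inside the learned
# model, score them, and act on the best imagined plan.
import torch
import torch.nn as nn

latent_dim, action_dim, horizon, num_candidates = 32, 4, 5, 64
dynamics = nn.Linear(latent_dim + action_dim, latent_dim)  # stands in for a trained dynamics model
reward_model = nn.Linear(latent_dim, 1)                    # stands in for a trained reward head

def plan(z0):
    with torch.no_grad():
        # Sample candidate action sequences: (candidates, horizon, action_dim).
        actions = torch.randn(num_candidates, horizon, action_dim)
        z = z0.expand(num_candidates, -1)
        total_reward = torch.zeros(num_candidates)
        for t in range(horizon):
            z = dynamics(torch.cat([z, actions[:, t]], dim=-1))  # imagined next state
            total_reward += reward_model(z).squeeze(-1)          # imagined reward
        best = total_reward.argmax()
        return actions[best, 0]  # execute only the first action of the best plan

first_action = plan(torch.randn(1, latent_dim))
print(first_action)
```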
📊 How World Models Differ from Vision-Language-Action (VLA) Models
VLA models, such as RT-2 or OpenVLA, excel at interpreting instructions and responding with actions, using large-scale vision and language data. However, they typically map perception and language directly to behavior without building an internal model of environment dynamics, so they cannot roll a candidate action forward in imagination and inspect its consequences before committing to it.
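A schematic way to see the difference is the shape of the decision loop. The interfaces below are hypothetical placeholders, not the real RT-2 or OpenVLA APIs: a VLA policy is reactive (one forward pass from perception plus instruction to an action), while a world-model agent is deliberative (it simulates candidate plans in latent space before acting).

```python
# Schematic contrast only; every callable here is a hypothetical placeholder.

def vla_act(policy, image, instruction):
    # Reactive mapping: (observation, language) -> action, no internal simulation.
    return policy(image, instruction)

def world_model_act(encoder, dynamics, reward_model, image, candidate_plans):
    # Deliberative loop: simulate each candidate plan in latent space, keep the best.
    state = encoder(image)
    best_plan, best_score = None, float("-inf")
    for plan in candidate_plans:
        s, score = state, 0.0
        for action in plan:
            s = dynamics(s, action)      # imagined next state
            score += reward_model(s)     # imagined outcome quality
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan[0]
```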



🚀 Modern Robotics AI: Technological Main Streams and Their Differences
Modern robotics AI now spans multiple technological streams. Each has a different philosophy and engineering trade-off:
✅ Model-Based Control
- Based on physics and optimization.
- High precision but low adaptability.
🧠 Deep Reinforcement Learning (DRL)
- Policy learned via trial and error.
- Powerful but data inefficient.
🤖 World Models
- Predictive planning through internal simulations.
- Ideal for forward reasoning and adaptation.
🔢 Vision-Language-Action (VLA)
- Language and perception-driven agent control.
- Highly generalizable, but physically shallow.
👩💼 Teleoperation + Learning from Demonstration (LfD)
- Bootstraps models from human demos.
- Low data needs but less scalable.
📊 Multimodal Sensor Fusion & Spatial AI
- Combines vision, tactile, and proprioception.
- Rich but computationally heavy.
These approaches are not mutually exclusive. For example, a robot may use world models for planning, VLA for instruction following, and sensor fusion for real-time perception.
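One hedged way such a hybrid stack could be wired together is sketched below. All component names are hypothetical placeholders rather than any specific product's API; the point is only how the responsibilities divide across the streams listed above.

```python
# Hypothetical hybrid robot agent: sensor fusion for perception, a VLA module
# for grounding instructions into goals, and a world model plus planner for
# predictive action selection. Component interfaces are assumptions.

class HybridRobotAgent:
    def __init__(self, fusion, vla, world_model, planner):
        self.fusion = fusion            # multimodal sensor fusion -> unified observation
        self.vla = vla                  # language grounding -> task goal
        self.world_model = world_model  # learned dynamics for imagination
        self.planner = planner          # searches action sequences inside the world model

    def step(self, camera, lidar, proprio, instruction):
        observation = self.fusion(camera, lidar, proprio)          # real-time perception
        goal = self.vla(observation, instruction)                  # instruction following
        return self.planner(self.world_model, observation, goal)   # predictive planning
```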


🌟 Final Thoughts
World models are not just another AI architecture — they represent a paradigm shift toward internalized understanding, simulation, and prediction. In a future where physical AI must act, adapt, and learn continuously, world models offer the brain-like core needed for general-purpose agents. While not perfect yet, they form the bedrock of intelligent physical interaction, marking a vital step toward embodied AGI.
📚 References
🔬 Academic Foundations
1. Ha & Schmidhuber (2018). World Models
https://arxiv.org/abs/1803.10122
The original paper that introduced the concept of using generative models (VAE + RNN + controller) to simulate environments for agents.
2. Hafner et al. (2019–2023). Dreamer, DreamerV2, DreamerV3
https://arxiv.org/abs/1912.01603
https://arxiv.org/abs/2005.12114
https://arxiv.org/abs/2301.04104
Progressive work from Danijar Hafner and collaborators at Google Brain and DeepMind on learning latent world models for reinforcement learning through imagination.
🧠 Industry Research
3. Meta AI (Yann LeCun). A Path Towards Autonomous Machine Intelligence
https://openreview.net/pdf?id=BZ5a1r-kVsf
A visionary blueprint for self-supervised, predictive world models as the core of intelligent agents.
4. NVIDIA Technical Blog. Scale Synthetic Data and Physical AI Reasoning with NVIDIA Cosmos World Models
https://developer.nvidia.com/blog/scale-synthetic-data-and-physical-ai-reasoning-with-nvidia-cosmos-world-foundation-models/
Overview of the NVIDIA Cosmos WFM platform for physics-aware simulation and AI reasoning.
5. Fei-Fei Li's World Labs
Focused on spatial intelligence and grounding perception in 3D environments.
🤖 Related AI Architectures
6. Google DeepMind. RT-2: Vision-Language-Action Models
https://robotics-transformer2.github.io
Demonstrates how large VLA models operate and how they differ from simulation-centric world models.
7. OpenVLA: An Open-Source Vision-Language-Action Model
https://openvla.org
Useful for contrasting policy-driven multimodal AI with simulation-centric approaches.