
Diffusion Over Autoregression

Last Updated on April 16, 2025 by Editorial Team

Author(s): Anay Dongre

Originally published on Towards AI.

Figure: Diffusion Over Autoregression (image from the LLaDA paper)

Introduction

For years, autoregressive models (ARMs) have dominated large language models (LLMs), predicting tokens one at a time in a left-to-right fashion. But what if there’s a more efficient way? The LLaDA paper introduces a diffusion-based alternative to ARMs, fundamentally changing how we think about text generation. Instead of sequential token prediction, LLaDA reconstructs masked tokens through an iterative refinement process. This approach challenges key assumptions about autoregression and opens new frontiers in language modeling.

In this article, we will break down what LLaDA is, how it works, why it was developed, and its potential impact.

Why Move Beyond Autoregression?

Autoregressive models like GPT-4, LLaMA, and PaLM generate text sequentially, predicting one token at a time. This setup has limitations:

  • Slow Generation — ARMs decode step by step, so producing long outputs with large models is computationally expensive.
  • Limited Parallelism — Each new token depends on the previous ones, so decoding within a sequence cannot be parallelized.
  • Context Loss — Attention lets the model see the full prefix, but left-to-right generation never conditions on tokens yet to come, which can hurt global coherence.

LLaDA replaces the autoregressive paradigm with a diffusion process, where an input sequence is progressively masked and then reconstructed. This allows bidirectional modeling of sequences, breaking free from autoregression’s constraints.

How LLaDA Works: A Step-by-Step Breakdown

LLaDA operates in two primary phases: Forward Masking and Reverse Reconstruction. Let’s go through them one by one.

1. Forward Masking (Diffusion Process)

LLaDA takes an input sequence and progressively masks random tokens. The masking ratio varies randomly, creating different training conditions. The process can be visualized as follows:

Example:
Original Sentence: The cat sat on the mat.
Masked: The [MASK] sat on [MASK] mat.

Unlike ARMs, which only predict one token at a time, LLaDA corrupts multiple tokens at once. This means the model learns to predict missing words in various contexts, improving its ability to recover information.
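
To make the forward process concrete, here is a minimal PyTorch sketch of this kind of random-ratio masking. This is our own illustrative code with placeholder token ids and a placeholder mask id, not the official LLaDA implementation:

```python
import torch

# Illustrative sketch of the forward masking step: every token is masked
# independently with probability t, where t is drawn at random per example,
# so the model sees all corruption levels during training.
MASK_ID = 103  # hypothetical [MASK] token id; use the real tokenizer's id

def forward_mask(token_ids: torch.Tensor):
    """Mask tokens of a (seq_len,) LongTensor with a random ratio t."""
    t = torch.empty(1).uniform_(0.01, 1.0).item()   # masking ratio for this sample
    mask = torch.rand(token_ids.shape) < t          # True where the token is hidden
    corrupted = token_ids.clone()
    corrupted[mask] = MASK_ID
    return corrupted, mask, t

# "The cat sat on the mat ." -> e.g. "The [MASK] sat on [MASK] mat ."
tokens = torch.tensor([101, 2003, 4521, 2006, 1996, 13523, 1012])  # toy ids
noisy, mask, t = forward_mask(tokens)
```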

2. Reverse Reconstruction (Denoising Process)

Once a sequence has been masked, LLaDA attempts to reconstruct the missing tokens iteratively. It does so using a Transformer-based mask predictor that refines the sequence step by step. The reconstruction happens over multiple steps, progressively improving the predicted tokens.

At each step, the model:

  1. Predicts missing tokens using a learned distribution.
  2. Fills in the most likely candidates.
  3. Repeats the process until the sequence stabilizes.

This method closely resembles denoising diffusion models used in image generation (like Stable Diffusion). The key difference is that instead of removing Gaussian noise, LLaDA removes masked tokens.
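
As a rough illustration of this iterative refinement, the sketch below starts from a fully masked sequence and re-masks low-confidence predictions between steps. It is generic masked-diffusion sampling with assumed names (`mask_predictor`, `mask_id`), not the paper's exact sampler:

```python
import torch

# Illustrative reverse (denoising) loop. `mask_predictor` is assumed to map a
# (1, length) LongTensor of token ids to (1, length, vocab_size) logits.
def generate(mask_predictor, length: int, num_steps: int = 8, mask_id: int = 103):
    seq = torch.full((length,), mask_id, dtype=torch.long)   # start fully masked
    for step in range(num_steps):
        logits = mask_predictor(seq.unsqueeze(0))[0]         # (length, vocab_size)
        conf, pred = logits.softmax(dim=-1).max(dim=-1)      # confidence + best token
        masked = seq == mask_id
        seq = torch.where(masked, pred, seq)                 # fill masked positions
        # Re-mask the lowest-confidence freshly filled tokens so later steps can
        # revise them; the re-masked fraction shrinks to zero by the last step.
        n_remask = int(length * (num_steps - step - 1) / num_steps)
        if n_remask > 0:
            conf = conf.masked_fill(~masked, float("inf"))   # never touch fixed tokens
            worst = conf.topk(n_remask, largest=False).indices
            seq[worst] = mask_id
    return seq
```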

Comparison with Autoregressive Models

| Feature | Autoregressive (GPT, LLaMA) | LLaDA (Diffusion) |
| --- | --- | --- |
| Generation Order | Left-to-right | Non-sequential |
| Parallelization | Limited | High |
| Context Awareness | Partial | Full (bidirectional) |
| Efficiency | Slow due to sequential decoding | Potentially faster with optimizations |

Model Architecture

Figure: image from the LLaDA paper

LLaDA uses a Transformer-based backbone, similar to existing LLMs. However, key differences exist:

  • Mask Predictor Module: Instead of predicting the next token, this module fills in masked tokens across the sequence (a minimal training sketch follows this list).
  • Diffusion-like Steps: The model generates outputs iteratively rather than in a single forward pass.
  • Scalability: Trained from scratch on 2.3T tokens, LLaDA’s 8B model matches the performance of LLaMA 3 8B.
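
For intuition, here is a minimal sketch of what one training step for the mask predictor could look like, assuming the recipe described above: a random masking ratio per example, cross-entropy computed only on the masked positions, and a 1/t weighting. All names and values are placeholders rather than the official training code:

```python
import torch
import torch.nn.functional as F

# Illustrative training step for the mask predictor (sketch, not official code).
def training_step(mask_predictor, token_ids: torch.Tensor, mask_id: int = 103):
    t = torch.empty(1).uniform_(0.01, 1.0).item()        # random masking ratio
    mask = torch.rand(token_ids.shape) < t
    if not mask.any():                                   # nothing masked: skip example
        return token_ids.new_zeros((), dtype=torch.float)
    noisy = torch.where(mask, torch.full_like(token_ids, mask_id), token_ids)
    logits = mask_predictor(noisy.unsqueeze(0))[0]       # (seq_len, vocab_size)
    # Only the masked positions are predicted; the visible tokens are given.
    return F.cross_entropy(logits[mask], token_ids[mask]) / t
```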

Benchmark Results

LLaDA was tested on multiple benchmarks and compared with autoregressive models. Key takeaways:

  • In-Context Learning: LLaDA 8B performs on par with LLaMA 3 8B in reasoning tasks.
  • Instruction Following: After supervised fine-tuning (SFT), LLaDA significantly improves multi-turn dialogue abilities.
  • Reversal Reasoning: LLaDA outperforms GPT-4o in tasks requiring backward reasoning (e.g., reversing a sequence of words).

Challenges and Limitations

While promising, LLaDA has challenges:

  • Inference Speed: Despite parallelism, iterative refinements can be slow. Optimized sampling techniques are needed.
  • No KV Cache: Standard KV caching (used in ARMs to speed up inference) does not work with LLaDA, requiring alternative efficiency strategies.
  • Memory Overhead: LLaDA’s iterative process requires multiple forward passes, increasing computational demand.

Future Directions & Improvements

1. Hybrid Diffusion-Autoregressive Models

A potential improvement is combining LLaDA’s diffusion approach with autoregressive decoding. This hybrid method could leverage ARMs for fast token generation while using diffusion-based refinements for accuracy.

2. Reinforcement Learning Alignment

Current diffusion-based LLMs lack reinforcement learning with human feedback (RLHF). Integrating RLHF could further improve instruction-following and factual consistency.

3. Efficient Sampling Techniques

Reducing the number of reverse diffusion steps is crucial. Techniques like learned guidance functions or deterministic solvers could help speed up inference.

4. Multi-Modal Extensions

Applying LLaDA to text, images, and speech simultaneously could expand its capabilities. Since diffusion models work well for images (e.g., Stable Diffusion), this approach seems promising.

Conclusion

LLaDA challenges the dominance of autoregression in LLMs. By replacing sequential token prediction with a diffusion-based approach, it introduces new ways to model language efficiently. While challenges remain — especially in inference speed — LLaDA opens up exciting research directions that could reshape how we build and deploy LLMs.

For more details, check out the official paper and demo:
🔗 Paper: https://ml-gsai.github.io/LLaDA-demo/
🔗 Hugging Face Model: https://huggingface.co/GSAI-ML
🔗 Hugging Face Model Demo: https://huggingface.co/spaces/multimodalart/LLaDA
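
If you want to experiment, a hypothetical loading snippet is below. The exact checkpoint id (shown here as GSAI-ML/LLaDA-8B-Instruct) and whether trust_remote_code is required should be verified on the Hugging Face pages linked above:

```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical quick-start; confirm the checkpoint name on the GSAI-ML org page.
model_id = "GSAI-ML/LLaDA-8B-Instruct"  # assumed id, not confirmed here
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
```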
