The Proof is in the Preference: Why DPO is the New RLHF
Last Updated on November 11, 2025 by Editorial Team
Author(s): DrSwarnenduAI
Originally published on Towards AI.
Stop debugging PPO. Direct Preference Optimization solved the alignment puzzle with a single, stable loss function.

The article discusses the limitations of traditional Reinforcement Learning from Human Feedback (RLHF) in achieving alignment in AI models, highlighting issues such as training instability and pipeline complexity. It introduces Direct Preference Optimization (DPO) as a streamlined alternative that trains the model directly on human preference data, eliminating the separate reward model and reinforcement-learning loop and yielding more stable training. Comparing DPO with RLHF, the author argues that DPO not only addresses the key alignment challenges but also simplifies the training process, making it more efficient while still letting the model learn effectively from human feedback.
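The article itself stays high level, but the "single, stable loss function" it credits is the standard DPO objective from Rafailov et al. (2023): L_DPO = -log σ(β[log π_θ(y_w|x) - log π_ref(y_w|x) - log π_θ(y_l|x) + log π_ref(y_l|x)]), where y_w and y_l are the preferred and rejected responses to a prompt x. Below is a minimal PyTorch sketch of that loss; the function and argument names are illustrative, not taken from the article.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss (sketch).

    Each tensor holds the summed log-probability of a response (chosen or
    rejected) under the trainable policy or the frozen reference model.
    """
    # Implicit rewards: how much more (in log-odds) the policy prefers a
    # response than the frozen reference model does, scaled by beta.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

    # Logistic loss on the margin: push the chosen response's implicit
    # reward above the rejected one's. No reward model, no PPO rollouts.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with dummy log-probabilities for a batch of two preference pairs.
pol_w = torch.tensor([-12.0, -9.5])   # policy log p(chosen | prompt)
pol_l = torch.tensor([-13.0, -9.0])   # policy log p(rejected | prompt)
ref_w = torch.tensor([-12.5, -9.6])   # reference log p(chosen | prompt)
ref_l = torch.tensor([-12.8, -9.1])   # reference log p(rejected | prompt)
print(dpo_loss(pol_w, pol_l, ref_w, ref_l))  # single scalar loss
```

Here β controls how far the policy may drift from the reference model; 0.1 is a common choice in the DPO literature, not a value taken from the article.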
Read the full blog for free on Medium.