Demystifying DPKD: How Preference Knowledge Distillation Boosts Small AI Models 🚀
Last Updated on October 28, 2025 by Editorial Team
Author(s): Aniket Sanyal
Originally published on Towards AI.
Introduction: Big Brains vs Small Brains in AI 🧠
Large Language Models (LLMs) like GPT-4 and other advanced chatbots have amazing capabilities, but they come with a catch: they are huge and computationally expensive. Imagine having a brilliant AI tutor that can answer anything, but it only runs on a supercomputer — not very practical for everyday apps or devices. What if we could shrink these AI brains into smaller models that are cheaper and faster, without losing too much of their intelligence? This is where knowledge distillation comes in.

In this article, the authors present Direct Preference Knowledge Distillation (DPKD), a method that extends traditional knowledge distillation by having the larger model teach not just its answers but also its preferences over candidate responses. DPKD runs in two stages: first, the student model is aligned with the teacher's preferred answers; second, the student is fine-tuned to prioritize the teacher's outputs. In the authors' experiments, this approach delivers substantial improvements over standard training methods, yielding student models that perform closer to their larger counterparts across tasks of varying complexity. The result is a path toward more efficient AI deployments, along with useful insights into how preference signals can shape model training.
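To make the "learning preferences" idea concrete, here is a minimal numeric sketch of a DPO-style preference loss, the kind of objective preference distillation builds on. The function name, the `beta` value, and the example log-probabilities are illustrative assumptions, not the authors' actual implementation: the student is nudged to assign a higher implicit reward to the teacher-preferred response (`w`) than to the rejected one (`l`), measured relative to a frozen reference model.

```python
import math

def preference_loss(student_logp_w, student_logp_l,
                    ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style preference loss (illustrative sketch).

    Encourages the student to rank the preferred response (w)
    above the rejected one (l), relative to a reference model.
    """
    # Implicit rewards: how far the student's log-probability has
    # moved away from the reference model on each response.
    r_w = beta * (student_logp_w - ref_logp_w)
    r_l = beta * (student_logp_l - ref_logp_l)
    # Logistic loss on the reward margin: -log(sigmoid(r_w - r_l)).
    # The loss shrinks as the margin in favor of w grows.
    margin = r_w - r_l
    return math.log(1.0 + math.exp(-margin))

# Toy example: a student that already favors the preferred answer
# incurs a lower loss than one that favors the rejected answer.
loss_aligned = preference_loss(-2.0, -5.0, -3.0, -3.0)
loss_misaligned = preference_loss(-5.0, -2.0, -3.0, -3.0)
```

In a real training loop these log-probabilities would come from summing token-level log-softmax scores over each response, and the loss would be minimized by gradient descent on the student's parameters only.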
Note: Article content contains the views of the contributing authors and not Towards AI.