From Training Language Models to Training DeepSeek-R1
Author(s): Akhil Theerthala
Originally published on Towards AI.
Reasoning Models #1 — An overview of training
You probably already understand the potential of reasoning models. Playing around with o1 or DeepSeek-R1 shows us these models’ enormous promise, and as enthusiasts, we are all curious about building something like them ourselves.
We all start down this path, but the sheer scale of things quickly becomes overwhelming, and it is hard to know where to begin. Rightfully so: around 6–7 years ago, all we needed to train a model was a set of inputs and their corresponding outputs. Anyone who has built such models knows that getting even those two things right is hard. Things are far more complex now, though, as we need additional task-specific data for every task we want the model to handle.
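To make that "input and output is all you needed" point concrete, here is a minimal, purely illustrative sketch (not from the article): a toy supervised model with a single weight, fit by gradient descent on nothing but (input, output) pairs.

```python
# Toy illustration of classic supervised learning: the only training
# signal is a list of (input, output) pairs. The model y = w * x and
# the target relation y = 2x are hypothetical choices for this sketch.

def train(pairs, lr=0.01, epochs=200):
    w = 0.0  # single learnable weight
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x
            grad = 2 * (pred - y) * x  # gradient of squared error (w*x - y)^2
            w -= lr * grad             # plain gradient-descent update
    return w

pairs = [(1, 2), (2, 4), (3, 6)]  # inputs and desired outputs; nothing else
w = train(pairs)
print(round(w, 3))  # converges near 2.0
```

The point of the sketch is what is *absent*: no task-specific annotations, no reward signal, no intermediate reasoning traces, just pairs. That is the simplicity the paragraph above is contrasting with today's training pipelines.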
As an enthusiast, I want to dig deeper into these “reasoning” models and learn what they are and how they work. As part of this process, I also plan to share everything I learn as a series of articles, so I get a chance to discuss these topics with like-minded folks. Please keep commenting and sharing your thoughts as you read.
Without delay, I’d like to dive into today’s topic — the…