
DeepSeek-R1: Why This Open-Source AI Model Matters
Last Updated on January 27, 2025 by Editorial Team
Author(s): Paul Ferguson, Ph.D.
Originally published on Towards AI.
New AI models emerge almost weekly, making it hard to distinguish significant improvements from minor updates. DeepSeek-R1, however, represents a clear exception.
While its performance matches or slightly exceeds that of leading proprietary models (like OpenAI's o1) across many tasks, there are three reasons why this model is important:
- Cost Efficiency: Trained for only 5–10% of the cost of comparable models
- Open Accessibility: Fully open-source under an MIT licence
- Technical Innovation: Novel methods like self-taught reasoning with task-focussed processing
What makes this model so interesting isn't just its performance but how it's achieved: its open-source framework and over 90% cost reduction put pressure on closed systems to innovate while enabling businesses to deploy advanced AI affordably.
In summary: its combination of efficiency, transparency, and adaptability sets a new benchmark for the industry.
Competitive Performance Without the Premium Price
Independent benchmarks show DeepSeek-R1 performing comparably to closed models across a number of domains, challenging the assumption that open-source AI necessarily lags behind proprietary systems.
While it's marginally behind in general knowledge (e.g., MMLU: 90.8% vs. 91.8%), it has clear advantages in technical tasks, making it particularly well suited to software engineering, financial modelling, and scientific research.
Open-Source Design
Closed models require costly API subscriptions, whereas DeepSeek-R1's MIT licence allows for:
- Full customisation: Modify the model for niche applications (e.g., healthcare or legal contract analysis).
- Local deployment: Smaller variants (1.5B–70B parameters) run on consumer-grade GPUs, avoiding cloud fees; see the sketch after this list. (In a previous article, I discussed the growing importance of Small Language Models, and some of these variants fit neatly into that category.)
- Transparency: Independent audits of model weights to address bias or safety concerns.
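As an illustration of how little is involved in local deployment, here is a minimal inference sketch using Hugging Face transformers. The model id below is the 1.5B distilled variant as listed on the Hugging Face Hub; verify the exact name before running, and note that larger variants need correspondingly more GPU memory:

```python
# Minimal local-inference sketch for a small distilled R1 variant.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # check the Hub for the exact id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain why sparse activation reduces inference cost."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```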
Novel Methods
DeepSeek's cost and efficiency advantages stem from three main areas:
Reinforcement Learning First
- Self-taught reasoning: Learns through trial-and-error problem solving rather than expensive human feedback
- Discovery phase: Explores new strategies (e.g., it will attempt to verify its own answers)
- Alignment phase: Refines outputs for coherence and accuracy (a toy sketch of the loop follows below)
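To make the reinforcement-learning-first idea concrete, here is a toy sketch of the core loop: candidate solutions are sampled, scored with an automatic rule-based reward rather than human labels, and judged against the group average, in the spirit of the GRPO algorithm DeepSeek describes. The functions are placeholders standing in for the real model and reward rules, not DeepSeek's training code:

```python
import random

def generate_answer(prompt: str) -> str:
    """Placeholder for policy sampling; the real pipeline samples a
    chain of thought plus a final answer from the model."""
    return random.choice(["42", "41", "43"])

def rule_based_reward(answer: str, ground_truth: str) -> float:
    """Verifiable reward: no human labeller, just an automatic check
    that the final answer is correct (the real recipe also rewards
    well-formed reasoning output)."""
    return 1.0 if answer == ground_truth else 0.0

# Trial-and-error loop: sample several candidates per question,
# score them automatically, and reinforce the better ones.
question, truth = "What is 6 * 7?", "42"
candidates = [generate_answer(question) for _ in range(8)]
rewards = [rule_based_reward(c, truth) for c in candidates]

# Group-relative signal (the GRPO idea): each candidate is judged
# against the group mean, so no separate value network is needed.
baseline = sum(rewards) / len(rewards)
advantages = [r - baseline for r in rewards]
print(list(zip(candidates, advantages)))  # would feed a policy-gradient update
```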
Predicting Two Steps Ahead
- Training: Forecasts the next two tokens at once
- Inference: Produces answers faster through parallel token prediction (see the sketch below)
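A rough sketch of what two-step prediction can look like: a shared trunk feeds two output heads, one trained against the next token and one against the token after it, so every forward pass yields a denser training signal. This is a simplified stand-in for DeepSeek's multi-token-prediction module (names and shapes here are illustrative), not its implementation:

```python
import torch
import torch.nn as nn

class TwoTokenHead(nn.Module):
    """Two output heads on one trunk: one predicts token t+1,
    the other token t+2."""
    def __init__(self, d_model: int, vocab: int):
        super().__init__()
        self.head_t1 = nn.Linear(d_model, vocab)  # next token
        self.head_t2 = nn.Linear(d_model, vocab)  # token after next

    def forward(self, hidden: torch.Tensor):
        return self.head_t1(hidden), self.head_t2(hidden)

# Training signal: cross-entropy against targets shifted by one and by
# two positions, so each position contributes two learning signals.
model = TwoTokenHead(d_model=64, vocab=1000)
hidden = torch.randn(4, 16, 64)            # (batch, seq, d_model) from the trunk
targets = torch.randint(0, 1000, (4, 18))  # token ids with lookahead room
logits1, logits2 = model(hidden)
loss = (
    nn.functional.cross_entropy(logits1.reshape(-1, 1000), targets[:, 1:17].reshape(-1))
    + nn.functional.cross_entropy(logits2.reshape(-1, 1000), targets[:, 2:18].reshape(-1))
)
loss.backward()
```

At inference time, the second head's guesses can be checked against the model's own next step (speculative decoding), which is where the speed-up can come from.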
Sparse, Task-Specialised Processing
- Only 5.5% of parameters (37B of 671B) are activated per token, so most of the network stays idle on any given query (see the sketch below)
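The mechanism behind that figure is a mixture-of-experts layer: a small router scores a pool of expert networks for each token and only the top-k of them run. The toy layer below (16 experts, 2 active, so roughly 12.5% of expert parameters used per token) illustrates the idea at miniature scale; DeepSeek's production architecture is far larger and adds its own routing and load-balancing refinements:

```python
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    """Minimal mixture-of-experts layer: each token is routed to its
    top-k experts, so only a fraction of parameters runs per token."""
    def __init__(self, d_model: int = 64, n_experts: int = 16, k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        scores = self.router(x)                    # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)          # mixing weights for chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):                 # only k of n_experts run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = SparseMoE()
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```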
Cost Savings
DeepSeek's pricing changes what businesses can achieve with limited budgets:
- Free to use via its web app.
- For business use cases, however, access typically goes through API calls rather than the web app
- API access is priced at a fraction of the competition ($0.14 per 1 million input tokens, compared with $7.50 for OpenAI's o1 model)
- For companies with heavy LLM usage, these differences can add up to thousands of dollars a month (see the back-of-envelope arithmetic below)
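A quick back-of-envelope comparison at the input-token rates quoted above (the traffic volume is a made-up example, and output-token pricing, caching discounts, and rate tiers are ignored):

```python
# Monthly input-token cost at the rates quoted above.
MONTHLY_INPUT_TOKENS = 2_000_000_000     # hypothetical high-volume product: 2B tokens/month

deepseek_rate = 0.14 / 1_000_000         # $ per input token
openai_o1_rate = 7.50 / 1_000_000

print(f"DeepSeek-R1: ${MONTHLY_INPUT_TOKENS * deepseek_rate:,.0f}/month")   # $280/month
print(f"OpenAI o1:   ${MONTHLY_INPUT_TOKENS * openai_o1_rate:,.0f}/month")  # $15,000/month
```

At that volume the gap is roughly $14,700 a month on input tokens alone.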
Implications
- Democratisation: Smaller companies can more easily compete with larger businesses.
- Pressure on Closed Models: Companies like OpenAI are under pressure to reduce their prices or increase the transparency of their models.
- Ethical Trade-Offs: Though open weights help with bias mitigation, unregulated customisation risks misuse.
Conclusion
DeepSeek-R1 proves that AI progress does not have to rely on closed systems or unsustainable compute budgets.
For organisations, this means faster experimentation, lower barriers to entry, and control over AI tools: a combination likely to accelerate innovation across different industries.
While not flawless, its open-source model and technical ingenuity set a new standard for what's possible in efficient, accessible AI.
If you'd like to find out more about me, please check out www.paulferguson.me, or connect with me on LinkedIn.