No Libraries, No Shortcuts: LLM from Scratch with PyTorch
Last Updated on October 4, 2025 by Editorial Team
Author(s): Ashish Abraham
Originally published on Towards AI.
The no-BS guide to building, training, and fine-tuning a Transformer architecture from scratch
OpenAI has recently launched its highly anticipated open-source GPT-OSS models, a moment that invites a minute of reflection on just how far we've come. Years back, even before ChatGPT, I remember reading an article about a GPT model, probably GPT-2, that wrote its own essays and poems; at the time, these were just experiments. Fast forward to today, and these models have become an integral part of my daily life. It all started with the landmark "Attention Is All You Need" paper, published in 2017 by researchers at Google. That paper proposed the Transformer architecture, which soon powered the very first GPT (Generative Pretrained Transformer), GPT-1, in 2018.

The article discusses the evolution of Large Language Models (LLMs) from the introduction of the Transformer architecture to the recent developments in open-source GPT models. It provides a comprehensive breakdown of how to build and train an LLM using PyTorch, covering the core components of the Transformer architecture, including tokenization, attention mechanisms, and training strategies. The author emphasizes the significance of fine-tuning LLMs for specific tasks and the impact of these technologies on modern AI applications.
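To make the attention mechanism mentioned above concrete, here is a minimal sketch of scaled dot-product attention in PyTorch, the building block at the heart of the Transformer. The function name, tensor shapes, and the tiny example inputs are illustrative assumptions for this sketch, not code from the full article.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Minimal scaled dot-product attention sketch.

    q, k, v: tensors of shape (batch, seq_len, d_k) — shapes are an
    assumption for this example, not taken from the article's own code.
    """
    d_k = q.size(-1)
    # Similarity of every query with every key, scaled by sqrt(d_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, seq, seq)
    if mask is not None:
        # Block masked positions (e.g. future tokens in a causal mask)
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # each row sums to 1
    return weights @ v  # weighted mix of values, (batch, seq, d_k)

# Tiny self-attention usage example with random inputs
q = torch.randn(1, 4, 8)
out = scaled_dot_product_attention(q, q, q)
print(out.shape)  # torch.Size([1, 4, 8])
```

The division by the square root of the key dimension keeps the dot products from growing with `d_k`, which would otherwise push the softmax into regions with vanishing gradients; this is the same scaling the original paper motivates.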
Read the full blog for free on Medium.
Published via Towards AI
Note: Article content contains the views of the contributing authors and not Towards AI.