AI: What the Heck is “Context Engineering”…
Author(s): Charles Thayer
Originally published on Towards AI.

tl;dr
- Context Engineering ≈ agent tool-use + RAG (+ memory, and more)
- Prompt Engineering is to LLMs as Context Engineering is to Agents
- What this means practically for AI software engineers

Background
This year has seen the rise of “agentic programming”, agents, and agent frameworks. The term has been used so broadly that it has generated a lot of confusion, for both leadership and software engineers, about all the tools that fit under that umbrella.
At the same time, in the zeitgeist, Agents have replaced RAG (retrieval-augmented generation) as the current topic of interest. However, those of us writing AI code understand that the two are related but not mutually exclusive. In fact, I see some RAG systems getting MCP wrappers so that their queries can be run by agents. That practice relates to another term gaining traction, Agentic RAG (aka Agentic Retrieval), which I’ll touch on later.

In light of the two terms Agents and RAG, a new term is emerging: Context Engineering (see Google Trends below). Some would say that with agents, prompting has become more powerful but also demands more attention to context and detail. With agents we’re not just asking questions; we’re specifying tasks and goals, sometimes to the point of writing out entire PRDs (product requirements documents). With this new complexity, prompt engineering is transforming into context engineering, but for software engineers this is about more than just writing the text for prompts. Let me explain:
Context Engineering for Software Engineers
Context Engineering: any system that adds Context (e.g. text) to the prompt for LLMs, including:
RAG: retrieval systems that add context (statically, in code or a workflow), including but not limited to:
- VectorDBs: which search semantically
- DBs / SQL: old school database queries
- GraphDBs: which search for relationships and links (among entities)
- Text search: traditional search systems (often BM25)
Agents: can add context dynamically based on which tools they can access:
- Agent systems call an LLM multiple times, updating the prompt each time, to build up context and generate a response.
- Use Tools (sometimes via MCP) to get more context for the prompt, such as fetching a web-page and putting that text into the prompt.
Memory: chat history may also be added to the prompt
- AI memory systems are improving a lot and today there are more options to choose from than ever.
- In addition to raw history, systems summarize older chats, and seek to recall only the important and relevant parts (for the current prompt, task, or goal).
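To make the definition above concrete, here is a minimal sketch of a function that assembles a prompt from these pieces. All the names (`build_prompt`, `retrieve`, `recall_memory`) are hypothetical stand-ins, not any particular framework’s API:

```python
def build_prompt(question, retrieve, recall_memory):
    """Assemble an LLM prompt from retrieved documents and memory.

    `retrieve` and `recall_memory` are hypothetical callables standing
    in for a RAG query and a memory system, respectively.
    """
    docs = retrieve(question)          # RAG: vector DB, SQL, graph, or text search
    history = recall_memory(question)  # Memory: relevant summarized chat history
    return "\n\n".join([
        "Relevant documents:\n" + "\n".join(docs),
        "Conversation summary:\n" + history,
        "Question: " + question,
    ])

# Usage with stub components:
prompt = build_prompt(
    "What is our refund policy?",
    retrieve=lambda q: ["Refunds are issued within 30 days."],
    recall_memory=lambda q: "User previously asked about shipping times.",
)
```

The point is that “context engineering” lives in code like this: everything except the final `Question:` line is context the system chose to add.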
Here’s a quick diagram of how a single agent works with tools:

I don’t mean to be too harsh or too literal here. Context Engineering might be defined more broadly as “adding the right information at the right time”, or, as Tobi Lutke puts it, “the art of providing all the context for the task to be plausibly solvable by the LLM.”
So, my working definitions for engineers are:
- Context Engineering: any system that adds Context (e.g. text) to the prompt for LLMs.
- Agents (and agentic systems): at a minimum, agents add tool use to AI systems, and they can be very complex. Using tools for retrieval falls under Context Engineering.
- RAG: retrieval systems that add to the prompt. RAG was originally static, written in code (or workflows), but has grown into “Agentic RAG”, where retrieval is dynamic. All of RAG falls under Context Engineering.
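The “agents add tool use” definition can be sketched as a loop. This is a deliberately minimal, hypothetical sketch (no real LLM or agent framework): the fake `llm` callable returns either plain text or a JSON tool request.

```python
import json

def run_agent(llm, tools, task, max_steps=5):
    """Minimal agent loop: call the LLM, execute any requested tool,
    append the result to the context, and repeat.

    `llm` is a hypothetical callable returning either plain text (the
    final answer) or a JSON tool request like
    {"tool": "web_fetch", "args": {"url": "..."}}.
    """
    context = [task]
    for _ in range(max_steps):
        reply = llm("\n".join(context))
        try:
            request = json.loads(reply)
        except ValueError:
            request = None
        if isinstance(request, dict) and request.get("tool") in tools:
            # Tool call: run it and add the output to the growing context
            result = tools[request["tool"]](**request.get("args", {}))
            context.append(f"Tool {request['tool']} returned: {result}")
        else:
            return reply  # plain text: treat it as the final answer
    return context[-1]

# Usage with a scripted fake LLM and one tool:
def fake_llm(prompt):
    if "returned" in prompt:  # the tool output is now in the context
        return "The page says hello."
    return json.dumps({"tool": "web_fetch", "args": {"url": "https://example.com"}})

answer = run_agent(fake_llm, {"web_fetch": lambda url: "hello"},
                   "Summarize https://example.com")
```

Notice that the loop is exactly “call an LLM multiple times, updating the prompt each time”: every tool result becomes new context for the next call.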
Agentic Systems
Agentic Systems let the LLM call tools, with the agency to plan and decide when that should happen, whereas previous RAG systems were hard-coded, static workflows that retrieved specific data at specific times. So now we have the concept of “Agentic RAG”, where the same retrieval tools are made available to agents to call at will. The advent of MCP has greatly motivated the move away from fixed, hard-coded workflows to dynamic agentic systems, which give the end user far more power.
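The shift from static RAG to Agentic RAG can be as small as wrapping an existing retrieval call as a tool the agent may choose to invoke. A hypothetical sketch (the corpus and tool-registry shape are invented for illustration):

```python
def search_docs(query, top_k=3):
    """Pretend retrieval step; a real system would query a vector index."""
    corpus = {
        "refunds": "Refunds are issued within 30 days of purchase.",
        "shipping": "Standard shipping takes 3-5 business days.",
    }
    hits = [text for key, text in corpus.items() if key in query.lower()]
    return hits[:top_k]

# Static RAG: the workflow decides when to retrieve.
static_context = search_docs("What is the refunds policy?")

# Agentic RAG: the same function is registered as a tool (here, or via
# an MCP server), and the agent decides if and when to call it.
TOOLS = {
    "search_docs": {
        "fn": search_docs,
        "description": "Semantic search over the policy corpus.",
    }
}
agentic_context = TOOLS["search_docs"]["fn"]("When do refunds arrive?")
```

Either way the same retrieval code runs; what changes is who decides to run it, which is why Agentic RAG is still RAG, and still Context Engineering.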
Here’s a diagram of how an agent works with RAG and tool use:

One Wrinkle: Context Engineering doesn’t completely overlap with Agents or agentic technology, but it does cover how agents get additional information by using tools (tool-calling, tool-use, MCP). This is because agents often “do stuff” to effect change, so some of their tool use is to take action, not to gather more context. Also, the picture above shows a single agent for simplicity; many agentic systems have multiple agents that interact with each other.
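One way to make that wrinkle concrete is to tag each tool by whether it reads (gathers context) or writes (takes action). The registry below is a hypothetical illustration, not any framework’s real API:

```python
# Hypothetical tool registry: "read" tools do Context Engineering,
# "write" tools act on the world and add nothing to the context.
TOOLS = {
    "fetch_page":   {"kind": "read",  "fn": lambda url: f"<html from {url}>"},
    "query_db":     {"kind": "read",  "fn": lambda sql: [("row", 1)]},
    "send_email":   {"kind": "write", "fn": lambda to, body: "sent"},
    "create_issue": {"kind": "write", "fn": lambda title: "issue #1"},
}

context_tools = [name for name, t in TOOLS.items() if t["kind"] == "read"]
action_tools = [name for name, t in TOOLS.items() if t["kind"] == "write"]
```

Only the `read` half of an agent’s tool belt falls under Context Engineering; the `write` half is agency without context.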
Some of the many pieces of agentic systems are shown in the picture below.

Thanks for reading, questions are great, and please feel free to comment!
Some References
- 2025–01: Earliest post I found: All Software Engineers Will Become Context Engineers by Imran Peerbhai
- 2025–01–15: Context Engineering: Why Feeding AI the Right Context Matters by Sean King (Inspired Nonesense) of https://humancontext.ai/
- 2025–04–19: Post on X by Ankur Goyal. “As models get more powerful,…”
- 2025–06–12: Principles of Context Engineering by Walden Yan
- 2025–06–18: Post on X by Tobi Lutke, Shopify CEO. “I really like the term “context engineering” over prompt engineering.”
- 2025–06–23: The rise of “context engineering” by Harrison Chase of LangChain
Related Video
- Chroma Context Engineering Conference included a number of talks.
- LangChain talking about Context Engineering for Agents.

My Context
I’ve been working on and learning about AI applications for a year and a half. Because I have many friends who haven’t started to work with AI, I find myself discussing these things often.
Note: Content contains the views of the contributing authors and not Towards AI.