What Are AI Agents — and How Should We Think About Them?
Last Updated on April 15, 2025 by Editorial Team
Author(s): Tomer Simon, PhD
Originally published on Towards AI.
This article is adapted from a thread I originally shared on X, in response to the growing conversation around AI agents — also known as Agentic AI.
Their emergence is reshaping how we think about artificial intelligence, and I wanted to offer a short explainer on what they are, what they do, and how to think about them moving forward.
Let’s begin with a simple but clear diagram that outlines the three waves of AI, which I find especially useful.
I accept this framing — and more importantly, what it represents. For the past few years, I’ve been ending many of my presentations with the quote:
“You can’t stop the waves, but you can learn to surf.” — Jon Kabat-Zinn
This quote now connects directly to the growing need for AI literacy, but it goes deeper than that.
Throughout history, humans have tried to understand reality by investigating it — whether as philosophers or scientific researchers.
These waves, in my view, describe the changes in our abilities to engage with reality:
- The first wave, traditional AI (GOFAI), allowed us to analyze reality and make predictions about it.
- The second wave, generative AI, enabled us for the first time to create reality and interact with it.
- The third wave, agentic AI, now enables us to manage and operate our reality.
No matter how advanced language models become, they remain closed off inside themselves, regardless of how much data they were trained on.
To manage and operate reality, AI — whether traditional or generative — requires new capabilities that didn’t exist before.
First and foremost, it must be able to perform actions in the “real” world — our world.
That could mean interacting with other systems, with the entire organizational domain, or the broader internet.
The recent explosion of interest in MCP (Model Context Protocol) is about exactly that:
How can we allow these models to connect, integrate, and act within different systems and domains?
In the early stages, agents bundled the tools they needed within themselves.
To avoid re-developing the same tools again and again, and to make them easier for language models to discover and use, Anthropic developed MCP.
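To make this concrete, here is a minimal sketch of an MCP tool server, based on the quickstart pattern from Anthropic's official Python SDK (the `mcp` package). The server name and the ticket-lookup tool are illustrative placeholders, not part of any real system:

```python
# A minimal MCP tool server using the official Python SDK's FastMCP helper.
# The server name and the tool are illustrative; a real server would wrap
# an actual internal system.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def get_ticket_status(ticket_id: str) -> str:
    """Look up the status of a support ticket (stubbed for illustration)."""
    return f"Ticket {ticket_id}: open"

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable client can discover and call
    # the tool without bespoke integration code.
    mcp.run()
```

Once a tool is exposed this way, any MCP-capable client can discover and call it, which is exactly the reuse the protocol is meant to enable.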
The ability to reach out and act beyond the language model is incredibly powerful — but it’s not enough.
And this brings me back to the term AI agent, and why I don’t fully accept it.
The term agent comes from the concept of agency — which is about the capacity to define goals and the path to achieve them. It also connects directly to the idea of sovereignty — that we are in control of our own fate.
So, how do we provide this kind of agency to AI?
We need it to have autonomy — to define its own goals and take action to achieve them.
That’s where we must ask ourselves – what level of human oversight do we want or need?
In order to make decisions in the context of tasks assigned to it, the AI must be able to reason.
And that connects directly to a major recent trend: the emergence of reasoning language models.
In some cases, we may even need to run two different models that collaborate — one for planning and reasoning, and the other for execution — to achieve the intended goal.
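As a hedged sketch of that planner/executor split: `call_model` below is a hypothetical stand-in for whatever LLM client you use, and the model names are placeholders, not real products.

```python
def call_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError

def run_task(task: str) -> list[str]:
    # 1. A reasoning model breaks the task into concrete steps.
    plan = call_model(
        model="reasoning-model",
        prompt=f"Break this task into short, concrete steps:\n{task}",
    )
    steps = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. A separate execution model carries out each step in turn.
    return [call_model(model="execution-model", prompt=step) for step in steps]
```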
And this is exactly where the concept of AI alignment becomes critical.
Now that AI systems can have agency and autonomy, we must ensure that their goals — and the methods they choose to achieve them — are aligned with human values, intentions, and safety requirements.
This is what the field of AI alignment is all about:
Making sure that powerful AI systems do what we want them to do, even as they become more independent and capable.
Whether in organizational settings, complex systems, or broader societal contexts, AI alignment is no longer just a theoretical concern.
Three Levels of Human Involvement in AI Automation
- Human in the loop — the most common way we work with generative AI today: we open one of our favorite chatbots, ask for something, get a response — but all decisions and actions remain with us.
- Human on the loop — the AI has some level of autonomy and recommends what action should be taken. The human has the ability to override or change that decision.
- Human off the loop — the AI both decides and acts independently, updating the human or organization afterward.
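These three levels can be made explicit in code rather than left implicit in the design. A minimal sketch, with illustrative names:

```python
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "in"    # human makes every decision and takes every action
    ON_THE_LOOP = "on"    # AI recommends; human approves or overrides
    OFF_THE_LOOP = "off"  # AI decides and acts; human is informed afterward

def may_act(level: Oversight, human_approved: bool) -> bool:
    """Gate an AI-proposed action according to the chosen oversight level."""
    if level is Oversight.OFF_THE_LOOP:
        return True  # act autonomously, report afterward
    # In and on the loop both require explicit human approval before acting.
    return human_approved
```

Treating the oversight level as an explicit parameter makes it a reviewable design decision rather than an accident of implementation.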
About three years ago, the U.S. military published its doctrine on force deployment in the AI era — before anyone really knew where AI was headed.
The doctrine is based on the following process:
Sense → Make Sense → Decide → Act
And I recommend adopting this framework for both organizations and AI systems you’re planning or building.
This process also helps determine what level of human oversight or participation is appropriate.
To illustrate, let’s unpack the Iron Dome system:
- The radar systems detect incoming threats (Sense)
- The system core analyzes whether a rocket is likely to land in an open field or a populated area (Make Sense)
- The system decides which threats to intercept and where alerts should be activated (Decide)
- It then acts — launching interceptors toward rockets headed for populated areas (Act)
Now we can ask ourselves: where do we want — or need — human involvement?
Just in the final action? Or also in the decision itself?
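To see how the doctrine and the oversight question fit together, here is a toy sketch of the Sense → Make Sense → Decide → Act loop. All of the domain logic is a placeholder; the point is that the human gate is an explicit, movable design choice:

```python
from typing import Callable

def sense() -> list[dict]:
    """Collect raw observations (stubbed)."""
    return [{"id": 1, "trajectory": "populated-area"},
            {"id": 2, "trajectory": "open-field"}]

def make_sense(observations: list[dict]) -> list[dict]:
    """Keep only observations that threaten a populated area."""
    return [o for o in observations if o["trajectory"] == "populated-area"]

def decide(threats: list[dict]) -> list[dict]:
    """Choose which threats to act on (here: all of them)."""
    return threats

def act(decisions: list[dict], human_approves: Callable[[dict], bool]) -> None:
    """Carry out each decision, pausing at the human gate before acting."""
    for d in decisions:
        if human_approves(d):  # move or drop this gate to shift oversight
            print(f"intercepting threat {d['id']}")

# Human on the loop: the lambda stands in for a real approval prompt.
act(decide(make_sense(sense())), human_approves=lambda d: True)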
A Contemporary Example: Writing Code with Cline
Today, writing code also demonstrates this pattern.
One of the leading extensions for developers is called Cline (an AI-powered coding assistant), which operates in two stages – planning and execution.
When you ask it to write some code, it first analyzes your request, then plans and proposes a detailed course of action.
That’s automation of the first three steps – Sense → Make Sense → Decide.
At the end of that stage, you receive a full proposed work plan, which you can accept or modify. The action (writing the code) is still yours.
This is another clear example of human on the loop, although the initial trigger came from the human.
Back to AI Agents
As mentioned, we now have new kinds of reasoning-focused models.
And just recently, a new term joined the AI vocabulary: inference-time scaling.
It may sound complex, but it’s actually very simple:
How much time do you allow your model to “think” before it gives you an answer?
It turns out that for certain tasks, when you give the model more time to think, the results are better and more accurate.
It’s similar to a manager asking you a question — some questions you can answer instantly, but others require you to pause, think, and reflect.
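One simple and widely used form of inference-time scaling is self-consistency: sample several answers to the same question and take a majority vote. A sketch, with `call_model` again a hypothetical stand-in for your LLM client:

```python
from collections import Counter

def call_model(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError

def answer_with_voting(question: str, samples: int = 5) -> str:
    # More samples means more compute ("thinking time") per question;
    # for many reasoning tasks this buys accuracy at the cost of latency.
    answers = [call_model(question) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]
```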
So when designing your AI agents, consider Daniel Kahneman’s dual-process theory:
- System 1 — fast, emotional, intuitive thinking
- System 2 — slow, deliberate, rational thinking
In Conclusion
It’s critical to carefully consider which tasks actually require an AI agent — and which do not.
It is currently far too easy, and too tempting, to develop agents for tasks that don't need them at all.
I recommend focusing on long-running, dynamic processes as the most appropriate use cases for AI agents — not short, one-off tasks.
For short, one-off tasks, we probably don't need autonomy, agency, reasoning capabilities, and the like.
We are moving quickly toward multi-agent systems that will operate inside our organizations — with and alongside human employees.
That means agents won’t just be working through APIs or other systems. They’ll be working with us.
They may ping us on Teams, send emails, or even call us — and this is already happening.
We’re quickly approaching a world where AI agents won’t just operate our systems — they’ll collaborate with us, work alongside us, and maybe even assign us tasks.
Understanding their nature, designing them with care, and aligning their goals with ours isn’t optional.
It’s the next step in how we work.