Understanding AI Agentic Patterns
Last Updated on October 18, 2025 by Editorial Team
Author(s): Bhargav __
Originally published on Towards AI.

AI agents sound complex, but the idea is simple: they’re programs that decide their next step. This guide explains the few patterns that show up in real products — without much complexity — and ends with a tiny agent you can run today.
What is an agent?
An agent is software that decides its next step as it goes. It can plan, use tools, check what happened, and choose what to do next — until the job is done. A workflow is different: it follows a fixed path you wrote ahead of time.
A simple way to picture it
Workflow: cooking by a strict recipe.
Agent: a chef who tastes, adjusts, and decides the next step while cooking.
What powers an agent?
- Tools (APIs, search, code exec).
- Memory (what to keep across steps).
- Retrieval (look up facts when needed).
These turn a plain LLM into an “augmented LLM” that can act, not just chat.
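To make "act, not just chat" concrete, here is a minimal sketch of exposing one tool to the model, using the same OpenAI client as the example at the end of this guide. The web_search function is a hypothetical stub standing in for a real search API:

from openai import OpenAI
import json

client = OpenAI()

# Hypothetical tool: a real implementation would call a search API.
def web_search(query: str) -> str:
    return f"(stub) top results for: {query}"

# Describe the tool so the model knows when and how to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for up-to-date facts.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What changed in Python 3.13?"}],
    tools=tools,
)

# If the model decided to act, it returns a tool call instead of plain text.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    print(web_search(**args))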
Workflow patterns vs Agent patterns
Workflows are pre-planned; Agents are adaptive.
The Workflow Family (deterministic & reliable)
Workflows follow a path you design up front. They’re fast, cheap, and easy to debug.
1. Chain (a.k.a. prompt chaining):
What: Break the job into steps you run in order:
clean → retrieve → draft → polish.
Use when: Steps are stable; you want predictable outputs and easy logs.
Scenario: Turn messy meeting notes into a clean email with action items.
Workflow: Clean the transcript → Pull key decisions → Extract action items → Draft the email → Polish (tone, length).
Why this fits: The steps are predictable and always in the same order.
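Here is a minimal sketch of that chain, reusing the client setup from the final section of this guide; the step prompts are illustrative, not prescriptive:

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # One model call per step keeps each stage small and easy to log.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def notes_to_email(raw_notes: str) -> str:
    cleaned = ask(f"Clean up this meeting transcript:\n{raw_notes}")
    decisions = ask(f"List the key decisions:\n{cleaned}")
    actions = ask(f"Extract action items as bullets:\n{cleaned}")
    draft = ask(f"Draft a short email from these decisions and actions:\n{decisions}\n{actions}")
    return ask(f"Polish this email for tone and length:\n{draft}")

Because each step's output is inspectable, a bad email is easy to trace back to the step that produced it.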
2. Router:
What: Classify the input, then choose the right path/model/tool.
Use when: Inputs vary a lot and need different handling or model sizes.

Scenario: One inbox handles coding questions and writing requests.
Workflow: Classify message (code/prose/other) → Send to the matching handler → Fallback to safe default if low confidence.
Why this fits: Different inputs need different prompts/tools and a safe fallback.
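A minimal routing sketch, assuming three hypothetical handler prompts; a production router would also check the classifier's confidence before committing to a path:

from openai import OpenAI

client = OpenAI()

# Hypothetical handler prompts; "other" doubles as the safe default.
HANDLERS = {
    "code": "You are a senior engineer. Answer this coding question:\n",
    "prose": "You are an editor. Help with this writing request:\n",
    "other": "Answer helpfully and briefly:\n",
}

def route(message: str) -> str:
    # Step 1: a cheap classification call.
    label = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Classify as exactly one word: code, prose, or other.\n{message}",
        }],
    ).choices[0].message.content.strip().lower()
    # Step 2: unrecognized labels fall back to the safe default.
    prompt = HANDLERS.get(label, HANDLERS["other"])
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt + message}],
    )
    return reply.choices[0].message.content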
3. Fan-out / Fan-in (parallel):
What: Do work in parallel to gain speed (split the task) or confidence.
Use when: Subtasks are independent.

Scenario: Summarizing a 100-page report quickly.
Workflow: Split into sections → Summarize in parallel → Merge summaries → Quick consistency edit.
Why this fits: Independent subtasks let you cut latency and scale cleanly.
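A minimal fan-out/fan-in sketch using a thread pool, since each summarization call is independent I/O; splitting the report into sections is assumed to have happened upstream:

from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()

def summarize(section: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize in 3 sentences:\n{section}"}],
    )
    return response.choices[0].message.content

def summarize_report(sections: list[str]) -> str:
    # Fan-out: summarize independent sections concurrently.
    with ThreadPoolExecutor(max_workers=8) as pool:
        partials = list(pool.map(summarize, sections))
    # Fan-in: merge, then do one consistency pass.
    merged = "\n\n".join(partials)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Merge into one consistent summary:\n{merged}"}],
    )
    return response.choices[0].message.content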
4. Orchestrator → Workers:
What: A coordinator plans subtasks and delegates to workers/tools, then merges results.
Use when: You can’t list all subtasks upfront; roles are clear (research → write → fact-check).

Scenario: Build a weekly market brief.
Workflow: Orchestrator plans topics → Workers fetch/search/summarize → Orchestrator synthesizes → Fill gaps if needed.
Why this fits: Subtasks emerge as you learn; delegation keeps the flow organized.
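A minimal orchestrator sketch; here the workers are plain model calls, where a real system would give them search or fetch tools:

from openai import OpenAI

client = OpenAI()

def llm(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def weekly_brief(theme: str) -> str:
    # The orchestrator plans subtasks it could not list up front.
    plan = llm(f"List 3 topics for a weekly market brief on {theme}, one per line.")
    topics = [t.strip() for t in plan.splitlines() if t.strip()]
    # Workers each take one topic; real workers would also fetch live data.
    sections = [llm(f"Write a 3-sentence update on: {topic}") for topic in topics]
    # The orchestrator synthesizes the workers' output.
    return llm("Combine into one coherent brief:\n" + "\n\n".join(sections))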
5. Draft → Evaluate → Revise (Evaluator loop):
What: A writer makes output; an evaluator checks against rules; iterate until it passes.
Use when: You can write explicit pass/fail criteria (length, tone, citations).

Scenario: Write a LinkedIn post that follows strict rules.
Workflow: Draft post → Evaluate against checklist (≤120 words, CTA, no jargon) → Revise with feedback → Stop on pass or retry cap.
Why this fits: Clear pass/fail criteria make iterative improvement reliable. We build a runnable version of this exact pattern at the end of this guide.
The Agent Family (adaptive & tool-using)
Agents are typically implemented as an LLM performing actions (via tool-calling) based on environmental feedback in a loop. They decide the next step as they go. They handle messy, branching tasks — but need guardrails.

1. Tool-using loop (ReAct-style):
What: Plan → Act (tool) → Observe → Decide. The model plans a step, calls a tool, observes the result, and chooses what to do next.
Use when: Steps depend on intermediate results; heavy tool use; exploratory tasks.
Guardrails: max_steps, timeout, allowed_tools, and human-in-the-loop on low confidence.
Scenario: Adaptive gathering and writing based on live feeds.
Agent: Plan sources to query → Call APIs/search → Rank → Write blurbs → If rate-limited or low confidence, switch sources or ask a human.
Why this fits: Next steps depend on live data and multi-turn tool use.
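A minimal sketch of the loop with two of the guardrails above (max_steps and an allowed-tools registry); search_feed is a hypothetical stub:

import json
from openai import OpenAI

client = OpenAI()

# Allow-list: the agent may only call tools registered here.
def search_feed(query: str) -> str:
    return f"(stub) latest headlines for: {query}"

TOOLS = {"search_feed": search_feed}
TOOL_SCHEMAS = [{
    "type": "function",
    "function": {
        "name": "search_feed",
        "description": "Fetch the latest headlines for a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def react_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # guardrail: hard cap on loop iterations
        response = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOL_SCHEMAS,
        )
        message = response.choices[0].message
        if not message.tool_calls:  # the model decided it is done
            return message.content
        messages.append(message)  # keep the plan step in context
        for call in message.tool_calls:  # act, then observe
            if call.function.name not in TOOLS:  # allow-list guard
                result = "Tool not allowed."
            else:
                result = TOOLS[call.function.name](**json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    return "Stopped at max_steps; escalate to a human."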
2. Graph-Based Agent (Explicit State Machine):
What: Same loop, but expressed as a graph: nodes = skills/checks, edges = transitions. So instead of running in a vague loop like “think → act → check → think again,” it follows a clear path of states, which you can visualize, debug, and resume.
Why: Better observability, retries, and the ability to resume from a specific node, compared with a free-form loop.
Guardrails:
Deterministic edges where possible — Avoid random jumps; each outcome must map to a known next node.
Typed state — Pass a consistent data object between nodes.
Dead-end detection — If there’s no valid next node, stop cleanly and log an error.
Scenario: Refund request handling with policy checks.
Agent: Intake → Verify order → Policy check → (Approve / Ask for info / Deny) → Notify; retry on API timeouts, human check on low confidence.
Why this fits: You need deterministic transitions, observability, and safe retries.
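A minimal plain-Python sketch of the refund graph (frameworks like LangGraph formalize this pattern); the order fields and 30-day policy are hypothetical:

# Nodes are functions over a shared state dict (typed by convention);
# each returns the label of the next node — a deterministic edge.
def intake(state: dict) -> str:
    state["order_id"] = state["request"].get("order_id")
    return "verify" if state["order_id"] else "ask_info"

def verify(state: dict) -> str:
    return "policy" if state["order_id"] in state["orders_db"] else "deny"

def policy(state: dict) -> str:
    return "approve" if state["orders_db"][state["order_id"]]["days"] <= 30 else "deny"

NODES = {"intake": intake, "verify": verify, "policy": policy}
TERMINAL = {"approve", "deny", "ask_info"}  # outcomes that end the run

def run_graph(state: dict, start: str = "intake") -> str:
    node = start
    while node not in TERMINAL:
        if node not in NODES:  # dead-end detection: stop cleanly
            raise RuntimeError(f"No node named {node!r}")
        node = NODES[node](state)
    return node

state = {"request": {"order_id": "A1"}, "orders_db": {"A1": {"days": 12}}}
print(run_graph(state))  # -> approve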
3. Team Agent (roles with hand-offs):
A Team Agent is an agent system where different roles handle different parts of the task, just like a small team of specialists. Each agent (or sub-agent) has a defined job, takes an input, and produces a clear output for the next role.
Why: specialization, clarity, parallelism, and scalability.
Guardrails:
Strict interfaces — Each role receives a defined input and must output a structured result (e.g., JSON or markdown section).
Shared memory — All roles can access the same source data or conversation context.
Stop policy — Limit hand-offs to avoid infinite review loops; define clear “done” criteria.
Scenario: Creating a polished weekly tech newsletter.
Team Agent: multiple roles with hand-offs
Planner: Decides topics and assigns sections.
Summarizer: Writes short blurbs for each topic.
Reviewer: Checks facts, tone, and links.
Editor: Refines flow and style.
Publisher: Formats and sends the final newsletter.
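A minimal sketch of the hand-off chain, with each role reduced to a single prompt; the role prompts are illustrative, and a real system would enforce structured outputs (e.g., JSON) at each interface:

from openai import OpenAI

client = OpenAI()

def run_role(role_prompt: str, payload: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"{role_prompt}\n\n{payload}"}],
    )
    return response.choices[0].message.content

# Strict interface: each role consumes the previous role's output.
ROLES = [
    "Planner: list 3 newsletter topics with a one-line angle for each.",
    "Summarizer: write a short blurb for each topic below.",
    "Reviewer: check tone and flag any claim that needs a source.",
    "Editor: tighten the flow and style of the draft below.",
]

def run_team(source_notes: str) -> str:
    artifact = source_notes
    for role in ROLES:  # stop policy: one bounded pass, no review loops
        artifact = run_role(role, artifact)
    return artifact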
Building Your First AI Agent Using the Evaluator Pattern
Let’s actually build a small agent that can write, review, and improve its own output — automatically.
The Evaluator Pattern is a feedback loop between two roles:
- A Generator that creates an answer
- An Evaluator that checks whether it meets your rules
If the answer doesn’t pass, the agent rewrites it — just like a human revising a paragraph until it’s right.
Step 1 — Define the Generator
Here’s a simple generator that asks the model to write a one-line professional product description for a smartwatch:
from openai import OpenAI

client = OpenAI()

def generate_description():
    prompt = """
    Write a one-line professional product description
    for a smartwatch. Keep it under 25 words.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
Step 2 — Define the Evaluator
We’ll create a second function that reviews what the generator wrote.
def evaluate_description(description):
    prompt = f"""
    Review the following product description:
    "{description}"

    Evaluation Rules:
    1. Is it under 25 words? (yes/no)
    2. Does it mention at least one benefit? (yes/no)
    3. Does it sound professional? (yes/no)

    Answer each rule with yes or no, then give brief feedback for any rule that fails.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
Step 3 — Combine into a Loop
Now we connect both pieces into a loop.
The agent drafts once, then keeps evaluating → improving until the description passes all checks or hits a retry limit.
def evaluator_agent(max_retries=3):
    description = generate_description()
    for attempt in range(max_retries):
        feedback = evaluate_description(description)
        print(f"Attempt {attempt + 1}: {description}")
        print(f"Evaluator Feedback: {feedback}\n")
        # Crude pass check: every rule answered "yes" and none answered "no".
        # (A structured yes/no response would be more robust in practice.)
        if "yes" in feedback.lower() and "no" not in feedback.lower():
            print("Passed evaluation!")
            return description
        # Otherwise, revise the current draft using the evaluator's feedback,
        # keeping the draft in context as the assistant's previous turn.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "assistant", "content": description},
                {"role": "user", "content": f"Improve this description based on feedback: {feedback}"},
            ],
        )
        description = response.choices[0].message.content
    print("Max retries reached. Returning best attempt.")
    return description
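If you save this as a script, a minimal entry point looks like this (the OpenAI client reads OPENAI_API_KEY from your environment):

if __name__ == "__main__":
    final = evaluator_agent()
    print("Final description:", final)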
Running this loop shows the agent improving its drafts step by step, applying feedback each time until it meets every rule and outputs a polished final version.
Summary
We’ve covered a lot — from understanding what an agent really is, to exploring the key workflow and agent patterns that shape modern AI systems, to building our first agent using the Evaluator Pattern.
Together, these form the foundation of how real-world AI agents are built — combining structure, reasoning, and feedback.