LLM & AI Agent Applications with LangChain and LangGraph — Part 12: Reasoning, ReAct, and Agents

Last Updated on January 2, 2026 by Editorial Team

Author(s): Michal Zarnecki

Originally published on Towards AI.

In this chapter we will zoom in on “reasoning” in language models.

My goal is that, after this article, the following points are clear:

  • which models actually plan and infer better,
  • how that differs from the ReAct approach in LangChain,
  • and what it really means when someone says a model “has built-in tools”.

Two big families of models

Let’s start with the models themselves. In practice, the market currently splits into two major families.

1) Reasoning-optimized models

These are models designed to perform internal thinking steps before they output an answer. We don’t see that hidden trace, but it’s happening: the model plans, decomposes the problem into stages, and chooses a strategy.

They’re usually a bit more expensive and slightly slower — but when the task is complex, they hit the target more reliably.

Examples often mentioned in this category include OpenAI GPT-5, DeepSeek-R1, Claude 5 Sonnet in “thinking” mode, or Gemini 2.5 Pro used in long-context and hard-reasoning scenarios.

2) General-purpose “chat” models

These are great at conversation, text transformations, and information extraction. They’re fast and cheaper. But when you give them tasks that require many steps, they can get lost — unless you provide structure.

Here you’ll often see models like GPT-4o / 4o-mini, Claude 3.5 Sonnet / Haiku, Gemini 1.5 Flash, and in open source: Llama, starting from version 3.

And yes — the line between these groups is getting blurry. “Chat” models reason better with every generation, and weaker ones can be helped a lot if we impose an external structure of work (for example with a little orchestration in LangChain).

And this is exactly where ReAct comes in.

ReAct: not a model type, but a control pattern

ReAct is not a “kind of model”. It’s a steering pattern.

Think of a supervisor guiding an executor step by step:

First think → then take an action → then describe what you observed → and think again.

ReAct builds a loop:

Think → Act → Observe

Inside that loop, the model can call tools: a search engine, calculator, database, code execution, APIs.
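The loop can be sketched in plain Python. No LLM is involved here: `fake_model` is a hard-coded stub standing in for a real chat model, and `calculator` is a hypothetical example tool.

```python
# Minimal Think -> Act -> Observe loop. fake_model is a stub standing in
# for a real chat model; calculator is a hypothetical example tool.

def calculator(expression: str) -> str:
    """Tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model(history: list) -> dict:
    """Stub for the model: decides the next step from the history so far."""
    if not any(line.startswith("Observe:") for line in history):
        return {"thought": "I need to compute 21 * 2.",
                "action": "calculator", "action_input": "21 * 2"}
    last_observation = history[-1].split(": ", 1)[1]
    return {"thought": "I have the result.", "final_answer": last_observation}

def react_loop(question: str, max_iterations: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_iterations):
        step = fake_model(history)                                  # Think
        history.append(f"Think: {step['thought']}")
        if "final_answer" in step:
            return step["final_answer"]
        observation = TOOLS[step["action"]](step["action_input"])   # Act
        history.append(f"Observe: {observation}")                   # Observe
    return "Stopped: iteration limit reached."

print(react_loop("What is 21 * 2?"))  # -> 42
```

A real agent replaces `fake_model` with an LLM call, but the control flow — and the iteration cap that stops a runaway loop — stays exactly the same.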

And this is the key benefit: even a fast, cheaper model starts working systematically.

Here’s the important distinction:

  • Reasoning inside the model happens internally, inside the network, and is invisible to us.
  • ReAct is external orchestration that we build in the application.

The best results often come from combining both worlds: a stronger reasoning model running inside a ReAct framework, with access to tools and memory.

And one more practical note: it’s not always worth picking the biggest, most powerful model — because a lot of tasks can be done perfectly well using cheaper, mid-tier models, as long as you give them the right structure.

“Built-in tools” — what does that actually mean?

Now, what about “built-in tools”?

In practice, this usually means built-in tool-use capability, not that the tools literally exist inside the model.

Many models understand functions described with a JSON schema and can decide when and how to call them, filling arguments correctly. But the calculator, the internet, SQL, or code execution still live on the application side.

The model doesn’t have a browser inside it. It doesn’t have access to your database by default. It simply says:
“Now use tool X with parameters Y”, and we execute it.
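Concretely, the model only ever sees a description like the one below (OpenAI-style function-calling JSON schema; the tool name and parameters are illustrative, not a real API). When it decides the tool is needed, it emits a call such as `{"name": "get_exchange_rate", "arguments": {"base": "EUR", "quote": "USD"}}`, and the application performs the actual lookup.

```python
# How a tool is described to the model: a JSON schema, not an implementation.
# The name and parameters below are illustrative.

get_exchange_rate_schema = {
    "type": "function",
    "function": {
        "name": "get_exchange_rate",
        "description": "Return the current exchange rate for a currency pair.",
        "parameters": {
            "type": "object",
            "properties": {
                "base": {"type": "string", "description": "Base currency, e.g. EUR"},
                "quote": {"type": "string", "description": "Quote currency, e.g. USD"},
            },
            "required": ["base", "quote"],
        },
    },
}
```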

Two quick scenes to visualize the difference

Scene 1:
“Calculate the project cost, identify the biggest risk, and propose a mitigation plan.”
A good reasoning model can handle this solo and produce a coherent plan.

Scene 2:
“First find three reliable sources, then compute a few variants with a calculator, and finally build a comparison table.”
Here ReAct can guide a fast model step by step — calling the right tools and collecting observations.

So in summary:

  • Reasoning in the model = internal thinking inside the network
  • ReAct = external director of steps + tool usage
  • “Built-in tools” = the built-in competence to call functions (not the tools themselves)

OK — so what is a ReAct Agent?

We already know what “reasoning” is, but what exactly is a ReAct Agent?

In LangChain, an agent is a special mechanism that lets the model decide on its own which steps to take and which tools to use in order to answer a question.

One of the most commonly used agent types in LangChain is the ReAct Agent.

The name “ReAct” comes from two words: Reasoning and Acting.

How does it work?

  • First, the model reasons — it analyzes the user’s question and chooses a strategy.
  • Then it acts — selects the right tool, runs it, analyzes the result, and decides what to do next.
  • The process repeats until the model decides it has enough information to produce the final answer.

Thanks to this, the agent is no longer just a “text generator” — it becomes an active problem solver.

ReAct Agent gives the model the ability to think iteratively and use tools as if it were solving the task step by step.

A few simple examples

  • If the user asks for the current EUR/USD exchange rate, the model understands it doesn’t know the rate internally, so it picks a tool like get_exchange_rate.
  • If the user asks to multiply two numbers, the model understands it should use a calculator instead of doing it “in its head”.
  • If the user asks for the current contact details of a company, the model can use a knowledge base or a search tool.

This independent planning is what separates an agent from a regular chain.
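The examples above can be made concrete with a toy stub in plain Python: `pick_tool` stands in for the model’s decision step, and `get_exchange_rate` and `multiply` are hypothetical local functions, not live services.

```python
# Toy illustration of tool selection. pick_tool stands in for the model's
# reasoning step; the two tools are hypothetical local stubs.

def get_exchange_rate(pair: str) -> str:
    """Stubbed rate lookup (a real tool would call an external API)."""
    return f"{pair}: 1.0876"

def multiply(a: float, b: float) -> float:
    """Calculator tool."""
    return a * b

def pick_tool(question: str) -> str:
    """Map a question to a tool name, as the agent's model would."""
    q = question.lower()
    if "exchange rate" in q:
        return "get_exchange_rate"
    if "multiply" in q:
        return "multiply"
    return "direct_answer"  # no tool needed

print(pick_tool("What is the current EUR/USD exchange rate?"))  # get_exchange_rate
print(pick_tool("Multiply 12 by 7"))                            # multiply
```

In a real agent, the keyword matching is replaced by the LLM reading the tool descriptions and choosing for itself; the point is only that the choice happens before any tool runs.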

The agent workflow in three stages

  1. System instructions — tell the agent which tools exist and how it should behave.
  2. Reasoning loop — the model iterates: “I have this question; to answer it I need tool X, then tool Y…”
  3. Final Answer — once it has the needed data, it returns to the user with the final response.

In the agent logs we can often see the sequence of decisions and actions very clearly.

In the notebook, I’ll demonstrate how to create a simple ReAct Agent, how to add tools to it, and how the agent decides on its own what to do to answer our questions.

Let’s move to the code.

Install libraries

!pip install -q langchain langchain-classic langchain-openai python-dotenv

Import libraries and prepare the model API client

from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
load_dotenv()

llm = ChatOpenAI(model="gpt-4o", temperature=0)

How does the model use tools?

Here is an example of how to implement an interaction between a model and a tool using regular expressions, asking the model to respond with a specific syntax when a function call is needed.

import re

# tool function
def greet_user(name: str) -> str:
    """A function that greets the user by name."""
    return f"Hello, {name}! Nice to meet you!"

prompt = """
Answer the user's question.
If the user introduces themselves, use the format: GREET[name] and do not use the user's name again.

Question: Hi, my name is Michael and I'd like to know which came first - the egg or the nest?
"""

response = llm.invoke(prompt)
response_text = response.content

pattern = r'GREET\[(\w+)\]'
match = re.search(pattern, response_text)

# If the pattern is found, call the tool function
if match:
    name = match.group(1)
    print("GREET pattern detected, calling function-tool...")
    result = greet_user(name)
    print(f"Function result: {result}")
    response_text = response_text.replace(match.group(0), result)
else:
    print("GREET pattern not found in response.")

print("LLM response:")
print(response_text)

output:

GREET pattern detected, calling function-tool...
Function result: Hello, Michał! Nice to meet you!
LLM response:
Hello, Michał! Nice to meet you! The egg came first. Eggs have been around for hundreds of millions of years, laid by species that existed long before birds and their nests evolved.

Define tools to be used by the ReAct agent

from langchain_classic.tools import tool

@tool
def add_numbers(**kwargs: int) -> int:
    """Add numbers together."""
    return sum(kwargs['kwargs'].values())

@tool
def company_info(name: str) -> str:
    """Return info about a company."""
    data = {
        "OpenAI": "An artificial intelligence company, creator of ChatGPT. 10 years old.",
        "LangChain": "A framework for building AI applications with LLM. 4 years old.",
        "Tesla": "An electric car manufacturer. 23 years old."
    }
    return data.get(name, "I have no information about this company.")

tools = [add_numbers, company_info]

Set up and run the ReAct agent

from langchain_classic.agents import initialize_agent, AgentType

agent = initialize_agent(
    llm=llm,
    tools=tools,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=6
)

agent.run("Tell me what LangChain, Tesla, and OpenAI do. Then tell me the total number of years these three companies have been in business.")

output:

/tmp/ipykernel_18659/3571289280.py:3: LangChainDeprecationWarning: LangChain agents will continue to be supported, but it is recommended for new use cases to be built with LangGraph. LangGraph offers a more flexible and full-featured framework for building agents, including support for tool-calling, persistence of state, and human-in-the-loop workflows. For details, refer to the [LangGraph documentation](https://langchain-ai.github.io/langgraph/) as well as guides for [Migrating from AgentExecutor](https://python.langchain.com/docs/how_to/migrate_agent/) and LangGraph's [Pre-built ReAct agent](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/).
agent = initialize_agent(
/tmp/ipykernel_18659/3571289280.py:11: LangChainDeprecationWarning: The method `Chain.run` was deprecated in langchain-classic 0.1.0 and will be removed in 1.0. Use `invoke` instead.
agent.run("Tell me what LangChain, Tesla, and OpenAI do. Then tell me the total number of years these three companies have been in business.")


> Entering new AgentExecutor chain...
Thought: I will first gather information about each company: LangChain, Tesla, and OpenAI. Then, I will calculate the total number of years they have been in business.
Action:
```
{
"action": "company_info",
"action_input": {
"name": "LangChain"
}
}
```
Observation: A framework for building AI applications with LLM. 4 years old.
Thought:To answer your question, I will gather information about Tesla and OpenAI, and then calculate the total number of years these companies have been in business.

Action:
```
{
"action": "company_info",
"action_input": {
"name": "Tesla"
}
}
```

Observation: An electric car manufacturer. 23 years old.
Thought:Action:
```
{
"action": "company_info",
"action_input": {
"name": "OpenAI"
}
}
```

Observation: An artificial intelligence company, creator of ChatGPT. 10 years old.
Thought:Action:
```
{
"action": "Final Answer",
"action_input": "LangChain is a framework for building AI applications with large language models (LLM) and is 4 years old. Tesla is an electric car manufacturer and is 23 years old. OpenAI is an artificial intelligence company, known for creating ChatGPT, and is 10 years old. The total number of years these three companies have been in business is 37 years."
}
```

> Finished chain.
'LangChain is a framework for building AI applications with large language models (LLM) and is 4 years old. Tesla is an electric car manufacturer and is 23 years old. OpenAI is an artificial intelligence company, known for creating ChatGPT, and is 10 years old. The total number of years these three companies have been in business is 37 years.'

That’s all for this chapter dedicated to the ReAct agent.
In the next chapter, we will experiment with multimodal models, which can analyze and generate not only text but also other formats, such as images.

See the next chapter

See the previous chapter

See the full code from this article in the GitHub repository


Published via Towards AI



Note: Article content contains the views of the contributing authors and not Towards AI.