
Intro To AI Agents And LangGraph
Author(s): Teja Yerramsetty
Originally published on Towards AI.

If you want to leverage the full power of an LLM, the most effective way is to incorporate it into a software flow. This lets us go beyond simple question answering and make the LLM do more complex things. Broadly speaking, if the path the LLM can take to effect a change is fixed, it's called a Workflow. If we let the LLM decide which path to take and when it's done, it's called an Agent. LangGraph is a Python framework that lets us build both Workflows and Agents effectively.
This article is a guide to the basic building blocks of LangGraph and puts them to use in some common AI Agent patterns. I will stick to the high-level API, core framework elements, and simple examples. It is meant to be an easy entry point for learning about LangGraph and Agents.
Graph Elements
We begin by going over the generic graph elements available in LangGraph.

Nodes are the main functional blocks in the graph and are defined through Python functions.
Edges connect Nodes in the graph and require no definition.
A Conditional Edge is a special type of edge that directs the flow from a source node to one of several possible destination nodes.
State is the current state of the graph. It accumulates all the messages from the user and the nodes, and we can add additional fields, like counters, as well.
The edges carry this state from Node to Node, and the Nodes use and update it.
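Before wiring in an LLM, here is a minimal sketch of these elements working together. It just counts to three; all names are illustrative and nothing here is specific to Agents yet.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class CounterState(TypedDict):
    count: int

def increment(state: CounterState) -> dict:
    # A node is just a function that reads the state and returns updates.
    return {"count": state["count"] + 1}

def should_continue(state: CounterState) -> str:
    # A conditional edge picks the next node based on the state.
    return "increment" if state["count"] < 3 else END

builder = StateGraph(CounterState)
builder.add_node("increment", increment)
builder.add_edge("__start__", "increment")
builder.add_conditional_edges("increment", should_continue,
                              {"increment": "increment", END: END})
graph = builder.compile()
print(graph.invoke({"count": 0}))  # {'count': 3}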

There are a few types of nodes commonly used to make this generic graph into an AI Agent.

The LLM node runs inference on the messages in the state. The LangGraph API allows us to easily add prompts or tool definitions. Tools are just functions that can be called by the LLM: we add a tool decorator and a docstring that describes what the function does and what its arguments are. The framework takes this definition and makes it available to the LLM through appropriate prompting. The Tool Node looks for tool calls in the LLM output, executes the corresponding functions, and adds the results to the state.
We can put these building blocks together into an AI Agent. We will build the Agent to follow this graph.

Example Agent Graph
For our example, let's add an ability the LLM definitely does not have: looking up the current time and date. This is a great demonstration of how even a small number of tools increases the capabilities of an Agent over a standard LLM.
Here is how this agent responds to user queries.
You: What's the time?
Assistant:
-------------
human: What's the time?
ai:
Tool Calls:
- get_current_time: {}
tool: "10:49 PM, June 16, 2025"
ai: The current time is 10:49 PM, June 16, 2025.
How about a more complex question? Below is an example where we ask what fruits are in season. Remember that the LLM can only look up the time and date, so we tell it where we are.
You: what are some fruits that are in season in California
Assistant:
-------------
human: what are some fruits that are in season in California
ai:
Tool Calls:
- get_current_time: {}
tool: "10:50 PM, June 16, 2025"
ai: Based on the current time and location (California), here are some fruits that are in season:
1. **Strawberries**: From March to July, strawberries are at their peak ripeness and availability.
2. **Avocados**: Avocados are a year-round crop in California, but the peak season is from May to October.
3. **Peaches**: Peaches are in season from June to August, with the best varieties coming from the San Joaquin Valley.
4. **Plums**: Plums are available from May to July, with Japanese plums being one of the most popular varieties.
5. **Apricots**: Apricots are in season from April to June, with the peak season being in May.
6. **Nectarines**: Nectarines are similar to peaches and are also in season from June to August.
7. **Grapes**: Grapes are available from September to November, with some varieties coming in as early as July.
8. **Apples**: Apples are in season from October to May, with some varieties being available as early as September.
Please note that the availability of these fruits can vary depending on the specific region within California and the weather conditions during the growing season.
Note that while our agent used a tool to get the current date, the list of fruits comes from the LLM's internal knowledge. This is a great example of how agents combine tool outputs with their existing training to generate answers.
Implementing The Graph In Code
Let's build the AI Agent. Tools give the LLM the ability to take actions, like changing files, querying external data sources, searching the web, or even doing general math and plotting through a Python interpreter.
The code will be in sections, but adds up to a full script. We will use Ollama to run the local LLM. I go through the steps to set up and run local LLMs in my previous article. If you don't have access to a local Ollama instance, you can just as easily use an API like OpenAI by changing one line.
Local LLMs: A Comprehensive Guide (pub.towardsai.net)
Install the dependencies.
pip install langchain langchain-ollama langgraph langchain-core
Import all the required modules.
from typing import TypedDict, Annotated, Dict, Any
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langchain.tools import tool
from langchain_ollama import ChatOllama
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.schema import AIMessage
from langchain_core.messages import ToolMessage
import json
from datetime import datetime
Define the tools. In our example we have one tool that looks up the current date and time. We use the tool decorator from LangChain; it lets us set the tool name and parse the docstring to get the tool definition. The framework needs to know what the function does, what the arguments and their types are, and what the function returns. The LLM outputs tool names as strings, so we also need a dictionary to look up the actual function object by name.
Below is the definition for the get_current_time tool.
@tool("get_current_time", parse_docstring=True)
def get_current_time() -> str:
"""Get the current time and date.
Returns:
str: Current time and date in human readable format (e.g., "2:30 PM, June 15, 2024")
"""
now = datetime.now()
return now.strftime("%I:%M %p, %B %d, %Y")
tools = [get_current_time]
tool_map = {tool.name: tool for tool in tools}
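Before wiring the tool into a graph, we can sanity-check it by invoking it directly. A quick sketch; the output will reflect your own clock:
# Tools created with the @tool decorator can be invoked standalone.
print(get_current_time.invoke({}))   # e.g. "10:49 PM, June 16, 2025"
print(get_current_time.name)         # "get_current_time"
print(get_current_time.description)  # parsed from the docstring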
Next we initialize the state. For this example, it's just a list of messages. We use the LangChain add_messages helper to append new messages to the state as they come in.
class State(TypedDict):
    messages: Annotated[list, add_messages]
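To see what add_messages does, here is a small sketch: it is a reducer that appends incoming messages to the existing list instead of replacing it.
from langchain_core.messages import HumanMessage, AIMessage

existing = [HumanMessage(content="Hi")]
incoming = [AIMessage(content="Hello!")]
merged = add_messages(existing, incoming)
print(len(merged))  # 2 -- both messages are kept, in order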
Next we define the LLM Node. This node uses the Ollama server as the underlying inference engine; LangChain provides equivalent chat model interfaces for cloud LLMs like OpenAI and Gemini. The code block initializes the Ollama interface, adds a system prompt to the messages, binds the tools to the LLM, and calls inference on the messages in the state. Consider the bind_tools method: it formats the tool definitions as JSON and adds them to the LLM prompt, which allows the LLM to pick the right tool and fill in the arguments. If the LLM chooses to call a tool, the response will carry a "tool_calls" field, which is added to the message that goes into the state.
def llm_node(state: State) -> Dict[str, Any]:
    llm = ChatOllama(model="llama3.2:3b")
    prompt_template = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder(variable_name="messages")
    ])
    llm_with_tools = llm.bind_tools(tools)
    prompt = prompt_template.invoke(state)
    response = llm_with_tools.invoke(prompt)
    if isinstance(response, dict) and "tool_calls" in response:
        return {"messages": [AIMessage(content=response.get("content", ""))], "tool_calls": response["tool_calls"]}
    elif hasattr(response, "additional_kwargs") and "tool_calls" in response.additional_kwargs:
        return {"messages": [response], "tool_calls": response.additional_kwargs["tool_calls"]}
    else:
        return {"messages": [response]}
Next we add a node that can execute the tool_calls picked by the LLM. LangGraph has a built-in ToolNode that does this, but we can also write a custom tool execution node. I decided to go with the custom node to demonstrate what happens under the hood.
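For reference, the prebuilt version is a one-liner; the custom node below shows what it does internally:
from langgraph.prebuilt import ToolNode

# Drop-in replacement for the custom tool_node defined below.
prebuilt_tool_node = ToolNode(tools)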
def tool_node(state: State) -> Dict[str, Any]:
    if messages := state.get("messages", []):
        message = messages[-1]
    else:
        raise ValueError("No message found in input")
    outputs = []
    for tool_call in message.tool_calls:
        tool_result = tool_map[tool_call["name"]].invoke(
            tool_call["args"]
        )
        outputs.append(
            ToolMessage(
                content=json.dumps(tool_result),
                name=tool_call["name"],
                tool_call_id=tool_call["id"],
            )
        )
    return {"messages": outputs}
If we refer back to the graph diagram we set out to build, it contains a conditional edge. This edge routes all the tool calls to the tool_node; if there are no tool calls to process, we want the graph to exit. Like most things in LangGraph, this condition is defined through a function. The function returns a string, which the conditional edge uses to pick the next node through a dictionary lookup.
def should_use_tool(state: State) -> str:
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tool"
    return END
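We can exercise this routing function on its own. A quick sketch; AIMessage.tool_calls defaults to an empty list, so this should route to the exit:
from langchain_core.messages import AIMessage

# An AI message with no tool calls should send the graph to the exit.
print(should_use_tool({"messages": [AIMessage(content="done")]}))  # __end__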
We have all the graph nodes implemented in code. Now let's put them together in a graph and connect up the edges.
graph_builder = StateGraph(State)
graph_builder.add_node("llm", llm_node)
graph_builder.add_node("tool", tool_node)
graph_builder.add_edge("__start__", "llm")
graph_builder.add_conditional_edges(
    "llm",
    should_use_tool,
    {
        "tool": "tool",
        END: END
    }
)
graph_builder.add_edge("tool", "llm")
graph = graph_builder.compile()
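Once compiled, we can render the graph to confirm it matches the diagram we set out to build. A sketch; draw_mermaid prints a Mermaid definition you can paste into any Mermaid viewer:
# Print a Mermaid description of the compiled graph structure.
print(graph.get_graph().draw_mermaid())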
We can also add a simple loop to get user input and invoke the graph on that. All the outputs from the nodes are printed at the end.
print("Chat with the assistant (type 'exit' to quit)")
print("-------------------------------------------")
while True:
user_input = input("\nYou: ").strip()
if user_input.lower() == 'exit':
break
result = graph.invoke({"messages": [{"role": "user", "content": user_input}]})
print("\nAssistant:")
print("-------------")
for message in result["messages"]:
if isinstance(message, dict):
print(f"{message['role']}: {message['content']}")
else:
print(f"{message.type}: {message.content}")
if hasattr(message, "tool_calls") and message.tool_calls:
print("\nTool Calls:")
for tool_call in message.tool_calls:
print(f"- {tool_call['name']}: {tool_call['args']}")
For each user query, the LLM decides if it needs to know the current date or time. If it does, we get a tool call as the output. The conditional edge routes the tool call to the tool node, and the result is sent back to the LLM, which incorporates it into the final answer to the user. This is exactly the flow behind the transcripts shown earlier, starting with the simple question about the time.
Common AI Agent Patterns
The agent we built in this article is commonly referred to as a ReAct (Reasoning and Acting) agent. In fact, LangGraph ships a prebuilt ReAct agent that covers all the functionality we defined.
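A sketch of that prebuilt equivalent, reusing the tools list from earlier; create_react_agent collapses the LLM node, tool node, and conditional edge into one call:
from langgraph.prebuilt import create_react_agent

react_agent = create_react_agent(ChatOllama(model="llama3.2:3b"), tools)
result = react_agent.invoke({"messages": [{"role": "user", "content": "What's the time?"}]})
print(result["messages"][-1].content)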
We can arrange the basic building blocks of the graph in different configurations to build different types of Agents. Here are a few examples:

Multi-Agent Router: This pattern uses a main LLM to route a task to the best specialized agent for the job (see the sketch after this list).
RAG (Retrieval-Augmented Generation): This pattern allows an agent to retrieve information from a private knowledge base (like your own documents) before answering a question.
Self-Reflection: This pattern breaks the problem down into multiple steps. For example, a two-part agent has a Generate node that produces an output along with the reasoning behind it, and a Reflect node that examines that output to find errors or grade the previous step. We run a few iterations of this loop to refine the output. There can be other nodes as well, like a Planner or a step to break down the input.
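As a taste of the router pattern, here is a hedged sketch; the keyword-based router and the two specialist nodes are stand-ins for real LLM-backed components:
from typing import TypedDict, Annotated
from langchain_core.messages import AIMessage
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages

class RouterState(TypedDict):
    messages: Annotated[list, add_messages]
    route: str

def router_node(state: RouterState) -> dict:
    # In practice an LLM would classify the request; we fake it with a keyword check.
    last = state["messages"][-1].content
    return {"route": "math_agent" if any(c.isdigit() for c in last) else "chat_agent"}

def math_agent(state: RouterState) -> dict:
    return {"messages": [AIMessage(content="(math specialist answers here)")]}

def chat_agent(state: RouterState) -> dict:
    return {"messages": [AIMessage(content="(general chat answers here)")]}

builder = StateGraph(RouterState)
builder.add_node("router", router_node)
builder.add_node("math_agent", math_agent)
builder.add_node("chat_agent", chat_agent)
builder.add_edge("__start__", "router")
builder.add_conditional_edges("router", lambda s: s["route"],
                              {"math_agent": "math_agent", "chat_agent": "chat_agent"})
builder.add_edge("math_agent", END)
builder.add_edge("chat_agent", END)
router_graph = builder.compile()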
Additional Resources
The LangChain and LangGraph documentation is an excellent resource for diving deeper into the framework's functionality. I highly recommend spending some time going through the common use cases; LangGraph might have a pre-built Agent that does exactly what you want.
https://langchain-ai.github.io/langgraph/
The Anthropic blog post is also a great resource to learn more about Agents and Workflows.
https://www.anthropic.com/engineering/building-effective-agents
Conclusion
My goal with this article is to give a brief introduction to Agents and LangGraph. There is a lot more to dig into on both fronts, but what Agents deliver is an incredible amount of complex functionality with very little code. That simple control structure also means things can go wrong, often: we are giving the LLM full control over the flow, and that can lead to unexpected outcomes.
I believe the true success of Agents in the real world will depend on applying the same rigor and validation we usually reserve for ML or Deep Learning products. It's very easy to build an agent that works, but very hard to build one that works all the time.
Published via Towards AI