LLM & AI Agent Applications with LangChain and LangGraph — Part 24: Connecting LangGraph with LLMs

Last Updated on January 5, 2026 by Editorial Team

Author(s): Michal Zarnecki

Originally published on Towards AI.

Hi! In the previous part, we built a simple graph that performed math operations step by step.
That’s a good start, but the real power of LangGraph appears when we connect graph nodes with large language models, tools, and conditional logic.

Nodes can use LLMs

In LangGraph, every node can be a simple Python function — but it can also call an LLM.

That means we can build nodes that summarize text, answer questions, classify content, or generate new content. Each node becomes a modular “capability” that we can plug into a bigger workflow.
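As a sketch of the idea (with a stub function standing in for a real LLM call such as `(prompt | llm).invoke(...)`, and names that are purely illustrative), a summarization node is just a function that reads the state and returns an update:

```python
from typing import TypedDict

class DocState(TypedDict):
    text: str
    summary: str

# Stub standing in for a real LLM call
def fake_llm_summarize(text: str) -> str:
    # Pretend the first sentence is the summary
    return text.split(".")[0] + "."

def summarize_node(state: DocState) -> DocState:
    # A node reads the state and returns only the keys it updates
    return {"summary": fake_llm_summarize(state["text"])}

result = summarize_node({"text": "LangGraph builds stateful graphs. It supports loops.", "summary": ""})
print(result)  # {'summary': 'LangGraph builds stateful graphs.'}
```

Swapping the stub for a real chain turns this into an LLM-backed node without changing the graph around it.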

Tools as separate nodes

We can also add a tool as a dedicated node in the graph.

This gives the agent running inside the graph access to additional functions — like a calculator, a weather API, a database query, or a custom business function.
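For illustration (a minimal sketch with hypothetical names, using a dictionary in place of a real weather API), a tool node is just plain Python that enriches the state:

```python
from typing import TypedDict

class WeatherState(TypedDict):
    city: str
    forecast: str

# Stand-in for a real weather API call
WEATHER_DB = {"Warsaw": "sunny", "London": "rainy"}

def weather_tool(state: WeatherState) -> WeatherState:
    # A tool node: ordinary code the agent can route to
    return {"forecast": WEATHER_DB.get(state["city"], "unknown")}

print(weather_tool({"city": "Warsaw", "forecast": ""}))  # {'forecast': 'sunny'}
```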

Conditional edges

LangGraph supports conditional edges.

This means that depending on the current state, we can route the flow to a different node. For example, the model can check whether an answer is correct — and if not, move to another node that retries, fixes, or re-generates the response.
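A conditional edge is driven by a routing function: it inspects the state and returns the name of the next node. A sketch (node names are hypothetical):

```python
def route_by_topic(state: dict) -> str:
    # Send questions containing digits to a calculator node,
    # everything else to an LLM node
    question = state.get("question", "")
    return "tool_calc" if any(ch.isdigit() for ch in question) else "ask_llm"

print(route_by_topic({"question": "16 * 12"}))             # tool_calc
print(route_by_topic({"question": "What is LangGraph?"}))  # ask_llm
```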

Loops

And more: LangGraph allows loops.

We can define rules like: if the output doesn’t meet the criteria, go back to the previous node and try again. This is especially useful when implementing retry and fallback mechanisms.
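Because loops can otherwise run indefinitely, a common pattern is to track an attempt counter in the state and route to a fallback once the budget is spent. A sketch (node names and the limit of 3 are assumptions, not from the article):

```python
def should_retry(state: dict) -> str:
    # Loop back to the generating node until the output passes
    # the check or the attempt budget is exhausted
    if state.get("ok"):
        return "finish"
    if state.get("attempts", 0) >= 3:
        return "fallback"
    return "generate"

print(should_retry({"ok": True, "attempts": 1}))   # finish
print(should_retry({"ok": False, "attempts": 1}))  # generate
print(should_retry({"ok": False, "attempts": 3}))  # fallback
```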

A simple graph: LLM + calculator + loop

Now we’ll build a simple graph where:

  1. A node called ask_llm asks the model to solve a math question.
  2. A node called tool_calc verifies the result using a calculator.
  3. If the result is wrong, we loop back to the LLM node and ask again.
  4. If the result is correct, we move to the final node and show the answer.

Once you combine an LLM, tools, conditional edges, and loops, the graph becomes truly flexible.

This allows you to build agents that don’t just answer — but can also verify their work and correct mistakes.

Alright — let’s move to the code.

1. Install libraries and setup LLM API client
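If the packages are not installed yet, a typical setup (package names inferred from the imports below; check your environment) looks like this:

```shell
pip install langgraph langchain-openai langchain-core python-dotenv
```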

from typing import TypedDict
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from dotenv import load_dotenv

load_dotenv()

# LLM client used by the graph nodes
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

2. Define the state

class State(TypedDict):
    question: str
    llm_answer: str
    is_correct: bool

3. Define nodes

Each node receives the current state and returns only the keys it updates; LangGraph merges that partial result back into the shared state.

LLM node

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful math assistant. Please respond only with numerical results."),
    ("user", "{question}")
])

def ask_llm(state: State) -> State:
    response = (prompt | llm).invoke({"question": state["question"]})
    return {"llm_answer": response.content, "is_correct": False}

Tool node

def tool_calc(state: State) -> State:
    try:
        # eval() is acceptable for this demo, but never use it on untrusted input
        correct = eval(state["question"])
        user_answer = int(state["llm_answer"])
        return {"is_correct": (correct == user_answer)}
    except Exception:
        return {"is_correct": False}
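Since eval() executes arbitrary Python, anything beyond a demo is better served by a restricted arithmetic evaluator. A sketch (not from the article) using the standard-library ast module:

```python
import ast
import operator

# Whitelisted binary operators for arithmetic expressions
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression without running arbitrary code."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"Unsupported expression: {expr}")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("16 * 12"))  # 192
```

Any node type outside the whitelist (function calls, attribute access, and so on) raises ValueError instead of executing.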

Finish node

def finish(state: State) -> State:
    if state["is_correct"]:
        print(f"✅ Correct answer: {state['llm_answer']}")
    else:
        print("❌ Could not get the correct answer.")
    return state

4. Prepare the graph

graph = StateGraph(State)

graph.add_node("ask_llm", ask_llm)
graph.add_node("tool_calc", tool_calc)
graph.add_node("finish", finish)

graph.set_entry_point("ask_llm")
graph.add_edge("ask_llm", "tool_calc")

# Condition: if correct -> finish, if not -> ask_llm again
def check_answer(state: State):
    return "finish" if state["is_correct"] else "ask_llm"

graph.add_conditional_edges("tool_calc", check_answer, ["finish", "ask_llm"])
graph.add_edge("finish", END)

app = graph.compile()

5. Run workflow

app.invoke({"question": "16 * 12"})

Output:

✅ Correct answer: 192
{'question': '16 * 12', 'llm_answer': '192', 'is_correct': True}
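If the model kept answering incorrectly, the ask_llm/tool_calc cycle would only stop when LangGraph's recursion limit is reached. To make the control flow concrete, here is a pure-Python simulation of that cycle (no LangGraph, with a scripted stand-in for the model):

```python
def run_loop(question: str, llm_answers, max_steps: int = 5):
    """Simulate the ask_llm -> tool_calc -> conditional-edge cycle."""
    answers = iter(llm_answers)
    for step in range(max_steps):
        llm_answer = next(answers)                       # ask_llm node
        is_correct = int(llm_answer) == eval(question)   # tool_calc node
        if is_correct:                                   # conditional edge -> finish
            return llm_answer, step + 1
    return None, max_steps                               # budget exhausted

# The scripted model fails once, then succeeds on the second attempt
print(run_loop("16 * 12", ["190", "192"]))  # ('192', 2)
```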

Graph visualization

from IPython.display import Image, display

# Note: draw_png() requires the pygraphviz package;
# app.get_graph().draw_mermaid_png() is an alternative
png_bytes = app.get_graph().draw_png()
display(Image(png_bytes))

That’s all in this chapter dedicated to using LLMs with LangGraph. In the next chapter, we will dive deeper into agentic workflows and look at different graph patterns for AI agents.

see next chapter

see previous chapter

see the full code from this article in the GitHub repository


Published via Towards AI


Note: Article content contains the views of the contributing authors and not Towards AI.