

From Tool Chaos to Intelligent Agents: The LangChain and LangGraph Frameworks

Last Updated on September 9, 2025 by Editorial Team

Author(s): Neha Manna

Originally published on Towards AI.


Content

  1. The Scenario: Too Many Tools, Not Enough Intelligence
  2. How LangChain Solves Real Problems in AI Workflows
    2.1 Enter LangChain: The Operating System for AI Agents
    2.1.1 Chains: The Recipe Book for AI Logic
    2.1.2 Tools: The Plug-ins Your Agent Can Use
    2.1.3 Memory: The Context Engine
    2.1.4 Agents: The Autonomous Orchestrators
    2.2 Real-World Use Cases
    2.2.1 AI Copilot for Data Analysis
    2.2.2 Autonomous Customer Support
    2.2.3 Workflow Automation
    2.2.4 What You Need to Get Started
    2.3 Strengths & Limitations
  3. LangGraph: Bringing Structure to Intelligent Agent Workflows
    3.1 Why LangGraph?
    3.2 How LangGraph Works
    3.3 LangGraph in Action: A Simple Example
    3.4 When to Use LangGraph
  4. LangChain + LangGraph: A Natural Progression
  5. Getting Started with LangChain
    1. Install LangChain
    2. Choose Your LLM Provider
    3. Set Up Memory Backend
    4. Define Tools
    5. Build Chains and Agents
  6. Getting Started with LangGraph
    1. Install LangGraph
    2. Import and Define Your Graph

The Scenario: Too Many Tools, Not Enough Intelligence

You’re a tech lead building an AI-powered assistant for your team. You’ve got:

  • An LLM that can answer questions
  • A vector database for semantic search
  • APIs for internal tools like Jira, Confluence, and Slack
  • A bunch of Python scripts for automation

But here’s the problem: none of it works together intelligently.

You’re stuck wiring up brittle pipelines, hard-coding logic, and juggling context between services. Your assistant can answer a question, but it can’t remember what you asked yesterday. It can call an API, but it doesn’t know when or why to use it. You’re building workflows, not intelligence.

How LangChain Solves Real Problems in AI Workflows

To truly be useful, your AI assistant needs to see, remember, think, and act.

That’s exactly what LangChain enables for AI agents.

Enter LangChain: The Operating System for AI Agents

LangChain is the framework that turns your scattered tools and models into a cohesive, intelligent agent. It’s like giving your assistant a brain, memory, and a toolkit — and teaching it how to think.

Let’s break down how LangChain solves the perception–memory–planning–action loop using analogies that resonate with developers.

Chains: The Recipe Book for AI Logic

Think of chains as reusable functions or workflows. They’re like recipes that tell your agent what steps to follow:

  • Query a vector store
  • Pass results to an LLM
  • Summarize the output

Chains let you compose logic without reinventing the wheel. They’re deterministic, modular, and great for building structured flows.
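The three steps above can be sketched as a plain-Python pipeline. This is a framework-agnostic toy to show the shape of a chain; real LangChain chains wrap an actual retriever and LLM, and the corpus and helper names here are illustrative stubs:

```python
# Toy chain: retrieve -> generate -> summarize, composed as plain functions.
# Each step's output feeds the next, just like links in a LangChain chain.

def query_vector_store(question: str) -> list[str]:
    # Stub retrieval: a real chain would run a similarity search here
    corpus = {
        "deploy": ["Use blue-green deploys.", "Roll back on errors."],
        "jira": ["Tickets need a priority label."],
    }
    return [doc for key, docs in corpus.items()
            if key in question.lower() for doc in docs]

def pass_to_llm(question: str, context: list[str]) -> str:
    # Stub generation: a real chain would prompt an LLM with the context
    return f"Answer to '{question}' based on {len(context)} documents."

def summarize(text: str) -> str:
    # Stub summarization step
    return text[:60]

def run_chain(question: str) -> str:
    context = query_vector_store(question)
    draft = pass_to_llm(question, context)
    return summarize(draft)

print(run_chain("How do we deploy safely?"))
```

The value of the pattern is that each step is swappable: replace the stub retriever with a vector store and the stub generator with an LLM call, and the chain's structure stays the same.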

Tools: The Plug-ins Your Agent Can Use

Tools are like plug-ins or microservices your agent can call dynamically. Instead of hard-coding API calls, you register tools like:

  • search_google()
  • query_jira()
  • run_python_code()

Your agent decides when and how to use them. It’s like giving your assistant access to a toolbox — and letting it choose the right tool for the job.
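A minimal sketch of that registration idea, with stub functions standing in for real integrations. LangChain's `Tool` objects carry the same three pieces shown here: a name, a callable, and a description the agent uses to pick the right tool:

```python
# Toy tool registry: tools are registered by name with a description,
# then looked up and invoked at run time instead of being hard-coded.

registry: dict[str, dict] = {}

def register_tool(name: str, func, description: str) -> None:
    registry[name] = {"func": func, "description": description}

# Illustrative stubs -- real tools would call live APIs
def search_google(query: str) -> str:
    return f"stub search results for: {query}"

def query_jira(ticket: str) -> str:
    return f"stub Jira data for: {ticket}"

register_tool("search_google", search_google, "Search the web")
register_tool("query_jira", query_jira, "Look up a Jira ticket")

# The agent selects a tool by name based on the task at hand
result = registry["query_jira"]["func"]("PROJ-42")
print(result)
```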

Memory: The Context Engine

LangChain’s memory system is like a context engine. It stores embeddings, documents, and conversation history in vector databases like FAISS or Chroma.

This means your agent can:

  • Recall past interactions
  • Reference previous documents
  • Maintain continuity across sessions

It’s not just stateless prompting — it’s stateful intelligence.
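A stripped-down sketch of the principle: persist each turn, then prepend the relevant history to every new prompt. LangChain's memory classes (and the vector-store-backed variants) work the same way at a larger scale:

```python
# Toy conversation memory: stores turns and renders recent history
# as context for the next prompt.

class ConversationMemory:
    def __init__(self):
        self.history: list[tuple[str, str]] = []  # (role, message) pairs

    def add(self, role: str, message: str) -> None:
        self.history.append((role, message))

    def as_context(self, last_n: int = 5) -> str:
        # Render the most recent turns as prompt context
        return "\n".join(f"{role}: {msg}" for role, msg in self.history[-last_n:])

memory = ConversationMemory()
memory.add("user", "My name is Neha.")
memory.add("assistant", "Nice to meet you, Neha!")
memory.add("user", "What is my name?")
print(memory.as_context())
```

Because the earlier turns ride along in the context, the model can answer the final question — something a stateless prompt cannot do.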

Agents: The Autonomous Orchestrators

Agents are the brains of the operation. They interpret user intent, plan the next steps, choose tools, and manage memory.

Think of them as autonomous orchestrators:

  • They perceive input (via LLMs)
  • Consult memory (vector search)
  • Plan actions (reasoning loops)
  • Use tools (APIs, functions)

LangChain supports agent types like ReAct and MRKL, enabling reasoning and tool use in dynamic environments.
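The perceive–consult–plan–act cycle can be illustrated with a toy ReAct-style loop. The decision function and tool below are stubs, not LangChain's actual agent internals; the point is the alternation between reasoning and action until the agent can answer:

```python
# Toy ReAct loop: decide -> act -> observe, repeated until "finish".

def llm_decide(question: str, observations: list[str]) -> str:
    # Stub for the LLM's reasoning step: finish once we have an observation
    return "finish" if observations else "lookup"

def lookup_tool(question: str) -> str:
    # Stub tool call standing in for a search or API request
    return f"doc snippet relevant to: {question}"

def react_loop(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        action = llm_decide(question, observations)   # plan
        if action == "finish":
            return f"Answer based on {len(observations)} observation(s)."
        observations.append(lookup_tool(question))    # act, then observe
    return "Gave up after max_steps."

print(react_loop("What is our deploy process?"))
```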

Real-World Use Cases

Here’s how LangChain powers intelligent workflows:

AI Copilot for Data Analysis

  • Understands your query
  • Retrieves relevant datasets
  • Summarizes insights
  • Remembers your preferences

Autonomous Customer Support

  • Detects customer intent
  • Maintains conversation history
  • Escalates or resolves using APIs

Workflow Automation

  • Monitors triggers
  • Plans multi-step operations
  • Executes across services like Zapier, Slack, or internal APIs

What You Need to Get Started

To build with LangChain, you’ll need:

  • An LLM provider: OpenAI, Claude, Mistral
  • A memory backend: FAISS, Chroma, Weaviate
  • Tools: APIs, search engines, code runners
  • A runtime: Python, JavaScript, or LangServe

LangChain is model-agnostic and integrates seamlessly with external services.

Strengths & Limitations

Strengths:

  • Modular and extensible
  • Ideal for rapid prototyping
  • Strong community and documentation

Limitations:

  • Can require boilerplate for advanced orchestration
  • Agent logic may feel limited compared to graph-based frameworks

LangGraph: Bringing Structure to Intelligent Agent Workflows

So far, we’ve seen how LangChain enables agents that can see, remember, think, and act. But what happens when your agent needs to loop, branch, retry, or adapt based on changing conditions?

That’s where LangGraph comes in.

LangGraph extends LangChain by modeling agent workflows as graphs — where each node represents a reasoning or action step, and each edge defines how decisions or data flow between those steps. It’s like upgrading your agent from a linear script to a stateful, adaptive flowchart.

Why LangGraph?

Real-world agents aren’t linear. They:

  • Retry when APIs fail
  • Branch based on user input or tool output
  • Loop through planning and action until a goal is met

LangChain alone can struggle with this kind of control flow. LangGraph solves it by introducing:

  • Stateful nodes with memory access
  • Flexible edges for branching and looping
  • Retry logic baked into the graph structure

How LangGraph Works

LangGraph builds on LangChain’s primitives:

  • You still use LangChain tools, chains, agents, and memory
  • But now, you wrap them inside a graph structure that adds flow control and persistence

Each node in LangGraph can:

  • Run a LangChain chain or agent
  • Access memory (e.g., FAISS, Chroma)
  • Decide what node to go to next based on output or state

This makes it easy to build agents that plan, act, and replan — without writing brittle, procedural code.

LangGraph in Action: A Simple Example

Let’s say you’re building a customer support agent. You define two nodes:

  • plan: decides what to do next
  • act: executes the tool or API call

You connect them like this:

  • plan → act
  • act → plan (for retries or adjustments)

from typing import TypedDict
from langgraph.graph import StateGraph, END

# State is a TypedDict so LangGraph can merge each node's updates
class MyState(TypedDict):
    user_input: str
    tool_result: str

# Define the planning step
def plan_step(state: MyState) -> MyState:
    # Example: decide what tool to use based on input
    print("Planning based on:", state["user_input"])
    return {"user_input": state["user_input"], "tool_result": "decided_tool"}

# Define the action step
def tool_use_step(state: MyState) -> MyState:
    # Example: simulate tool execution and signal completion
    print("Using tool:", state["tool_result"])
    return {"user_input": "done", "tool_result": state["tool_result"]}

# Create the graph with an explicit entry point
workflow = StateGraph(MyState)
workflow.add_node("plan", plan_step)
workflow.add_node("act", tool_use_step)
workflow.set_entry_point("plan")

# Define edges: act loops back to plan until the work is done
workflow.add_edge("plan", "act")
workflow.add_conditional_edges(
    "act",
    lambda state: END if state["user_input"] == "done" else "plan",
)

# Compile and run
agent_graph = workflow.compile()
agent_graph.invoke({"user_input": "start", "tool_result": ""})
  • plan_step: Decides what to do next based on the current state.
  • tool_use_step: Executes the chosen tool and updates the state.
  • Graph edges: Create a loop from plan → act → plan, allowing retries or re-evaluation.

With just a few lines of code, you’ve built a loop that lets your agent think, act, and rethink — just like a human would.

When to Use LangGraph

Use LangGraph when your agent needs:

  • Loops, retries, or branching logic
  • Multi-agent orchestration
  • Stateful workflows that evolve over time
  • Modular design for scaling complexity

LangGraph is especially powerful for:

  • Multi-turn customer support flows
  • Adaptive tutoring agents
  • Workflow bots that respond to events
  • Simulation environments with agent loops

LangChain + LangGraph: A Natural Progression

LangChain isn’t just a framework — it’s a thinking layer for your AI systems. It bridges the gap between LLMs and real-world applications by enabling agents that see, remember, think, and act.

If you’re tired of stitching together brittle pipelines and want to build intelligent, adaptive systems, LangChain is the toolkit you’ve been waiting for.

LangGraph doesn’t replace LangChain — it extends it. You can reuse your existing LangChain components inside LangGraph nodes, adding structure without reengineering your stack.

If you’re already using LangChain and want more control, robustness, and reasoning power, LangGraph is the next step.

Getting Started with LangChain

1. Install LangChain

pip install langchain

2. Choose Your LLM Provider

You’ll need access to an LLM like:

  • OpenAI
  • Anthropic Claude
  • Mistral

Set up your API keys and configure the model in LangChain.
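Providers' SDKs, and LangChain's integrations for them, conventionally read API keys from environment variables. For example (placeholder values, not real keys):

```shell
export OPENAI_API_KEY="sk-..."         # OpenAI
export ANTHROPIC_API_KEY="sk-ant-..."  # Anthropic Claude
export MISTRAL_API_KEY="..."           # Mistral
```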

3. Set Up Memory Backend

Install and configure a vector store:

pip install faiss-cpu # or chromadb, weaviate-client

Use it to store embeddings and enable retrieval-based memory.
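Under the hood, retrieval-based memory is nearest-neighbor search over embedding vectors. A toy version with hand-made vectors shows the mechanics; FAISS and Chroma do the same thing at scale with real embedding models:

```python
import math

# Cosine similarity between two equal-length vectors
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy store: (embedding, document) pairs with hand-made vectors
store = [
    ([1.0, 0.0, 0.1], "Deploys run every Friday."),
    ([0.0, 1.0, 0.2], "Jira tickets need a priority label."),
]

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query vector, return the top k
    ranked = sorted(store, key=lambda item: cosine(item[0], query_vec), reverse=True)
    return [text for _, text in ranked[:k]]

print(retrieve([0.9, 0.1, 0.0]))  # nearest to the deploy document
```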

4. Define Tools

Create tools for your agent to use:

  • API wrappers (e.g., Jira, Slack)
  • Search engines (e.g., SerpAPI)
  • Code execution environments

LangChain makes it easy to register these tools.

5. Build Chains and Agents

Use LangChain’s LLMChain, Tool, and AgentExecutor to wire up logic:

from langchain.agents import initialize_agent, AgentType
from langchain.llms import OpenAI
from langchain.tools import Tool

# Illustrative stub -- replace with a real search integration (e.g., SerpAPI)
def search_google(query: str) -> str:
    return f"Results for: {query}"

llm = OpenAI()
tools = [Tool(name="Search", func=search_google, description="Search the web")]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

Getting Started with LangGraph

1. Install LangGraph

pip install langgraph

2. Import and Define Your Graph

Use StateGraph to define nodes and edges:

from langgraph.graph import StateGraph

graph = StateGraph(MyState)
graph.add_node("plan", plan_step)
graph.add_node("act", tool_use_step)
graph.set_entry_point("plan")
graph.add_edge("plan", "act")
graph.add_edge("act", "plan")  # Retry loop
agent_graph = graph.compile()

Each node can run:

  • LangChain chains
  • LangChain agents
  • Memory-backed retrieval steps

LangGraph lets you reuse your LangChain setup with added structure.
