
LLM & AI Agent Applications with LangChain and LangGraph — Part 10: Chains and LCEL

Last Updated on January 2, 2026 by Editorial Team

Author(s): Michalzarnecki

Originally published on Towards AI.


Designing clear data flows in LangChain

Welcome back to another module on LLM-driven application development.

In the previous parts we introduced the idea of chains — sequences of steps that connect prompts, models, parsers and any helper tools into one coherent flow. Instead of a tangled piece of code full of nested calls and ifs, you get a clearly defined pipeline: prepare the prompt, call the model, post-process the result, store or return the answer.

In this episode we’ll look at how chains evolved in LangChain and how the modern approach with LCEL (LangChain Expression Language) makes these flows easier to write, read, and maintain.

From “classic chains” to LCEL

In early versions of LangChain, chains were implemented as ready-made classes. You might remember names like LLMChain or SimpleSequentialChain. They gave you a nice high-level abstraction: you configured a model and a prompt template, wrapped them in a chain, and then just called .run().

This style still exists today under the langchain_classic layer, and it’s perfectly fine if you’re maintaining older projects or following older tutorials. But as the ecosystem grew and use cases became more complex, the community needed something more flexible.

That “something” is LCEL — LangChain Expression Language.

With LCEL, you express chains more like data pipelines than special classes. Instead of constructing nested objects, you describe a flow using operators, for example:

prompt | model | parser

This tiny line already encodes an entire chain: take input, format it with the prompt, send it to the model, parse the output. Under the hood you still have the same concepts, but they are now composed in a way that looks very close to how the data actually moves.

The result is that building chains feels much more like building a streaming pipeline or a series of data transformations than like instantiating opaque objects.
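To build intuition for what the pipe operator is doing, here is a minimal, stdlib-only Python sketch of the same composition pattern. This is an illustration only, not LangChain’s actual Runnable implementation; the `Step` class and the toy prompt/model/parser functions are made up for the example:

```python
class Step:
    """A tiny stand-in for a LangChain Runnable: wraps a function
    and supports composition with the | operator."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # step_a | step_b returns a new Step that runs a, then b
        return Step(lambda value: other.invoke(self.invoke(value)))

# A "prompt | model | parser" analogue built from plain functions
prompt = Step(lambda d: f"Q: {d['question']}")
model = Step(lambda p: p.upper())                  # pretend model call
parser = Step(lambda out: out.removeprefix("Q: "))

chain = prompt | model | parser
print(chain.invoke({"question": "what is LCEL?"}))  # prints WHAT IS LCEL?
```

LangChain’s real runnables work the same way at a high level: each step exposes `invoke()`, and `|` builds a new runnable that feeds one step’s output into the next.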

Different shapes of chains

Chains don’t all have to look the same. Once you start using LCEL, you can design flows with different shapes depending on the task.

The simplest case is a linear chain: prompt → model → parser → final result. This is what you’ll use for many classic “send question, get answer” style interactions.

Sometimes you’ll need a sequential chain, where the output of the first model becomes the input to a second one. For example, the first step might extract structured data from a document, and the second step might generate a human-friendly explanation based on that structure.

In more advanced scenarios you can create branching chains. Here one step produces a result that is then split into several paths, each processed independently in its own sub-chain. Later you might combine these partial results back into one. This pattern is useful when, for instance, you want to run several analyses in parallel on the same input and then merge their insights.

The important point is that “chain” in LangChain is not limited to a single straight line. It’s a general way of describing how information flows between components: sometimes in a simple sequence, sometimes with branches and merges.
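To make the branching shape concrete without involving any models, here is a plain-Python sketch in which hypothetical analysis functions stand in for sub-chains: the same input fans out to several independent steps, and the partial results are merged back into one dictionary.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "sub-chains": each analyses the same input independently
def word_count(text):
    return len(text.split())

def shout(text):
    return text.upper()

def branch_and_merge(text):
    # Fan out: run each analysis in parallel, then merge the results
    # into one dict, analogous to what RunnableParallel does in LCEL.
    analyses = {"word_count": word_count, "shout": shout}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, text) for name, fn in analyses.items()}
        return {name: fut.result() for name, fut in futures.items()}

print(branch_and_merge("chains can branch"))
# {'word_count': 3, 'shout': 'CHAINS CAN BRANCH'}
```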

Why LCEL-based chains are so useful

LCEL brings a few practical advantages that matter a lot once your codebase grows beyond small experiments.

The first is modularity. Each step in the chain — prompt, model, parser, custom function — can be tested, replaced or upgraded on its own. If you decide to change the model provider or tweak the parsing logic, you do not have to rewrite the entire flow.

The second is readability. When you look at an LCEL chain, you can usually tell at a glance how the data moves through your system. This is very different from a deeply nested, imperative script where logic and data flow are mixed together.

The third is reusability. A chain is just another component. You can define it once, export it, and then reuse it in other parts of your project or even in completely different projects. This is especially handy when you standardize patterns for tasks like summarization, extraction or RAG.

Finally, there is debuggability. When something breaks or the model behaves strangely, it’s much easier to locate the problem if your flow is described as a sequence of clear steps. You can inspect intermediate outputs, log them, or plug in tracing tools like LangSmith without tearing everything apart.
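One low-tech way to get that visibility, sketched here in plain Python under the assumption that a flow is just a list of callables, is a pass-through “tap” step that logs the value flowing by and returns it unchanged; in LCEL you could achieve the same effect by piping in a RunnableLambda between steps.

```python
def tap(label):
    # Pass-through step: logs the intermediate value, returns it unchanged
    def _tap(value):
        print(f"[{label}] {value!r}")
        return value
    return _tap

def run_pipeline(steps, value):
    # Minimal linear pipeline: feed each step's output into the next
    for step in steps:
        value = step(value)
    return value

pipeline = [str.strip, tap("after strip"), str.upper, tap("after upper")]
result = run_pipeline(pipeline, "  hello chains  ")
```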

Taken together, these properties make chains a core concept in LangChain. They are the bridge between “one-off calls to a model” and “structured applications built step by step”.

Practical examples of chains and LCEL

Chains are the backbone of most LangChain projects. They let you move from simple, ad-hoc prompts to well-structured, scalable pipelines. With LCEL, defining those pipelines becomes both cleaner and more expressive.

In the next part we’ll switch to a Jupyter notebook and go through concrete examples:

  • a basic linear chain,
  • a sequential chain that combines multiple model calls,
  • a small branching flow to show how parallel paths can work in practice.

You’ll be able to run each example yourself, modify the components and see immediately how changes in the chain affect the overall behaviour of your application.

Install and import libraries

Install the libraries, then import them and load the environment variables using the code below.

!pip install -q langchain langchain-openai python-dotenv
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()

Simple chain: prompt → model → result

Let’s use a simple prompt → model → result chain to generate code for a given topic, in this case “AGI” (of course, current models are not able to invent artificial general intelligence on their own, at least for now).

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a Python programming expert and an AI genius."),
    ("user", "Write a code related to {topic}")
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

chain = prompt | llm | StrOutputParser()
result = chain.invoke({'topic': 'AGI'})

print(result)

output:

Creating a code snippet related to Artificial General Intelligence (AGI) is a complex task, as AGI refers to a type of AI that can understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. However, I can provide a simplified example that demonstrates some foundational concepts that could be part of an AGI system, such as learning from experience and making decisions based on that learning.

Below is a Python code snippet that simulates a basic reinforcement learning agent. This agent learns to navigate a simple grid environment to reach a goal. While this is far from AGI, it illustrates some principles of learning and decision-making.

```python
import numpy as np
import random

class GridEnvironment:
    def __init__(self, size, goal):
        self.size = size
        self.goal = goal
        self.state = (0, 0)  # Start at the top-left corner

    def reset(self):
        self.state = (0, 0)
        return self.state

    def step(self, action):
        x, y = self.state
        if action == 'up' and x > 0:
            x -= 1
        elif action == 'down' and x < self.size - 1:
            x += 1
        elif action == 'left' and y > 0:
            y -= 1
        elif action == 'right' and y < self.size - 1:
            y += 1

        self.state = (x, y)
        reward = 1 if self.state == self.goal else -0.1
        done = self.state == self.goal
        return self.state, reward, done

class QLearningAgent:
    def __init__(self, actions, learning_rate=0.1, discount_factor=0.9):
        self.q_table = {}
        self.actions = actions
        self.learning_rate = learning_rate
        self.discount_factor = discount_factor

    def get_q_value(self, state, action):
        return self.q_table.get((state, action), 0.0)

    def choose_action(self, state, epsilon):
        if random.random() < epsilon:
            return random.choice(self.actions)  # Explore
        else:
            q_values = [self.get_q_value(state, a) for a in self.actions]
            max_q = max(q_values)
            return self.actions[q_values.index(max_q)]  # Exploit

    def update_q_value(self, state, action, reward, next_state):
        best_next_q = max(self.get_q_value(next_state, a) for a in self.actions)
        current_q = self.get_q_value(state, action)
        new_q = current_q + self.learning_rate * (reward + self.discount_factor * best_next_q - current_q)
        self.q_table[(state, action)] = new_q

def main():
    env = GridEnvironment(size=5, goal=(4, 4))
    agent = QLearningAgent(actions=['up', 'down', 'left', 'right'])
    episodes = 1000
    epsilon = 0.1

    for episode in range(episodes):
        state = env.reset()
        done = False

        while not done:
            action = agent.choose_action(state, epsilon)
            next_state, reward, done = env.step(action)
            agent.update_q_value(state, action, reward, next_state)
            state = next_state

    print("Training complete. Q-values:")
    for key, value in agent.q_table.items():
        print(f"State: {key[0]}, Action: {key[1]}, Q-value: {value:.2f}")

if __name__ == "__main__":
    main()
```

### Explanation:
1. **GridEnvironment**: This class represents a simple grid where the agent can move. The goal is to reach a specific cell in the grid.
2. **QLearningAgent**: This class implements a Q-learning agent that learns to navigate the grid by updating its Q-values based on the rewards it receives.
3. **Main Function**: The main function initializes the environment and agent, runs multiple episodes of training, and updates the Q-values based on the agent's actions.

### Note:
This code is a basic example of reinforcement learning and does not represent AGI. AGI would require much more complexity, including advanced reasoning, understanding, and the ability to transfer knowledge across different domains.

Sequential chain: two models in a single sequence

In the second example we use a chain that consists of two sub-chains:
1. prompt for summary -> LLM -> string output parser
2. prompt for translation -> LLM -> string output parser

The final chain:
summary chain -> result adapter -> translation chain

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# 1) Chain: summary (input: {text} → output: str)
summary_chain = (
    ChatPromptTemplate.from_messages([
        ("system", "Summarize the text below in 1–2 sentences."),
        ("user", "{text}")
    ])
    | llm
    | StrOutputParser()
)

# 2) Adapter: wrap the str output as {"text": str} for the next prompt
to_dict = RunnableLambda(lambda s: {"text": s})

# 3) Chain: translate the summary into French (input: {text} → output: str)
translate_chain = (
    ChatPromptTemplate.from_messages([
        ("system", "Translate the text into French."),
        ("user", "{text}")
    ])
    | llm
    | StrOutputParser()
)

sequential_chain = summary_chain | to_dict | translate_chain
input_text = "LangChain enables the creation of AI applications by combining models, prompts, and tools into coherent pipelines."

final_translation = sequential_chain.invoke({"text": input_text})

print(final_translation)

output:

LangChain facilite le développement d'applications d'IA en intégrant des modèles, des invites et des outils dans des flux de travail structurés.

Branching chain: one input, two processing paths

In the last example we run two model calls in parallel to handle two separate tasks at the same time. For this purpose we use the RunnableParallel component.

from langchain_core.runnables import RunnableParallel

# summary prompt
prompt_summary = ChatPromptTemplate.from_template("Summarize: {text}")

# sentiment prompt
prompt_sentiment = ChatPromptTemplate.from_template("Classify sentiment: {text}")

branch_chain = RunnableParallel(
    summary=(prompt_summary | llm | StrOutputParser()),
    sentiment=(prompt_sentiment | llm | StrOutputParser())
)

text = "I am very happy with this course, I learned a lot about LangChain! Now I know that LangChain enables the creation of AI applications by combining models, prompts, and tools into coherent pipelines."
result = branch_chain.invoke({"text": text})

print(result)

output:

{'summary': 'The course was highly beneficial, providing valuable insights into LangChain, which facilitates the development of AI applications by integrating models, prompts, and tools into cohesive pipelines.', 'sentiment': 'The sentiment of the statement is positive. The speaker expresses happiness and satisfaction with the course and indicates that they have gained valuable knowledge.'}

That’s all for this part. I hope you have gained an intuition for the nature of LangChain chains and can now create them yourself using LCEL syntax.

In the next chapter we will explore another great feature of LLM-based applications: tools.

see next chapter

see previous chapter

see the full code from this article in the GitHub repository


Published via Towards AI



Note: Article content contains the views of the contributing authors and not Towards AI.