
LLM & AI Agent Applications with LangChain and LangGraph — Part 13: Multimodal Models

Last Updated on January 2, 2026 by Editorial Team

Author(s): Michalzarnecki

Originally published on Towards AI.


Hi! This time we’ll tackle a topic that has become massively important recently: multimodal models.

A lot of “classic” language models — like GPT-3 or early versions of LLaMA — work only with text. They can write, translate, summarize, and even generate code. But the world we live in isn’t single-channel. In real life we constantly operate on many types of signals: images, audio, video, and various sensor-like inputs.

What does “multimodal” actually mean?

Multimodal models are models that can understand and process multiple types of inputs at the same time — for example:

  • text + image
  • text + audio
  • video + structured data

This is a big step forward, because it unlocks use cases that are hard (or very expensive) to build with “separate” systems.

Why it matters: what it enables

Once a model can connect different modalities, we get completely new categories of applications, for example:

  • medical image analysis combined with natural-language descriptions and explanations,
  • driver assistance systems and autonomous vehicles,
  • infrastructure monitoring, where the model interprets satellite images together with reports,
  • speech and sound recognition combined with real context understanding (not only transcription).

And an important detail: multimodal models don’t “replace” classic computer vision or audio algorithms — they complement them. The real value is that they can describe, interpret, and connect data coming from different sources into one coherent understanding.

Two practical examples we’ll build

In this episode I’ll show you two hands-on demos:

  1. Controlling a prototype vehicle from a camera frame
    The model receives an image and must decide whether the vehicle should go straight, left, or right.
  2. Counting people in a photo
    A classic object-recognition style task — simple on paper, but very useful as a building block in real systems.

The cool part is that with LangChain + multimodal models, you can prototype these kinds of systems in just a few lines of code.

Alright — let’s jump into the examples.

Install and import libraries

!pip install -q openai python-dotenv
import base64
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

Encode images to base64

Here we create a helper function that encodes images into base64 format. This lets us attach images as part of a prompt sent to the LLM API.

def encode_image(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
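Before wiring this into an API call, it's worth sanity-checking what the helper produces: a base64 string that round-trips back to the original bytes, which we then wrap in a `data:` URL. A quick self-contained sketch (the throwaway file and its fake JPEG bytes are just for illustration):

```python
import base64
import os
import tempfile

def encode_image(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Create a throwaway file standing in for a real camera frame.
with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as tmp:
    tmp.write(b"\xff\xd8\xff\xe0 fake jpeg bytes")
    tmp_path = tmp.name

image_b64 = encode_image(tmp_path)
data_url = f"data:image/jpeg;base64,{image_b64}"

# Base64 round-trips: decoding returns exactly the original bytes.
assert base64.b64decode(image_b64) == b"\xff\xd8\xff\xe0 fake jpeg bytes"
print(data_url[:23])  # data:image/jpeg;base64,
os.unlink(tmp_path)
```

The `data:image/jpeg;base64,...` URL form is what the examples below pass as the `image_url` of an `input_image` content part.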

Image-based vehicle control

Here is the prototype of an autonomous vehicle that I built for my experiments with convolutional neural networks. It’s controlled by a Raspberry Pi 4B, which takes an image from the front camera and decides what action needs to be taken, then drives the motors to make the move.

The goal is to navigate through a lane like the one below. At every step, one of 3 actions is triggered:

  • move forward
  • turn left
  • turn right

The vehicle also moves by 30 cm at each step.

Here is the code that uses the OpenAI API and combines a text prompt with the encoded image.

image_b64 = encode_image("frame.jpg")  # captured front-camera frame

resp = client.responses.create(
    model="gpt-4o-mini",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text",
             "text": ("This is an image from the front camera of an autonomous vehicle prototype that drives through a lane. "
                      "Tell me if the next step of the car should be moving forward, turning left, or turning right. "
                      "Return only a single word: forward, left, or right. The car moves only 30cm in each step.")},
            {"type": "input_image",
             "image_url": f"data:image/jpeg;base64,{image_b64}"}
        ],
    }],
)

decision = resp.output_text.strip().lower()

print("Model decision:", decision)

output:

Model decision: forward

It gives correct answers for this simplified autonomous-driving scenario.
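In a real control loop you would not feed the model’s raw text straight to the motors: the model can always return something outside the three expected words. A minimal sketch of the glue code between the model’s answer and the vehicle (the function names and the placeholder command strings are mine, not part of the article’s hardware code; on the real Raspberry Pi they would trigger GPIO / motor-driver calls):

```python
VALID_ACTIONS = {"forward", "left", "right"}

def parse_decision(raw: str) -> str:
    """Normalize the model's one-word answer; fall back to 'stop' on anything unexpected."""
    word = raw.strip().lower().rstrip(".")
    return word if word in VALID_ACTIONS else "stop"

def drive_step(raw_decision: str) -> str:
    # Map the validated decision to a motor command (placeholder strings here;
    # the real vehicle would drive its motors instead of returning text).
    commands = {
        "forward": "motors: both wheels +30cm",
        "left": "motors: pivot left, then +30cm",
        "right": "motors: pivot right, then +30cm",
        "stop": "motors: halt",
    }
    return commands[parse_decision(raw_decision)]

print(drive_step("Forward."))  # motors: both wheels +30cm
print(drive_step("banana"))    # motors: halt
```

Defaulting to a halt on unparseable output is a cheap safety measure when an LLM sits in the decision loop of a physical device.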

Count people on the image

In the next example, let’s use the multimodal gpt-4o-mini model to count the people in an image from an airport.

The code is similar to previous example:

image_b64 = encode_image("../../data/img/airport_simple.jpg")

messages = [{
    "role": "user",
    "content": [
        {"type": "input_text",
         "text": ("Count the people in the image. Return only the number. "
                  "Also count people hidden behind other people or only partially visible in the image. "
                  "Give a precise number.")},
        {"type": "input_image", "image_url": f"data:image/jpeg;base64,{image_b64}"},
    ],
}]

response = client.responses.create(model="gpt-4o-mini", input=messages)
print("Number of people:", response.output_text)

output:

Number of people: 6

This image contains 7 silhouettes, but the model sees only 6, even though the image is relatively easy to recognize: it doesn’t contain people hidden behind other people or partially out of the frame. Newer models like GPT-5-mini perform better in this scenario, but we always have to double-check the quality of the results when using LLMs for image-analysis tasks.

That’s all for this chapter dedicated to multimodal models.
In the next chapter we will focus on principles of high-quality prompt engineering, which is becoming a “programming language” for LLMs.

see next chapter

see previous chapter

see the full code from this article in the GitHub repository


Published via Towards AI

