LLM & AI Agent Applications with LangChain and LangGraph — Part 12: Reasoning, ReAct, and Agents
Author(s): Michalzarnecki Originally published on Towards AI. In this chapter we will zoom in on “reasoning” in language models. My goal is that by the end of this article it’s clear which models actually plan and infer better, and how that differs from the ReAct approach …
LLM & AI Agent Applications with LangChain and LangGraph — Part 13: Multimodal Models
Author(s): Michalzarnecki Originally published on Towards AI. Hi! This time we’ll tackle a topic that has become massively important recently: multimodal models. A lot of “classic” language models — like GPT-3 or early versions of LLaMA — work only …
LLM & AI Agent Applications with LangChain and LangGraph — Part 11: Tools
Author(s): Michalzarnecki Originally published on Towards AI. Welcome to the next part of the course dedicated to LLM-driven application development. In this episode we’ll cover another key building block in the LangChain ecosystem: tools. Language models are incredibly powerful on their own, …
LLM & AI Agent Applications with LangChain and LangGraph — Part 10: Chains and LCEL
Author(s): Michalzarnecki Originally published on Towards AI. Designing clear data flows in LangChain Welcome back to another module on LLM-driven application development. In the previous parts we introduced the idea of chains — sequences of steps that connect prompts, models, parsers …
LLM & AI Agent Applications with LangChain and LangGraph — Part 9: Conversation Memory
Author(s): Michalzarnecki Originally published on Towards AI. Welcome back to another article focused on LLM-driven application development. In this part of the course we’ll look at memory in LangChain — in other words, how to make sure that the assistant you’re …
LLM & AI Agent Applications with LangChain and LangGraph — Part 8: Temperature, Top-p, Top-k and Max Tokens: How to Shape Model Behavior
Author(s): Michalzarnecki Originally published on Towards AI. Welcome back to another article focused on LLM-driven application development. In this part of the course I want to focus on something very practical: the main generation parameters you can control when working with …
LLM & AI Agent Applications with LangChain and LangGraph — Part 21: Vector Database and Embeddings
Author(s): Michalzarnecki Originally published on Towards AI. Hi! In this chapter I’ll explain the purpose of using vector databases in LLM-based applications and why embeddings are so important in natural language processing. There are multiple database engines that support data …
LLM & AI Agent Applications with LangChain and LangGraph — Part 4: Components of GPT
Author(s): Michalzarnecki Originally published on Towards AI. Transformers, embeddings and attention: how modern LLMs really think Welcome back to the series on LLM-based application development. By now you already know the basics of how LLMs are built and what their key …
LLM & AI Agent Applications with LangChain and LangGraph — Part 3: Model capacity, context windows, and what actually makes an LLM “large”
Author(s): Michalzarnecki Originally published on Towards AI. Welcome to the next chapter in the series on LLM-based application development. To this point we have built some basic intuition about how large language models work. Now I want to go one level deeper and …
LLM & AI Agent Applications with LangChain and LangGraph — Part 2: What is a machine learning model and what makes LLMs special?
Author(s): Michalzarnecki Originally published on Towards AI. Welcome to the next chapter in the series on LLM-based application development. In this part I want to clarify two things that appear constantly in any discussion about AI: what a machine learning model actually is, …
LLM & AI Agent Applications with LangChain and LangGraph — Part 1: How LLMs Became so Important in Modern App Development
Author(s): Michalzarnecki Originally published on Towards AI. Welcome to the first part of this series. In this part I want to take a step back from LangChain, LangGraph and coding, and focus on the foundations. We will look at the main ideas …
Building an AI Debate Panel: Agents that Argue and Give a Final Conclusion
Author(s): Michalzarnecki Originally published on Towards AI. A single LLM prompt or a plain ReAct (reasoning & acting) agent often gives you a plausible answer – sometimes great, …
Building Financial Reports With FinGPT and Claude 3.7 Sonnet
Author(s): Michalzarnecki Originally published on Towards AI. Financial report generated for GameStop Corporation using the end-to-end approach described in this article Introduction Large Language Models (LLMs) are powerful tools for generating summaries and analyses of source documents. Modern LLMs can even grasp …
Evaluating LLM and AI Agent Outputs with String Comparison, Criteria & Trajectory Approaches
Author(s): Michalzarnecki Originally published on Towards AI. When your model’s answers sound convincing, how do you prove they’re actually good? This article walks through three complementary evaluation strategies — string comparison, criteria-based scoring, and trajectory analysis. 1. String-Comparison Metrics Consider the question below: …
AI Codebase Expert Agent: Supporting Project Development Tasks With an LLM Multi-Agent Powered Approach
Author(s): Michalzarnecki Originally published on Towards AI. In the ever-evolving landscape of software development, managing large codebases and efficiently resolving issues remains a significant challenge. In this article I describe the AI Codebase Expert application, a tool that leverages the power of …