Evaluating LLM Summaries Using Embedding Distance with LangSmith
Author(s): Pere Martra
Originally published on Towards AI.
LangSmith is the new tool from LangChain for tracing and evaluating models. In this article, we will explore how to use it to assist in assessing the quality of summaries produced by two open-source models.
This article is part of a free course about Large Language Models available on GitHub.
The metrics we have been using with more traditional machine learning models and tasks, such as accuracy, F1 score, or recall, do not help us evaluate the results of generative models. It's quite challenging to assess the quality of generated text because there isn't a single metric capable of telling us whether the outcome meets expectations.
There are many variables to consider when evaluating a text: Does it adhere to the truth? Does it exhibit any bias? Is the tone appropriate? These are just a few of the aspects to analyze.
There are also metrics, such as BLEU or ROUGE, that evaluate the quality of a text for a specific task: BLEU is typically used to measure the quality of translations, and ROUGE to measure the quality of summaries.
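To make that concrete, here is a minimal sketch, not taken from the original article, of computing ROUGE with the rouge-score package; the reference and candidate summaries are invented for illustration:

```python
# A minimal sketch using the rouge-score package (pip install rouge-score).
# The reference and candidate summaries are invented for illustration.
from rouge_score import rouge_scorer

reference = "The new model produces shorter, more faithful summaries of news articles."
candidate = "The model generates short and faithful news-article summaries."

# ROUGE-1 measures unigram overlap; ROUGE-L uses the longest common subsequence.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, score in scores.items():
    print(f"{name}: precision={score.precision:.2f}, "
          f"recall={score.recall:.2f}, f1={score.fmeasure:.2f}")
```

Scores like these only reward n-gram overlap with the reference, which is exactly the limitation that motivates moving to embedding-based comparisons.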
However, even these metrics are becoming relatively outdated. In this article, I will use LangSmith, the new tool from LangChain designed for tracing and evaluating the results of Large Language Models (LLMs), to obtain the embedding distance between the generated summaries and a reference summary.
To calculate the distance between the embeddings, LangSmith uses the…
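As a rough sketch of what this evaluation looks like in code, LangChain exposes a built-in "embedding_distance" evaluator that LangSmith evaluation runs can use. The example below is a minimal illustration, not the article's exact code: the two summary strings are invented, and the default configuration assumes an OpenAI API key is available for the embeddings.

```python
# A minimal sketch of LangChain's built-in "embedding_distance" evaluator,
# which LangSmith evaluations can use. By default it embeds both strings
# (with OpenAI embeddings, so an API key is assumed) and returns a
# distance: lower means more similar.
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("embedding_distance")

# The prediction and reference summaries here are invented examples.
result = evaluator.evaluate_strings(
    prediction="The study finds that sleep improves memory consolidation.",
    reference="Researchers report that sleeping well helps consolidate memories.",
)
print(result)  # e.g. {'score': 0.08}; a smaller score means the texts are closer
```

Because the comparison happens in embedding space, two summaries can score as close even when they share few exact words, which is what makes this approach more forgiving than BLEU or ROUGE.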