Evaluate and Monitor the Experiments With Your LLM App
Last Updated on August 1, 2023 by Editorial Team
Author(s): Konstantin Rink
Originally published on Towards AI.
Evaluation and tracking of your LLM experiments with TruLens
Photo by Jonathan Diemel on Unsplash
The development of a Large Language Model application involves many iterations of experimentation. As a developer, your objective is to ensure that the model's answers align with your specific requirements, such as informativeness and appropriateness. This process of repeated testing and evaluation can be quite time-consuming.
This article will show you, step by step, how to automate this process using TruLens, a Python package that provides a set of tools for evaluating and tracking your LLM applications.
A Colab notebook containing all the example code can be found 👉 here.
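To give a flavor of what the article walks through, here is a minimal sketch based on the trulens_eval quickstart at the time of writing: it wraps an existing LangChain chain (assumed here to be named `chain`) with TruChain, attaches a Hugging Face-based feedback function, and opens the local dashboard. Exact class and method names may differ between TruLens versions.

```python
# pip install trulens_eval
from trulens_eval import Tru, TruChain, Feedback, Huggingface

tru = Tru()  # local store for call records and feedback results

# Feedback function: does the answer's language match the question's?
hugs = Huggingface()
f_lang_match = Feedback(hugs.language_match).on_input_output()

# Wrap an existing LangChain chain (assumed to exist as `chain`)
truchain = TruChain(
    chain,
    app_id="my_llm_app",  # hypothetical app name
    feedbacks=[f_lang_match],
)

# Every call is now recorded and evaluated automatically
truchain("What is the capital of France?")

# Inspect the logged experiments in the local Streamlit dashboard
tru.run_dashboard()
```

With this setup, each invocation of the wrapped chain is logged alongside its feedback scores, so you can compare experiment runs side by side instead of re-checking answers manually.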