MLFlow Series 01: RAG Evaluation with MLFlow
Last Updated on November 2, 2024 by Editorial Team
Author(s): Ashish Abraham
Originally published on Towards AI.
A definitive guide to evaluating RAG using MLFlow
Image By Author (Generated By AI)
For a long time, I have been thinking about writing a series of articles on the tools I have found genuinely indispensable in my AI development career. Today, I'm pulling back the curtain on one such super tool: MLFlow. This time, let me walk through some of the best use cases I have had with this framework. Welcome to PART-01 of the series!
Retrieval Augmented Generation (RAG) has become a popular approach for expanding the knowledge base of LLMs. In production, the performance and reliability of such a system are crucial; without them, it offers no practical value to end users. To ensure the system performs as expected, we need powerful evaluation pipelines, and MLFlow offers one of the most complete ways to build them.
In this article, we will explore in detail how to evaluate RAG systems for production using MLFlow; a short sketch of the core call follows the outline below.
· Prerequisites
· Setup RAG Workflow
 ∘ Database Setup
 ∘ Retriever
· Evaluation
 ∘ Define Evaluation Metrics
 ∘ evaluate()
· Wrap up
· References & Resources
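To give a feel for where we are headed, here is a minimal sketch of the core evaluation call. It assumes MLflow 2.x (mlflow.evaluate with the built-in GenAI metrics) and an OpenAI model as the LLM judge; the dataset columns and values here are hypothetical stand-ins, not the ones used later in the article.

import mlflow
import pandas as pd
from mlflow.metrics.genai import answer_relevance, faithfulness

# Hypothetical evaluation set: questions, retrieved context, reference answers,
# and the RAG system's generated answers.
eval_df = pd.DataFrame(
    {
        "inputs": ["What is MLflow?"],
        "context": ["MLflow is an open-source platform for managing the ML lifecycle."],
        "ground_truth": ["MLflow is an open-source platform for the ML lifecycle."],
        "predictions": ["MLflow is an open-source platform that manages the ML lifecycle."],
    }
)

# LLM-judged metrics: faithfulness grades the answer against the retrieved
# context; answer_relevance grades it against the question.
results = mlflow.evaluate(
    data=eval_df,
    targets="ground_truth",
    predictions="predictions",
    model_type="question-answering",
    extra_metrics=[
        faithfulness(model="openai:/gpt-4"),        # the judge model is an assumption
        answer_relevance(model="openai:/gpt-4"),
    ],
    evaluator_config={"col_mapping": {"context": "context"}},
)
print(results.metrics)

Running this logs the aggregate scores, plus per-row scores and judge justifications, to the active MLflow run; that logging pattern is what the rest of the walkthrough builds on.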
The library requirements I am currently using are listed below.
pandas: 2.2.2
datasets: 2.21.0
langchain: …
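For reference, an install command along these lines should reproduce the environment; note that the langchain version is truncated in this excerpt and mlflow itself is not pinned above, so those two are assumptions:

# langchain and mlflow versions are not specified in the excerpt above
pip install pandas==2.2.2 datasets==2.21.0 langchain mlflow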