How to Optimize Chunk Size for RAG in Production?
Last Updated on May 14, 2024 by Editorial Team
Author(s): Mandar Karhade, MD. PhD.
Originally published on Towards AI.
Chunk size can make or break retrieval. Here is how to determine the best chunk size for your use case.
Today, we will examine chunk-size optimization during the development of a RAG application. We will assume a business-specific use case, and we will observe how and where generic approaches to finding the chunk size can fail or excel.
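Before getting into the scenario, it helps to see what "chunk size" actually means in code. Below is a minimal, hypothetical sketch of fixed-size character chunking with overlap; the function name, sizes, and toy document are invented for illustration, and real pipelines often split on tokens or sentences instead.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with a small overlap.

    The overlap keeps a sentence that straddles a boundary retrievable
    from at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Toy 1,500-character "document" purely for illustration
doc = "word " * 300
chunks = chunk_text(doc, chunk_size=500, overlap=50)
print(len(chunks))  # 4 chunks: starts at 0, 450, 900, 1350
```

Tuning `chunk_size` and `overlap` here is exactly the optimization problem the rest of the article discusses: smaller chunks give more precise retrieval hits but less context for the generator; larger chunks do the opposite.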
Let me ramble a little bit! This part is important so that the decisions made later in the article are apparent.
Assume that you are working at your company. The company has a bunch of historical documents that are organized somewhere in SharePoint. Your company has finally decided to invest in Generative AI. Now, you are tasked with creating an application that can find relevant information in the form of answers and provide it to your 200 employees. Let's chunk your task into smaller issues:
- A store of documents (let's say you have 10,000 documents)
- A way to retrieve information
- A way to generate answers
- UI/UX to deliver answers back to your team/users
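The first two pieces above can be sketched in a few lines. This is a deliberately naive, hypothetical illustration: the class names are invented, and the retriever scores documents by keyword overlap rather than the embedding similarity a production RAG system would use.

```python
from collections import Counter

class DocumentStore:
    """Toy in-memory document store (stands in for SharePoint + an index)."""
    def __init__(self) -> None:
        self.docs: dict[int, str] = {}

    def add(self, doc_id: int, text: str) -> None:
        self.docs[doc_id] = text

class KeywordRetriever:
    """Naive retriever: ranks documents by shared-word count with the query."""
    def __init__(self, store: DocumentStore) -> None:
        self.store = store

    def retrieve(self, query: str, top_k: int = 3) -> list[int]:
        q = Counter(query.lower().split())
        scores = {
            doc_id: sum((q & Counter(text.lower().split())).values())
            for doc_id, text in self.store.docs.items()
        }
        ranked = sorted(scores, key=scores.get, reverse=True)
        return ranked[:top_k]

store = DocumentStore()
store.add(1, "Vacation policy and paid time off")
store.add(2, "Quarterly revenue report for finance")
results = KeywordRetriever(store).retrieve("paid vacation policy", top_k=1)
print(results)  # [1] -- the HR document wins on word overlap
```

In a real system, the store would hold chunks (not whole documents), which is why the chunk size chosen upstream directly shapes what this retrieval step can return.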
We will focus only on the store of documents and the way to retrieve information. Two critical issues in a production system are fault tolerance and scalability/latency.
There are two probabilities that we should be worried about: P1, the probability of making a mistake, and P2, the probability of harm.
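To make the two probabilities concrete, here is a hedged back-of-the-envelope sketch. The numeric values are invented for illustration, and the product formula assumes the two events are independent, which the article does not claim.

```python
# Hypothetical values, purely illustrative:
p1 = 0.05  # P1: probability the system returns a mistaken answer
p2 = 0.10  # P2: probability a mistaken answer actually causes harm

# Assuming independence, the per-query chance of a harmful outcome
# is the product of the two probabilities.
p_harm = p1 * p2
print(round(p_harm, 6))  # 0.005 -> roughly one harmful answer per 200 queries
```

Framing production risk this way clarifies why fault tolerance matters: lowering either probability (better retrieval to reduce P1, guardrails and review to reduce P2) shrinks the overall risk multiplicatively.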