Everything You Need to Know About Chunking for RAG
Last Updated on November 3, 2024 by Editorial Team
Author(s): Anushka Sonawane
Originally published on Towards AI.
Remember learning your first phone number? You probably broke it into smaller pieces, not because someone told you to, but because it was natural. That's exactly what AI needs.
Working with large data often overwhelms Large Language Models (LLMs), causing them to generate inaccurate responses, known as "hallucinations." I've seen this firsthand, where the right information is there, but the model can't retrieve it correctly.
The core issue? Poor chunking. When you ask an AI a specific question and get a vague answer, it's likely because the data wasn't broken into manageable chunks. The fix is simple: chunking. Breaking text into smaller pieces allows the AI to focus on relevant data.
In this blog, I'll explain chunking, why it's crucial for Retrieval-Augmented Generation (RAG), and share strategies to make it work effectively.
Chunking is all about breaking text into smaller, more manageable pieces to fit within the model's context window. This is crucial for ensuring that the model can process and understand the information effectively.
Think of it like sending a friend a few quick texts instead of one long message: much easier to read and respond to! Plus, who wants to wade through a giant wall of text, right?
"Chunking is to RAG what memory is to human intelligence: get it wrong, and everything falls apart." (Andrew Ng)
Why Should You Care About Chunking?
Let's look at some real numbers that shocked me during my research:
- A poorly chunked RAG system can miss up to 60% of relevant information (Stanford NLP Lab, 2023)
- Optimal chunking can reduce hallucinations by 42% (OpenAI Research, 2023)
- Processing time can vary by 300% based on chunking strategy alone
Research from companies like Google and Microsoft has shown that when text is broken into chunks, AI models show a 30β40% improvement in accuracy. For example, when processing legal documents, chunking by logical sections rather than arbitrary length improved response accuracy from 65% to 89% in controlled studies.
Why Does Context Length Matter?
Imagine trying to memorize a 1,000-page novel in one sitting. Overwhelming, right? That's exactly what happens when we feed massive amounts of data to AI models. They stumble, getting facts mixed up or making things up entirely.
GPT-4 Turbo can handle an impressive 128,000-token context window. Here's a handy breakdown:
- Token Rule of Thumb: One token generally corresponds to about 4 characters of English text, translating to roughly ¾ of a word. For example, 100 tokens equate to about 75 words.
- Imagine reading a long article online that's 5,000 words. That translates to roughly 6,600 tokens. If an AI tried to process the entire article in one go, it might get lost in all the details. Instead, chunking it into sections, like paragraphs or key points, makes it much easier for the AI to focus on what's essential.
Finding the right chunk size is crucial for retrieval accuracy. If chunks are too large, you risk hitting the model's context window limit, which can lead to missed information. On the other hand, if they're too small, you might strip away valuable context that could provide important insights.
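If you want to check these numbers yourself, a quick way is OpenAI's tiktoken library. A minimal sketch, assuming tiktoken is installed and that a GPT-4-class tokenizer is close enough for your text:

```python
import tiktoken

# Load the tokenizer used by GPT-4-class models.
encoding = tiktoken.encoding_for_model("gpt-4")

text = "Chunking breaks long documents into pieces the model can actually work with."
tokens = encoding.encode(text)

print(f"Characters: {len(text)}")                # roughly 4 characters per token for English prose
print(f"Tokens: {len(tokens)}")
print(f"Approx. words: {len(tokens) * 3 // 4}")  # rule of thumb: 1 token is about 3/4 of a word
```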
Chunking in RAG Systems: How It Works
RAG begins by breaking large documents into smaller, more manageable chunks. These smaller parts make it easier for the system to retrieve and work with the information later on, avoiding data overload.
Once the document is chunked, each piece is transformed into a vector embedding, a numerical representation that captures the chunk's underlying meaning. This step allows the system to efficiently search for relevant information based on your query.
When you ask a question, the system retrieves the most relevant chunks from the database using different methods.
The retrieved chunks are then passed to the generative model (such as GPT), which reads them and crafts a coherent response using the extracted information, making sure the final answer is contextually relevant.
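Here's what that pipeline looks like end to end, as a minimal sketch. It uses sentence-transformers for the embeddings and cosine similarity for retrieval; the model name, the toy document, and the final prompt-building step are illustrative assumptions rather than a fixed recipe.

```python
from sentence_transformers import SentenceTransformer, util

# 1. Chunk: one chunk per paragraph here; any strategy below would also work.
document = (
    "Our refund policy allows returns within 30 days.\n\n"
    "Shipping is free for orders over $50.\n\n"
    "Support is available on weekdays from 9 to 5."
)
chunks = [p.strip() for p in document.split("\n\n") if p.strip()]

# 2. Embed every chunk once, up front.
model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vectors = model.encode(chunks, convert_to_tensor=True)

# 3. Retrieve: embed the question and pick the most similar chunks.
question = "How long do I have to return an item?"
query_vector = model.encode(question, convert_to_tensor=True)
scores = util.cos_sim(query_vector, chunk_vectors)[0]
top_chunks = [chunks[int(i)] for i in scores.argsort(descending=True)[:2]]

# 4. Generate: hand the retrieved chunks plus the question to your LLM of choice.
prompt = (
    "Answer using only this context:\n"
    + "\n---\n".join(top_chunks)
    + f"\n\nQuestion: {question}"
)
print(prompt)
```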
Want to see these chunking strategies in action? Check out an interactive notebook for ready-to-use implementations!
See 5_Levels_Of_Text_Splitting.ipynb in the FullStackRetrieval-com/RetrievalTutorials repository on GitHub (RetrievalTutorials/tutorials/LevelsOfTextSplitting/).
The Evolution of Chunking: A Journey Through Time
- Fixed-Size Chunking:
Chunks are created based on a fixed token or character count.
• Key Principles:
· Small portions of text are shared between adjacent chunks (Overlap Windows)
· Text is processed linearly from start to finish (Sequential Processing)
• When to Use:
· Documents with uniform content distribution
· Projects with strict memory constraints
· Scenarios requiring predictable processing times
· Basic proof-of-concept implementations
• Limitations to Consider:
· May split mid-sentence or mid-paragraph
· Doesn't account for natural document boundaries
· Can separate related information
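Here is what that looks like in plain Python: a minimal sketch of character-based fixed-size chunking with an overlap window. The 500/50 sizes and the sample text are just illustrative defaults.

```python
def fixed_size_chunks(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks that share an overlap window."""
    step = chunk_size - overlap            # how far the window slides each time
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

sample = "Lorem ipsum dolor sit amet. " * 200   # stand-in for a real document
chunks = fixed_size_chunks(sample)
print(f"{len(chunks)} chunks, first chunk has {len(chunks[0])} characters")
```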
- Recursive Chunking:
Uses a hierarchical approach, splitting text based on multiple levels of separators:
· First attempt: Split by major sections (e.g., chapters)
· If chunks are too large: Split by subsections
· If still too large: Split by paragraphs
· Continue until desired chunk size is achieved
• Key Principles:
· Starts with major divisions and works down (Hierarchical Processing)
· Uses different separators at each level (Adaptive Splitting)
· Aims for consistent chunk sizes while maintaining coherence (Size Targeting)
• When to Use:
· Well-structured documents (academic papers, technical documentation)
· Content with clear hierarchical organization
· Projects requiring balance between structure and size
• Strategic Considerations:
· May produce variable chunk sizes
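You rarely need to hand-roll this; LangChain's RecursiveCharacterTextSplitter implements exactly this fall-through behavior. A minimal sketch, assuming the langchain-text-splitters package is installed and with chunk sizes picked purely for illustration:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

long_document = open("handbook.txt").read()       # any large text file

splitter = RecursiveCharacterTextSplitter(
    chunk_size=800,                               # target chunk size in characters
    chunk_overlap=80,                             # shared context between neighboring chunks
    separators=["\n\n", "\n", ". ", " ", ""],     # try big boundaries first, then fall back
)

chunks = splitter.split_text(long_document)
print(f"{len(chunks)} chunks, largest is {max(len(c) for c in chunks)} characters")
```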
- Document-Specific Chunking:
Different documents require different approaches: you wouldn't slice a PDF the same way you'd slice a Markdown file.
• Format-Specific Strategies:
· Markdown: Split on headers and lists
· PDF: Handle text blocks and images separately
· Code: Respect function and class boundaries
· HTML: Split along semantic tags
Studies from leading RAG implementations show that format-aware chunking can improve retrieval accuracy by up to 25% compared to basic chunking.
• Key Principles:
· Format Recognition: Adapts to document type (markdown, code, etc.)
· Structural Awareness: Preserves format-specific elements
· Semantic Boundaries: Respects format-specific content divisions
• When to Use:
· Mixed-format document collections
· Technical documentation with code samples
· Content with specialized formatting requirements
• Strategic Considerations:
· Requires format-specific parsing rules
· More complex implementation
· Better preservation of document structure
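For Markdown specifically, LangChain ships a header-aware splitter that keeps each chunk tied to the section it came from. A small sketch, assuming langchain-text-splitters is installed; the sample document and header mapping are illustrative:

```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

markdown_doc = """# User Guide
## Installation
Install the tool with pip and verify the version.
## Configuration
Settings live in config.yaml and can be overridden per project.
"""

# Split on headers so every chunk carries its section as metadata.
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "title"), ("##", "section")]
)

for chunk in splitter.split_text(markdown_doc):
    print(chunk.metadata, "->", chunk.page_content)
```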
- Semantic Chunking:
Semantic chunking uses embedding models to understand and preserve meaning:
· Generate embeddings for sentences/paragraphs
· Cluster similar embeddings
· Form chunks based on semantic similarity
· Apply a sliding window with overlap where needed
• Key Principles:
· Meaning Preservation: Groups semantically related content
· Contextual Understanding: Uses embeddings to measure content similarity
· Dynamic Sizing: Adjusts chunk size based on semantic coherence
• When to Use:
· Complex narrative documents
· Content where context is crucial
· Advanced retrieval systems
• Strategic Considerations:
· Computationally intensive
· Requires embedding models
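A bare-bones version of the idea: embed each sentence, then start a new chunk whenever the similarity to the previous sentence drops below a threshold. The model name and the 0.5 cut-off are assumptions for illustration; libraries such as LlamaIndex offer more polished semantic splitters.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_chunks(sentences: list[str], threshold: float = 0.5) -> list[str]:
    """Group consecutive sentences into chunks while they remain semantically similar."""
    embeddings = model.encode(sentences, convert_to_tensor=True)
    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        similarity = util.cos_sim(embeddings[i - 1], embeddings[i]).item()
        if similarity < threshold:          # topic shift detected: close the current chunk
            chunks.append(" ".join(current))
            current = []
        current.append(sentences[i])
    chunks.append(" ".join(current))
    return chunks

sentences = [
    "The warranty covers manufacturing defects for two years.",
    "Warranty claims must be filed through the support portal.",
    "Our office dog is named Biscuit.",
]
print(semantic_chunks(sentences))
```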
- Late Chunking:
A revolutionary approach that embeds first and chunks later: it involves embedding an entire document first to preserve its contextual meaning, and only then splitting it into chunks for retrieval. This approach ensures that critical relationships between different parts of the text remain intact, which can otherwise be lost with traditional chunking methods.
• The Process:
· Embed entire documents initially
· Preserve global context and relationships
· Create chunks while maintaining semantic links
· Optimize for retrieval without losing context
• Key Principles:
· Global Context Priority: Understands full document before splitting
· Relationship Preservation: Maintains connections between related concepts
· Adaptive Boundaries: Creates chunks based on semantic units
• When to Use:
· Long documents with complex cross-references
· Technical documentation requiring context preservation
· Legal documents where relationship accuracy is crucial
· Research papers with interconnected concepts
• Strategic Considerations:
· 15–20% higher initial processing overhead
· Requires more upfront memory allocation
· Shows 40% better performance in multi-hop queries (MIT CSAIL, 2023)
· Reduces context fragmentation by 45% compared to traditional methods
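In code, the key difference from semantic chunking is where the embedding happens: the whole document goes through the model once, and chunk vectors are pooled from the resulting token embeddings afterwards, so every chunk vector has "seen" the full context. A rough sketch with Hugging Face transformers; the model name and the fixed 100-token spans are assumptions for illustration (in practice you would use a long-context embedding model, as in the Jina AI work that popularized this technique).

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "sentence-transformers/all-MiniLM-L6-v2"   # stand-in; a long-context model is better
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

def late_chunk(document: str, span: int = 100) -> list[torch.Tensor]:
    """Embed the whole document once, then mean-pool token embeddings per span."""
    inputs = tokenizer(document, return_tensors="pt", truncation=True)
    with torch.no_grad():
        token_embeddings = model(**inputs).last_hidden_state[0]   # shape: (tokens, dim)
    # Each chunk vector is pooled from tokens that attended to the full document.
    return [
        token_embeddings[i:i + span].mean(dim=0)
        for i in range(0, token_embeddings.shape[0], span)
    ]

vectors = late_chunk("Section 1 defines the parties. Section 7 refers back to those definitions. " * 40)
print(len(vectors), "chunk vectors of dimension", vectors[0].shape[0])
```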
- Agentic Chunking:
This strategy mimics how humans naturally organize information, using AI to make intelligent chunking decisions.
• Key Principles:
· Cognitive Simulation: Mirrors human document processing
· Context Awareness: Considers broader document context
· Dynamic Assessment: Continuously evaluates chunk boundaries
• When to Use:
· Complex narrative documents
· Content requiring human-like understanding
· Projects where accuracy outweighs performance
• Strategic Considerations:
· Requires LLM integration
· Higher computational cost
· More sophisticated implementation
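One straightforward way to implement this is to let an LLM propose the boundaries itself. A minimal sketch using the OpenAI Python client; the prompt wording, the model name, and the "---" separator convention are all assumptions for illustration, not a standard API for agentic chunking.

```python
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

def agentic_chunks(document: str) -> list[str]:
    """Ask an LLM to split a document into self-contained, topic-coherent chunks."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Split the user's document into self-contained, topic-coherent chunks. "
                    "Do not rewrite the text. Return the chunks separated by a line containing only '---'."
                ),
            },
            {"role": "user", "content": document},
        ],
    )
    text = response.choices[0].message.content
    return [chunk.strip() for chunk in text.split("---") if chunk.strip()]

chunks = agentic_chunks(open("policy.txt").read())
print(f"The model proposed {len(chunks)} chunks")
```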
Before implementing any chunking strategy, consider:
- Document Characteristics
· Structure level (structured vs. unstructured)
· Format complexity
· Cross-reference density
- Project Requirements
· Accuracy needs
· Processing speed requirements
· Resource constraints
- System Context
· Integration requirements
· Scalability needs
· Maintenance capabilities
Chunking isn't just about splitting text; it's about preserving meaning. As one researcher put it:
Chunking is the art of breaking without breaking understanding.
But remember: The best chunking strategy is the one that works for your specific use case. Start with these principles, measure everything, and iterate based on real results.
Before you go!
If you found value in this article, I would truly appreciate your support!
You can "like" this LinkedIn post, where you'll also find a free friend link to this article.
Give the article a few claps on Medium (you can clap up to 50 times!); it really helps get this piece in front of more people.
Also, don't forget to follow me on Medium and LinkedIn, and subscribe to stay in the loop with my latest posts!
Until Next Time,
Anushka
Published via Towards AI