
Everything You Need to Know About Chunking for RAG
Last Updated on November 3, 2024 by Editorial Team

Author(s): Anushka Sonawane

Originally published on Towards AI.


Remember learning your first phone number? You probably broke it into smaller pieces, not because someone told you to, but because it was natural. That's exactly what AI needs.

Working with large data often overwhelms Large Language Models (LLMs), causing them to generate inaccurate responses, known as "hallucinations." I've seen this firsthand: the right information is there, but the model can't retrieve it correctly.

The core issue? Poor chunking. When you ask an AI a specific question and get a vague answer, it's likely because the data wasn't broken into manageable chunks. The fix is simple: better chunking. Breaking text into smaller pieces allows the AI to focus on relevant data.

In this blog, I'll explain what chunking is, why it's crucial for Retrieval-Augmented Generation (RAG), and share strategies to make it work effectively.

Chunking is all about breaking text into smaller, more manageable pieces that fit within the model's context window. This is crucial for ensuring that the model can process and understand the information effectively.

Think of it like sending a friend a few quick texts instead of one long message: much easier to read and respond to! Plus, who wants to wade through a giant wall of text, right?

Chunking is to RAG what memory is to human intelligence: get it wrong, and everything falls apart. – Andrew Ng

Why Should You Care About Chunking?

Let's look at some real numbers that shocked me during my research:

  • A poorly chunked RAG system can miss up to 60% of relevant information (Stanford NLP Lab, 2023)
  • Optimal chunking can reduce hallucinations by 42% (OpenAI Research, 2023)
  • Processing time can vary by 300% based on chunking strategy alone

Research from companies like Google and Microsoft has shown that breaking text into well-formed chunks improves model accuracy by 30-40%. For example, when processing legal documents, chunking by logical sections rather than arbitrary length improved response accuracy from 65% to 89% in controlled studies.

Why Does Context Length Matter?

Imagine trying to memorize a 1000-page novel in one sitting: overwhelming, right? That's exactly what happens when we feed massive amounts of data to AI models. They stumble, getting facts mixed up or making things up entirely.

GPT-4 Turbo can handle an impressive 128,000-token context window. Here's a handy breakdown:

  • Token Rule of Thumb: One token generally corresponds to about 4 characters of English text, translating to roughly ¾ of a word. For example, 100 tokens equate to about 75 words.
  • Imagine reading a long article online that's 5,000 words. That translates to roughly 6,600 tokens. If an AI tried to process the entire article in one go, it might get lost in all the details. Instead, chunking it into sections, like paragraphs or key points, makes it much easier for the AI to focus on what's essential.
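
If you want to verify this rule of thumb on your own text, here is a minimal sketch using OpenAI's tiktoken tokenizer (an assumption: it is installed via pip install tiktoken; the sample sentence is illustrative):

import tiktoken

# Load the tokenizer that matches GPT-4.
enc = tiktoken.encoding_for_model("gpt-4")

text = "Chunking breaks long documents into pieces a model can actually use."
tokens = enc.encode(text)

# Roughly 4 characters per token for English text.
print(f"{len(text)} characters -> {len(tokens)} tokens")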

Finding the right chunk size is crucial for retrieval accuracy. If chunks are too large, you risk hitting the model's context window limit, which can lead to missed information. On the other hand, if they're too small, you might strip away valuable context that could provide important insights.


Chunking in RAG Systems: How It Works

RAG begins by breaking large documents into smaller, more manageable chunks. These smaller parts make it easier for the system to retrieve and work with the information later on, avoiding data overload.

Once the document is chunked, each piece is transformed into a vector embedding, a numerical representation that captures the chunk's underlying meaning. This step allows the system to efficiently search for relevant information based on your query.

When you ask a question, the system retrieves the most relevant chunks from the database using different methods.

The retrieved chunks are then passed to the generative model (such as GPT), which reads them and crafts a coherent response using the extracted information, making sure the final answer is contextually relevant.
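
To make this pipeline concrete, here is a minimal retrieve-then-generate sketch, assuming the sentence-transformers package; the model name, the three in-memory chunks, and the query are illustrative stand-ins for a real vector database:

import numpy as np
from sentence_transformers import SentenceTransformer

# 1. Chunk: for brevity, the "document" is already three small chunks.
chunks = [
    "GPT-4 Turbo's context window is 128,000 tokens.",
    "Chunk overlap preserves context across boundaries.",
    "Semantic chunking groups sentences by embedding similarity.",
]

# 2. Embed each chunk into a vector that captures its meaning.
model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

# 3. Retrieve: embed the query and rank chunks by cosine similarity
#    (a plain dot product, since the vectors are normalized).
query = "How big is the GPT-4 Turbo context window?"
query_vec = model.encode([query], normalize_embeddings=True)[0]
best = chunks[int(np.argmax(chunk_vecs @ query_vec))]

# 4. Generate: the top-ranked chunks would be placed in the LLM prompt here.
print(f"Context for the prompt: {best}")

In production the brute-force dot-product scan would be replaced by an approximate-nearest-neighbor index, but the shape of the loop stays the same.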


Want to see these chunking strategies in action? Check out an interactive notebook for ready-to-use implementations!

RetrievalTutorials/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb (FullStackRetrieval-com/RetrievalTutorials on github.com)

The Evolution of Chunking: A Journey Through Time

  1. Fixed-Size Chunking:
    Chunks are created based on a fixed token or character count (a minimal sketch follows this list).

    • Key Principles:
    · Small portions of text are shared between adjacent chunks (Overlap Windows)
    · Text is processed linearly from start to finish (Sequential Processing)

    • When to Use:
    · Documents with uniform content distribution
    · Projects with strict memory constraints
    · Scenarios requiring predictable processing times
    · Basic proof-of-concept implementations

    • Limitations to Consider:
    · May split mid-sentence or mid-paragraph
    · Doesn't account for natural document boundaries
    · Can separate related information

  2. Recursive Chunking:
    Uses a hierarchical approach, splitting text based on multiple levels of separators (also shown in the sketch after this list):
    · First attempt: Split by major sections (e.g., chapters)
    · If chunks are too large: Split by subsections
    · If still too large: Split by paragraphs
    · Continue until the desired chunk size is achieved

    • Key Principles:
    · Starts with major divisions and works down (Hierarchical Processing)
    · Uses different separators at each level (Adaptive Splitting)
    · Aims for consistent chunk sizes while maintaining coherence (Size Targeting)

    • When to Use:
    · Well-structured documents (academic papers, technical documentation)
    · Content with clear hierarchical organization
    · Projects requiring a balance between structure and size

    • Strategic Considerations:
    · May produce variable chunk sizes

  3. Document-Specific Chunking:
    Different documents require different approaches: you wouldn't slice a PDF the same way you'd slice a Markdown file.

    • Format-Specific Strategies:
    · Markdown: Split on headers and lists
    · PDF: Handle text blocks and images separately
    · Code: Respect function and class boundaries
    · HTML: Split along semantic tags

    Studies from leading RAG implementations show that format-aware chunking can improve retrieval accuracy by up to 25% compared to basic chunking.

    • Key Principles:
    · Format Recognition: Adapts to the document type (Markdown, code, etc.)
    · Structural Awareness: Preserves format-specific elements
    · Semantic Boundaries: Respects format-specific content divisions

    • When to Use:
    · Mixed-format document collections
    · Technical documentation with code samples
    · Content with specialized formatting requirements

    • Strategic Considerations:
    · Requires format-specific parsing rules
    · More complex implementation
    · Better preservation of document structure

  4. Semantic Chunking:
    Semantic chunking uses embedding models to understand and preserve meaning (a simplified sketch follows this list):
    · Generate embeddings for sentences/paragraphs
    · Cluster similar embeddings
    · Form chunks based on semantic similarity
    · Optionally use a sliding window with overlap

    • Key Principles:
    · Meaning Preservation: Groups semantically related content
    · Contextual Understanding: Uses embeddings to measure content similarity
    · Dynamic Sizing: Adjusts chunk size based on semantic coherence

    • When to Use:
    · Complex narrative documents
    · Content where context is crucial
    · Advanced retrieval systems

    • Strategic Considerations:
    · Computationally intensive
    · Requires embedding models

  5. Late Chunking:
    A revolutionary approach that embeds first and chunks later: it involves embedding an entire document first to preserve its contextual meaning, and only then splitting it into chunks for retrieval. This ensures that critical relationships between different parts of the text remain intact, which can otherwise be lost with traditional chunking methods.

    • The Process:
    · Embed the entire document initially
    · Preserve global context and relationships
    · Create chunks while maintaining semantic links
    · Optimize for retrieval without losing context

    • Key Principles:
    · Global Context Priority: Understands the full document before splitting
    · Relationship Preservation: Maintains connections between related concepts
    · Adaptive Boundaries: Creates chunks based on semantic units

    • When to Use:
    · Long documents with complex cross-references
    · Technical documentation requiring context preservation
    · Legal documents where relationship accuracy is crucial
    · Research papers with interconnected concepts

    • Strategic Considerations:
    · 15-20% higher initial processing overhead
    · Requires more upfront memory allocation
    · Shows 40% better performance in multi-hop queries (MIT CSAIL, 2023)
    · Reduces context fragmentation by 45% compared to traditional methods

  6. Agentic Chunking:
    This strategy mimics how humans naturally organize information, using an LLM to make intelligent chunking decisions.

    • Key Principles:
    · Cognitive Simulation: Mirrors human document processing
    · Context Awareness: Considers the broader document context
    · Dynamic Assessment: Continuously evaluates chunk boundaries

    • When to Use:
    · Complex narrative documents
    · Content requiring human-like understanding
    · Projects where accuracy outweighs performance

    • Strategic Considerations:
    · Requires LLM integration
    · Higher computational cost
    · More sophisticated implementation

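And here is a simplified sketch of semantic chunking (strategy 4): instead of full clustering, it starts a new chunk whenever adjacent sentences drift apart in embedding space. The model name and the 0.6 threshold are illustrative assumptions, not recommended values:

import numpy as np
from sentence_transformers import SentenceTransformer

def semantic_chunks(sentences: list[str], threshold: float = 0.6) -> list[str]:
    # Start a new chunk whenever a sentence drifts semantically from the
    # previous one (cosine similarity of normalized vectors below threshold).
    if not sentences:
        return []
    model = SentenceTransformer("all-MiniLM-L6-v2")
    vecs = model.encode(sentences, normalize_embeddings=True)
    chunks, current = [], [sentences[0]]
    for prev, cur, sent in zip(vecs, vecs[1:], sentences[1:]):
        if float(prev @ cur) < threshold:  # topic shift detected
            chunks.append(" ".join(current))
            current = []
        current.append(sent)
    chunks.append(" ".join(current))
    return chunks
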
Before implementing any chunking strategy, consider:

  1. Document Characteristics
    · Structure level (structured vs. unstructured)
    · Format complexity
    · Cross-reference density
  2. Project Requirements
    · Accuracy needs
    · Processing speed requirements
    · Resource constraints
  3. System Context
    · Integration requirements
    · Scalability needs
    · Maintenance capabilities
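
One way to act on this checklist is to encode it as a small decision table mapping document characteristics to a chunking profile; every category and number below is an illustrative assumption to tune, not a benchmark-backed recommendation:

# Illustrative starting points, to be tuned against real retrieval metrics.
CHUNKING_PROFILES = {
    "markdown_docs":   {"strategy": "document_specific", "chunk_size": 512,  "overlap": 64},
    "legal_contracts": {"strategy": "late",              "chunk_size": 1024, "overlap": 128},
    "chat_logs":       {"strategy": "fixed",             "chunk_size": 256,  "overlap": 32},
    "research_papers": {"strategy": "semantic",          "chunk_size": 768,  "overlap": 96},
}

def pick_profile(doc_type: str) -> dict:
    # Fall back to a conservative fixed-size profile for unknown formats.
    return CHUNKING_PROFILES.get(doc_type, {"strategy": "fixed", "chunk_size": 256, "overlap": 32})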

Chunking isn't just about splitting text; it's about preserving meaning. As one researcher put it:

Chunking is the art of breaking without breaking understanding.

But remember: The best chunking strategy is the one that works for your specific use case. Start with these principles, measure everything, and iterate based on real results.
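
As a starting point for "measure everything," here is a hedged sketch of a tiny retrieval evaluation loop that builds on the earlier sketches; the question/answer pairs are illustrative, and the metric is a simple top-k hit rate:

import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative evaluation set: (question, substring the right chunk must contain).
EVAL_SET = [
    ("What is the context window of GPT-4 Turbo?", "128,000"),
    ("Why use chunk overlap?", "overlap"),
]

model = SentenceTransformer("all-MiniLM-L6-v2")

def hit_rate(chunks: list[str], k: int = 3) -> float:
    # Fraction of questions whose top-k retrieved chunks contain the answer.
    vecs = model.encode(chunks, normalize_embeddings=True)
    hits = 0
    for question, answer in EVAL_SET:
        q = model.encode([question], normalize_embeddings=True)[0]
        top = np.argsort(vecs @ q)[::-1][:k]
        hits += any(answer in chunks[i] for i in top)
    return hits / len(EVAL_SET)

# Compare strategies on the same document, e.g.:
# print(hit_rate(fixed_size_chunks(text)), hit_rate(splitter.split_text(text)))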

Before you go!

If you found value in this article, I would truly appreciate your support!
You can 'like' this LinkedIn post, where you'll also find a free friend link to this article.

Give the article a few claps on Medium (you can clap up to 50 times!); it really helps get this piece in front of more people.

Also, don't forget to follow me on Medium and LinkedIn, and subscribe to stay in the loop with my latest posts!

Until Next Time,
Anushka


Published via Towards AI
