Hands-On LangChain for LLM Applications Development: Documents Splitting [Part 1]
Last Updated on December 30, 2023 by Editorial Team
Author(s): Youssef Hosni
Originally published on Towards AI.
Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is that you may want to split a long document into smaller chunks that can fit into your model's context window.
When you want to work with long pieces of text, you need to split that text into chunks. As simple as this sounds, there is a lot of potential complexity here: ideally, you want to keep semantically related pieces of text together.
LangChain has several built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents. In this two-part practical article, we will look at why document splitting matters, survey the text splitters LangChain provides, and explore four of them in depth.
Why do we need document splitting?
Different types of LangChain splitters
Introduction to the recursive character text splitter & the character text splitter
Diving deep into recursive splitting
PDF loading & splitting [Covered in part 2]
Token splitting [Covered in part 2]
Context-aware splitting [Covered in part 2]
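As a quick preview of the two basic splitters covered below, here is a minimal sketch of how they can be instantiated and applied to a short string. The `chunk_size`, `chunk_overlap`, and `separator` values are illustrative choices rather than recommendations, and the imports assume a recent `langchain` installation.

```python
from langchain.text_splitter import (
    CharacterTextSplitter,
    RecursiveCharacterTextSplitter,
)

# A short piece of text to split; the chunk size is deliberately small
# so the behavior of each splitter is easy to see.
text = (
    "When you want to work with long pieces of text, you need to "
    "split that text into chunks."
)

# Splits on a single separator (a space here) and then merges the pieces
# back together up to chunk_size characters, with chunk_overlap overlap.
c_splitter = CharacterTextSplitter(
    separator=" ",
    chunk_size=40,
    chunk_overlap=5,
)

# Tries a list of separators in order ("\n\n", "\n", " ", "") and keeps
# recursing until each chunk fits within chunk_size, which tends to keep
# semantically related text (paragraphs, then sentences, then words) together.
r_splitter = RecursiveCharacterTextSplitter(
    chunk_size=40,
    chunk_overlap=5,
)

print(c_splitter.split_text(text))
print(r_splitter.split_text(text))
```

Both splitters return a list of strings; when you load files with a document loader, the corresponding `split_documents` method does the same job while preserving each chunk's metadata.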
Most insights I share on Medium have previously been shared in my weekly newsletter, To Data & Beyond.