Retrieval-Augmented Generation, aka RAG — How does it work?
Last Updated on March 28, 2024 by Editorial Team
Author(s): Shahriar Hossain
Originally published on Towards AI.
In the context of Large Language Models (LLMs), RAG stands for Retrieval-Augmented Generation. RAG combines the power of retrieval systems with the generative capabilities of neural networks to enhance the performance of language models.
If you prefer to watch a video on RAG instead of reading, here is my YouTube video covering the topic of this article.
Consider the situation when you have several thousand documents of your organization. The documents can be documentation of policies, maintenance strategies, solutions to specific problems, diagnosis processes, or practically any text relevant to your problem space.
You want an LLM to answer questions based on the numerous documents you provide, not based on what the general-purpose LLM was trained on.
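To make the problem concrete, here is a minimal sketch of the retrieval side of this idea, assuming a toy in-memory document list and simple word-overlap scoring. A real system would use embedding vectors and a vector database instead, but the shape of the step is the same: score every document against the question and keep the best matches.

```python
import re

def words(text):
    """Lowercase a text and split it into a set of words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query, document):
    """Count shared words; a stand-in for embedding similarity."""
    return len(words(query) & words(document))

def retrieve(query, documents, top_k=2):
    """Return the top_k documents most relevant to the query."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

documents = [
    "Pump maintenance: inspect seals every six months.",
    "Travel policy: employees must file expenses within 30 days.",
    "Diagnosis process: check the error log before replacing parts.",
]

print(retrieve("What is the pump maintenance schedule?", documents, top_k=1))
```

With the toy documents above, the pump-maintenance entry shares the most words with the question, so it is the one handed to the LLM.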
There are two ways you can target this problem.
One is RAG, Retrieval-Augmented Generation, which is the topic of this article. The other is fine-tuning, which we are not covering here; I have a separate article on fine-tuning. Here is the link to that article.
Okay, let us discuss Retrieval-Augmented Generation, or RAG. As the name suggests, it has three aspects:

- Retrieval,
- Augmentation, and
- Generation.

[Image: Retrieval-Augmented Generation. The image is drawn using excalidraw.com.]
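The three aspects chain together into one pipeline: retrieve relevant text, augment the prompt with it, then generate an answer. The sketch below wires them up, assuming a toy word-overlap retriever in place of a real vector search and a hypothetical `call_llm` placeholder in place of an actual model API.

```python
import re

def retrieve(query, documents, top_k=2):
    """Retrieval: rank documents by words shared with the query."""
    def words(text):
        return set(re.findall(r"[a-z]+", text.lower()))
    ranked = sorted(documents,
                    key=lambda d: len(words(query) & words(d)),
                    reverse=True)
    return ranked[:top_k]

def augment(query, retrieved_docs):
    """Augmentation: place the retrieved text ahead of the question."""
    context = "\n".join(retrieved_docs)
    return ("Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer:")

def call_llm(prompt):
    # Generation: placeholder for a real model call; swap in your API.
    return "(model answer grounded in the supplied context)"

def rag_answer(query, documents):
    return call_llm(augment(query, retrieve(query, documents)))
```

The key design point is that the LLM itself is unchanged: all of the organization-specific knowledge enters through the prompt, which is why RAG needs no retraining when the documents change.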