What is RAG?
Author(s): Louis Bouchard
Originally published on Towards AI.
Retrieval augmented generation simply explained
Originally published on louisbouchard.ai. Read it two days earlier on my blog!
When using ChatGPT, you have most probably encountered responses like, "I'm sorry, but as of my last knowledge update in January 2022," or even responses that are simply not true. This is where RAG comes in: it injects extra knowledge (or content) into your interactions with an LLM, helping it answer queries it could not handle on its own, including ones about things that happened after its training cutoff.
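To make that "inject more knowledge" step concrete, here is a minimal sketch: a toy retriever picks the passages most relevant to the user's question and prepends them to the prompt before it is sent to the LLM. The documents, the keyword-overlap scoring, and the function names (retrieve, build_prompt) are illustrative assumptions, not code from this article; a real system would typically use embeddings and a vector store instead.

```python
# Minimal RAG sketch: retrieve relevant text, then inject it into the prompt.
# Everything here (documents, scoring, function names) is a toy illustration.

documents = [
    "RAG stands for retrieval augmented generation.",
    "An LLM's knowledge is frozen at its last training cutoff.",
    "Injecting retrieved passages into the prompt gives the LLM fresh context.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the query (a stand-in for real vector search)."""
    query_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend the retrieved passages so the LLM answers from them rather than stale training data."""
    context_block = "\n".join(f"- {passage}" for passage in context)
    return f"Answer using the context below.\nContext:\n{context_block}\n\nQuestion: {query}"

question = "What does RAG do for an LLM?"
prompt = build_prompt(question, retrieve(question, documents))
print(prompt)  # This augmented prompt is what would be sent to the LLM.
```

The design choice is the whole trick: instead of retraining the model, you change what it sees at inference time, so the same LLM can answer questions about content it was never trained on.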
We hear about LLMs, prompts, and RAG everywhere. By now, I think most of us know what an LLM and a prompt are. But did you know that RAG is just as important as both of these, and powers most chatbot applications you may already use? I recently ran a poll in our Learn AI Together Discord community to find out whether people had already studied, built, or used RAG applications, and most voted that they wanted to understand what RAG is used for. RAG is as important as your coursebook is for success in a class, so understanding it is highly relevant in AI.
An LLM, or a large language model, is just an AI model trained on language to talk with humans, like GPT-4 used in… Read the full blog for free on Medium.
Published via Towards AI