LLMs Can Self-Reflect
Last Updated on June 3, 2024 by Editorial Team
Author(s): Vatsal Saglani
Originally published on Towards AI.
Exploring how we can evaluate LLM responses with LLMs
Image generated by ChatGPT
When working with LLMs, we're often unsure about the quality of the output the LLM has generated. This is especially the case when we don't have any LLM Grounding involved.
Linking the output generated by the LLM with real-world information enables more accurate responses; this linking is termed LLM Grounding.
For highly specific use cases, we can give the LLM access to our private data repository to get better responses. Thus, for a specific query, the LLM can retrieve the relevant chunks and documents from the private data repository, use those documents in its context, and generate a response answering the query.
Grounding helps reduce hallucinations and builds a bridge between the LLM's language and reasoning abilities and a private corpus of data that is not part of the LLM's weights. We might've already seen a lot of blogs or YouTube videos about building RAG systems. RAG itself involves grounding LLMs on custom or private data that is not openly available on the internet.
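To make that retrieve-then-generate flow concrete, here is a minimal sketch. It assumes the OpenAI chat completions API; the in-memory document list and the keyword-based retrieve function are illustrative stand-ins for a real vector store over a private data repository.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy in-memory "private repository"; a real system would use a vector store.
DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm UTC.",
    "Enterprise plans include a dedicated account manager.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    # Naive keyword overlap; stands in for embedding similarity search.
    words = set(query.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:top_k]

def grounded_answer(query: str) -> str:
    # 1. Pull the most relevant chunks from the private corpus.
    context = "\n\n".join(retrieve(query))
    # 2. Put those chunks into the model's context and answer the query.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do here
        messages=[
            {"role": "system",
             "content": "Answer strictly using the provided context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(grounded_answer("What is the refund window?"))
```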
We might be wondering: if grounding improves LLM output quality, why consider self-reflection at all?
Grounding only works when the LLM is provided with a set of tools or data to refer to. When using LLMs…
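As a taste of what evaluating LLM responses with LLMs can look like, here is a minimal sketch of asking one LLM call to grade another's answer. The judge prompt, the 1–5 scoring scale, and the model name are illustrative assumptions, not a prescribed method.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are a strict evaluator. Given a question and an answer,
rate the answer from 1 to 5 for correctness and completeness, then give a
one-sentence justification. Reply as: SCORE: <n> | REASON: <text>"""

def evaluate_response(question: str, answer: str) -> str:
    """Ask an LLM to grade another LLM's answer (self-reflection as evaluation)."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # the judge; could be the same model that answered
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user",
             "content": f"Question: {question}\n\nAnswer: {answer}"},
        ],
    )
    return result.choices[0].message.content

print(evaluate_response("What is the capital of France?",
                        "The capital of France is Lyon."))
```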