
Specializing LLMs for Domains: RAG 🧵vs. Fine-Tuning ⚡

Last Updated on February 27, 2024 by Editorial Team

Author(s): peterkchung

Originally published on Towards AI.


Read time ~6 minutes

Large language models are revolutionizing workflows, with new and bigger breakthroughs emerging every day.

However, the larger foundation models often return generic or even misinformed results when applied to specific domains where the user is already well-versed, or perhaps even an expert.

When it comes to domain-specific mastery, two techniques have emerged as the prominent development approaches to amplify the performance of LLMs: Retrieval-Augmented Generation (RAG) and Fine-Tuning.

In this post, we’ll explain the basic processes and requirements for both techniques and the major considerations when deciding which to employ.

Retrieval-Augmented Generation (RAG)

RAG is, in essence, systematized few-shot prompting: when we prompt an LLM, we supply a few examples or references alongside our question to help shape the response. Given a specific domain, this allows an LLM to draw directly on the pieces of information most relevant to a user’s query when generating a response.

Adding RAG to LLM applications introduces a number of additional systems and procedures that need to be taken into consideration, namely the sourcing of documents, the parsing or chunking of those documents, embedding the chunks into vectors, storing and indexing the vectors, and then ultimately searching and retrieving those vectors at runtime. This is all in addition to the user’s query and the interaction with the LLM.
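To make those moving parts concrete, here is a minimal query-time sketch in Python. It is a simplified illustration under several assumptions rather than a prescribed implementation: sentence-transformers is used for embeddings, a plain in-memory list stands in for a vector database, and the final LLM call is left as a placeholder.

```python
# Minimal query-time RAG sketch (assumptions: sentence-transformers for embeddings,
# an in-memory list instead of a real vector store, and a placeholder LLM call).
from sentence_transformers import SentenceTransformer
import numpy as np

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Pre-processed document chunks (in practice: parsed, chunked, and stored in a vector DB).
chunks = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Enterprise customers receive 24/7 priority support.",
    "The API rate limit is 1,000 requests per minute per key.",
]
chunk_vectors = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Embed the query and return the k most similar chunks (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vectors @ q
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def build_prompt(query: str) -> str:
    """Assemble the retrieved context and the user query into a grounded prompt."""
    context = "\n".join(f"- {c}" for c in retrieve(query))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

prompt = build_prompt("How long do customers have to request a refund?")
print(prompt)  # this prompt would then be sent to the LLM of your choice
```

In a real deployment, the chunking strategy, the vector store, and the retrieval parameters (such as k) each become tuning points of their own.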

The diagram below from scriv.ai, which provides an excellent beginner-friendly reference on RAG, demonstrates the process at query time:

https://scriv.ai/guides/retrieval-augmented-generation-overview/

While the benefit of providing factual, relevant ground truth to the LLM is powerful, the substantial preprocessing and additional architecture needed to run such a system should not be overlooked.

Fine-Tuning

Model fine-tuning is adding, altering, or adapting the parameters of an existing model. Functionally, this allows a developer to embed specific pieces of information and language structure directly into the model through these updated weights.

Fine-tuning can be a very intensive and involved process. A full fine-tune involves gathering and processing appropriate datasets, initializing and loading a pre-trained model, iterating through a training loop, and evaluating the output of the newly trained model. This cycle is then repeated until a satisfactory result is reached.
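For a concrete picture of that loop, here is a compact sketch using Hugging Face transformers and PyTorch. The small base model and toy in-memory dataset are assumptions for illustration; a real run would add proper data loading, validation splits, and a genuine evaluation step.

```python
# Sketch of a full fine-tuning loop (assumptions: gpt2 as a small base model and a
# toy in-memory dataset; a real run would use proper data pipelines and evaluation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed small base model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# 1. Gather and process an appropriate dataset (toy domain examples here).
texts = [
    "Q: What is the API rate limit? A: 1,000 requests per minute per key.",
    "Q: How long is the refund window? A: 30 days from purchase.",
]
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
batch["labels"] = batch["input_ids"].clone()
batch["labels"][batch["attention_mask"] == 0] = -100  # ignore padding in the loss

# 2-3. Load the pre-trained model (above) and iterate through a training loop.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    optimizer.zero_grad()
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# 4. Evaluate the newly trained model, then repeat until results are satisfactory.
```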

This process is very succinctly captured by Scribble Data in this graphic:

https://www.scribbledata.io/blog/fine-tuning-large-language-models

Recently, a set of parameter-efficient fine-tuning methods, most notably LoRA, have become more commonplace. Hugging Face’s PEFT library, for example, is used regularly to quickly and succinctly deploy LoRA fine-tunes on the Hugging Face Hub.
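As a rough sketch of what a LoRA fine-tune looks like with the PEFT library, the snippet below wraps a base model with a LoRA configuration. The choice of gpt2 and the hyperparameters are illustrative assumptions; in particular, the right target_modules depend on the architecture of the model you actually use.

```python
# Sketch of a parameter-efficient LoRA fine-tune with Hugging Face's PEFT library
# (assumptions: gpt2 as the base model and typical LoRA hyperparameters).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection layer in GPT-2; varies by model
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# The wrapped model then goes through the same training loop as a full fine-tune;
# afterwards the lightweight adapter can be shared on the Hugging Face Hub, e.g.:
# model.push_to_hub("your-username/your-lora-adapter")  # hypothetical repo name
```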

Major Considerations for RAG vs. Fine-Tuning

While both approaches have been shown to deliver substantial improvements to LLM applications, there are six key dimensions to weigh when deciding between RAG, fine-tuning, or some combination of the two.

  • If your application requires accuracy and a high degree of factuality, RAG will provide the bigger performance boost.
  • If you need a specific style or kind of output from your input (e.g., question-answering, response brevity, structured outputs), fine-tuning is the better route for your application.
  • For interpretability of responses and answer auditing, RAG provides the clearest benefits for users, as cited sources can easily be surfaced alongside the generated response.
  • If the ground truth data you are working with is dynamic in nature, i.e., it has the potential to change, shift, or grow over time, RAG is better suited to capture those evolutions.
  • For performance, RAG generally requires more setup time and has greater system complexity, given the additional architecture required, and it will generally run slower in production because of the extra retrieval steps. Fine-tuning complexity can vary dramatically depending on how deep the developer wants to go, but once training is complete, inference will be faster and latency lower, all else being equal.
  • In terms of cost, fine-tuning will generally carry higher upfront costs (time, money, compute) but lower production and maintenance costs for your application.

These tradeoffs are captured in the table below for easier reference:

Tradeoffs between RAG vs. Fine-Tuning

Mix and Match

While both RAG and fine-tuning enable the development of domain-specific applications, the benefits they provide do not directly overlap. In many cases, an application is well served by applying elements of both.
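As a rough illustration of what such a combination can look like, the sketch below loads a hypothetical LoRA adapter onto a base model and then generates an answer from a retrieval-grounded prompt, reusing the build_prompt helper from the earlier RAG sketch. The adapter name is an assumption, not a real artifact.

```python
# Sketch of combining the two approaches: a fine-tuned (LoRA) model serves as the
# generator, while RAG supplies fresh, citable facts in the prompt at query time.
# Assumptions: a LoRA adapter trained as above (hypothetical name) and the
# build_prompt() helper defined in the earlier RAG sketch.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "your-username/your-lora-adapter")  # hypothetical adapter

prompt = build_prompt("How long do customers have to request a refund?")
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```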

In fact, in a recent research paper entitled “RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture”, the authors found:

Our results show the effectiveness of our dataset generation pipeline in capturing geographic-specific knowledge, and the quantitative and qualitative benefits of RAG and fine-tuning. We see an accuracy increase of over 6 p.p. when fine-tuning the model and this is cumulative with RAG, which increases accuracy by 5 p.p. further.

Ultimately, the decision between RAG, fine-tuning, or some combination of the two comes down to the tradeoffs between cost, time, and performance for the application.

And that’s it. Hopefully, you found this helpful! Please don’t hesitate to reach out with any questions.

Thanks for reading!

Peter Chung is the founder and principal engineer of Innova Forge, a Machine Learning development studio working with enterprise customers and startups to develop and deploy ML and LLM applications.

References & Resources

RAG vs Finetuning – Which Is the Best Tool to Boost Your LLM Application? https://towardsdatascience.com/rag-vs-finetuning-which-is-the-best-tool-to-boost-your-llm-application-94654b1eaba7

How do domain-specific chatbots work? An Overview of Retrieval Augmented Generation (RAG). https://scriv.ai/guides/retrieval-augmented-generation-overview/

Fine-tuning Large Language Models: Complete Optimization Guide. https://www.scribbledata.io/blog/fine-tuning-large-language-models

PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware. https://huggingface.co/blog/peft

Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation). https://magazine.sebastianraschka.com/p/practical-tips-for-finetuning-llms

RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture. https://arxiv.org/abs/2401.08406

Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. https://arxiv.org/abs/2005.11401

LoRA: Low-Rank Adaptation of Large Language Models. https://arxiv.org/abs/2106.09685


Published via Towards AI
