
I Compared PEFT-LoRA vs Full Fine-Tune on OpenAI’s Whisper

Author(s): Tim Cvetko

Originally published on Towards AI.

An experiment assessing the effectiveness of LoRA on LLMs

The need for increasingly domain-specific LLMs is driving a wave of advances aimed at overcoming the limitations of the truly “large” language models. At the expense of generalisability, fine-tuned models are being developed to cover niche reasoning, such as BloombergGPT, FinanceGPT, etc. Now, algorithms like LoRA make LLM fine-tuning possible on local machines.

Photo by Sander Sammy on Unsplash

With that in mind, I wanted to put PEFT-LoRA to the test. Here’s the experiment:

Compare a Whisper model fine-tuned with PEFT-LoRA to a fully fine-tuned Whisper model along these dimensions (see the sketch after this list):

Total Training Time
Inference Speed
Total Benchmark Accuracy
Number of Parameters
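
To make the setup concrete, here is a minimal sketch (not the author’s exact code, which lives in the linked GitHub/Colab) of how PEFT-LoRA can be attached to Whisper using Hugging Face’s transformers and peft libraries. The model size and all LoRA hyperparameters below are illustrative assumptions, not values from the article:

```python
# Illustrative sketch: wrap Whisper with LoRA adapters via Hugging Face peft.
# Model checkpoint and hyperparameters are assumptions for demonstration.
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor for the LoRA update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
)

peft_model = get_peft_model(model, lora_config)
# Reports trainable vs. total parameters, typically ~1% trainable with LoRA.
peft_model.print_trainable_parameters()
```

From here, the LoRA-wrapped model trains like any other Hugging Face model, while the full fine-tune baseline simply updates every parameter of the original checkpoint; the four dimensions above can then be measured for both runs.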

This should give us a bit of perspective on the algorithm's effectiveness. Here’s what this article contains:

Intuitive Understanding of PEFT-LoRA Fine-Tuning
Overview of the Training Process (+ Code + Stats)

Who is this blog post useful for? ML Researchers, but also VCs, consultants, etc.

How advanced is this post? Anybody previously acquainted with ML terms should be able to follow along.

Replicate my code: GitHub or Colab

(Skip to training if you know this stuff)

Def.

PEFT = parameter-efficient fine-tuning. Full fine-tuning can lead to catastrophic forgetting because it updates all of the model’s parameters. Since PEFT only updates a small subset of parameters, it is more robust against this catastrophic-forgetting effect. PEFT is a balance between retaining…
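
To see why only a small subset of parameters gets updated, here is a from-scratch sketch of the LoRA mechanism behind PEFT (my own illustration, not the article’s code): the pretrained weight W is frozen, and only two small low-rank matrices A and B are trained, so the effective weight becomes W + (alpha / r) · BA and the trainable count drops from d_out × d_in to r × (d_in + d_out):

```python
# Minimal LoRA linear layer: frozen base weight + trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # freeze the pretrained layer
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Frozen path plus low-rank update; B starts at zero, so at init
        # the layer behaves exactly like the pretrained base layer.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12,288 trainable params vs. 589,824 in the full weight matrix
```

Only A and B receive gradients, so the pretrained knowledge in W is left untouched, which is exactly what makes PEFT more robust to catastrophic forgetting.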


Published via Towards AI
