Comparative Analysis of Fine-Tuning LLaMA 2 and LLaMA 3 Models with RTX 4090
Author(s): Lorentz Yeung
Originally published on Towards AI.
Picture generated by DALL·E: two digital llamas racing against each other, one labeled "Gen 2" and the other "Gen 3"
When starting LLM operations, a key question is which model to use. As a fan of LLaMA models, I wondered whether LLaMA 3 is necessarily better than LLaMA 2. This analysis compares their practical performance on fine-tuning tasks, particularly under constraints such as limited VRAM and budget.
My PC is an Alienware R16 with an Intel Core i7-14700KF (3.40 GHz) processor and an NVIDIA GeForce RTX 4090 GPU. I previously used an RTX 3070 but found it too slow and prone to running out of VRAM. My NVIDIA-SMI version is 550.76.01, my driver version is 552.44, and my CUDA version is 12.4.
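Before fine-tuning, it is worth confirming that PyTorch can actually see the GPU and which CUDA build it is using. A minimal check, assuming a CUDA-enabled PyTorch install, looks like this:

```python
import torch

# Confirm the GPU is visible to PyTorch before starting any training run.
print(torch.cuda.is_available())        # True if the RTX 4090 is usable
print(torch.cuda.get_device_name(0))    # e.g. "NVIDIA GeForce RTX 4090"
print(torch.version.cuda)               # CUDA version PyTorch was compiled against
```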
The two models under review are LLaMA 2 and LLaMA 3. LLaMA 2 is available on Hugging Face as meta-llama/Llama-2-7b, a 7-billion-parameter model; LLaMA 3 is available as meta-llama/Meta-Llama-3-8B, an 8-billion-parameter model.
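On a single 24 GB card, models of this size are typically loaded in 4-bit precision. As a rough sketch (assuming bitsandbytes-backed quantization via transformers; the exact settings used here may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization keeps a 7B/8B model within the 4090's 24 GB of VRAM.
model_id = "meta-llama/Meta-Llama-3-8B"  # or a LLaMA 2 7B checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU automatically
)
```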
I referenced Luca Massaron's notebook on Kaggle for the base script, modifying it to run locally on my RTX 4090 and to accommodate the two models.
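For reference, the kind of LoRA adapter configuration used in this style of fine-tuning can be set up with peft; the values below are illustrative, not necessarily the ones from the notebook:

```python
from peft import LoraConfig

# Illustrative LoRA settings; the notebook's actual hyperparameters may differ.
lora_config = LoraConfig(
    r=16,                    # rank of the low-rank update matrices
    lora_alpha=32,           # scaling factor for the LoRA update
    lora_dropout=0.05,       # dropout applied inside the adapter layers
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

Training only these small adapter matrices, rather than the full model, is what makes fine-tuning a 7B or 8B model feasible on a single consumer GPU.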
We fine-tuned the models for financial sentiment analysis. The dataset we employed is the FinancialPhraseBank dataset,…
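FinancialPhraseBank is available on the Hugging Face Hub; a minimal loading sketch (assuming the sentences_allagree configuration, which keeps only sentences whose label every annotator agreed on) is:

```python
from datasets import load_dataset

# Each example is a sentence from financial news with a sentiment label
# (0 = negative, 1 = neutral, 2 = positive).
dataset = load_dataset("financial_phrasebank", "sentences_allagree")
print(dataset["train"][0])
```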