Comparative Analysis of Fine-Tuning LLaMA 2 and LLaMA 3 Models with RTX 4090
Last Updated on July 13, 2024 by Editorial Team
Author(s): Lorentz Yeung
Originally published on Towards AI.

Image generated by DALL·E: two digital llamas racing against each other, one labeled "Gen 2" and the other "Gen 3"
When beginning LLM operations, a key question is which model to use. As a fan of LLaMA models, I wondered if LLaMA 3 is necessarily better than LLaMA 2. This analysis compares their practical performance in fine-tuning tasks, particularly under constraints like limited vRAM and budget.
My PC setup includes an Alienware R16 with an Intel(R) Core(TM) i7-14700KF 3.40 GHz processor and an NVIDIA GeForce RTX 4090 GPU. I previously used an RTX 3070 but found it too slow and prone to out-of-vRAM issues. My NVIDIA-SMI version is 550.76.01, the driver version is 552.44, and my CUDA version is 12.4.
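To see why the RTX 3070 ran out of vRAM while the RTX 4090 copes, a back-of-the-envelope estimate helps (my own sketch, not from the original notebook): weight memory is roughly parameter count times bytes per parameter, before activations, optimizer state, and the KV cache add more on top.

```python
def estimate_weight_vram_gib(n_params: float, bytes_per_param: float) -> float:
    """Rough vRAM needed just to hold the model weights, in GiB.

    Ignores activations, optimizer state, and KV cache, all of which
    add substantially more memory during fine-tuning.
    """
    return n_params * bytes_per_param / 2**30

# Llama 2 7B in fp16 (2 bytes/param): ~13 GiB of weights alone --
# hopeless on an 8 GB RTX 3070, comfortable on a 24 GB RTX 4090.
print(round(estimate_weight_vram_gib(7e9, 2), 1))    # -> 13.0
print(round(estimate_weight_vram_gib(7e9, 0.5), 1))  # 4-bit quantized -> 3.3
print(round(estimate_weight_vram_gib(8e9, 0.5), 1))  # Llama 3 8B, 4-bit -> 3.7
```

This is why 4-bit quantization (as used in QLoRA-style fine-tuning) is what makes either model practical on a single consumer GPU.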
The two models under review are LLaMA 2 and LLaMA 3. LLaMA 2 is available on Hugging Face here: meta-llama/Llama-2-7b · Hugging Face, a 7-billion-parameter model. LLaMA 3 can be found here: meta-llama/Meta-Llama-3-8B · Hugging Face, an 8-billion-parameter model.
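For a comparison like this, it helps to keep both checkpoints behind a single switch so the rest of the script stays identical. A minimal sketch (the `MODELS` dict and `hf_repo` helper are my own illustration; the repo IDs are the ones linked above):

```python
# Map a short key to each model's Hugging Face repo ID and size,
# so the fine-tuning script can be parameterized by one string.
MODELS = {
    "llama2": {"repo_id": "meta-llama/Llama-2-7b", "params_b": 7},
    "llama3": {"repo_id": "meta-llama/Meta-Llama-3-8B", "params_b": 8},
}

def hf_repo(model_key: str) -> str:
    """Return the Hugging Face repo ID for a model key."""
    return MODELS[model_key]["repo_id"]

print(hf_repo("llama3"))  # meta-llama/Meta-Llama-3-8B
```

Both repos are gated, so access must be requested from Meta on Hugging Face before the checkpoints can be downloaded.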
I referenced Luca Massaron’s notebook on Kaggle for the base script, modifying it to run locally on my RTX 4090 and to accommodate the two models.
We fine-tuned the models for financial sentiment analysis. The dataset employed is the FinancialPhraseBank dataset,… Read the full blog for free on Medium.
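Fine-tuning a causal LLM for sentiment classification typically means casting each labeled sentence as an instruction-style prompt whose completion is the label. The exact template from the notebook is not shown in this excerpt; the following is an illustrative sketch of the general pattern (function name and wording are my own):

```python
def format_example(sentence: str, sentiment: str = "") -> str:
    """Build an instruction-style prompt for financial sentiment
    classification; the label is left blank at inference time."""
    return (
        "Analyze the sentiment of this financial news statement.\n"
        f"Statement: {sentence}\n"
        f"Sentiment: {sentiment}"
    ).strip()

# Training example: the model learns to emit the label after "Sentiment:".
print(format_example(
    "Operating profit rose to EUR 13.1 mn from EUR 8.7 mn.",
    "positive",
))
```

At inference, the same template with an empty label is fed to the model, and the generated continuation (positive, negative, or neutral) is taken as the prediction.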