
Fine-Tuning Open-Source LLMs for Text-to-SQL: Setting Up a Machine for Fine-Tuning LLMs on WSL2 (article 2 of 3)

Author(s): Lorentz Yeung

Originally published on Towards AI.

Meta's Llama and Alibaba's Qwen are both fine-tuned to their limits. Photo by Gabriel Istvan on Unsplash

If you want to review Part 1 (Project Overview and Motivations), it is here: Fine-Tuning Open-Source LLMs for Text-to-SQL: Project Overview and Motivations (article 1 of 3) on Towards AI. If you want to jump directly to Part 3 (Results and Key Takeaways), go here: https://pub.towardsai.net/fine-tuning-open-source-llms-for-text-to-sql-results-and-key-takeaways-article-3-of-3-2b887951edda?source=friends_link&sk=542b07803d0a04150922f6c36f41e25e

In this article, I’ll walk you through the process of setting up a machine for fine-tuning large language models (LLMs) like Llama 3.1 8B Instruct for a text-to-SQL task using Group Relative Policy Optimization (GRPO). I performed this setup on a high-performance system with an NVIDIA RTX 4090 GPU, running Windows 11 with WSL2 (Ubuntu 22.04). I’ll detail my machine configuration, the steps to install and verify dependencies, the issues I encountered, and how I resolved them. This guide is intended for anyone looking to set up a similar environment for machine learning tasks, and I’ll share my experience to help you avoid common pitfalls.

My Machine Setup

I’m working on a powerful system designed for deep learning tasks:

  • GPU: NVIDIA GeForce RTX 4090 (24GB VRAM)
  • OS: Windows 11 with WSL2 (Ubuntu 22.04)
  • Driver Version: NVIDIA Driver 566.24 (supports CUDA 12.7)
  • Initial CUDA Toolkit: None (installed during the process)
  • Conda: Used for environment management

The RTX 4090 is a beast for deep learning, offering 24GB of VRAM, which is sufficient for fine-tuning an 8B parameter model like Llama 3.1 8B Instruct with 4-bit quantization and LoRA. WSL2 allows me to leverage Linux tools on Windows, using the Windows NVIDIA driver for GPU acceleration.
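As a rough sanity check, here is the back-of-envelope arithmetic behind that claim. The numbers below are illustrative assumptions, not measurements, and the adapter size depends on the LoRA rank and target modules you choose:

# Back-of-envelope VRAM estimate for 4-bit + LoRA fine-tuning of an 8B model.
# All figures are rough assumptions, not measurements.
params = 8e9
weights_4bit = params * 0.5 / 1e9    # 4-bit base weights: ~0.5 bytes/param -> ~4 GB
lora_params = 50e6                   # assumed adapter size (depends on rank/targets)
lora_fp16 = lora_params * 2 / 1e9    # adapter weights in fp16
adam_states = lora_params * 8 / 1e9  # Adam keeps ~8 bytes per trainable param
overhead = 5.0                       # allowance for activations, gradients, CUDA context
total = weights_4bit + lora_fp16 + adam_states + overhead
print(f"~{total:.1f} GB estimated vs 24 GB available")  # prints ~9.5 GB

A total around 9 to 10 GB leaves comfortable headroom on a 24GB card, which is why a single RTX 4090 is workable for this job.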

Step 1: Verify NVIDIA Driver and CUDA Support

Checking the Driver

Since WSL2 uses the Windows NVIDIA driver, I first checked the driver version and CUDA support by running nvidia-smi in the Windows Command Prompt:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.75                 Driver Version: 566.24         CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        On  |   00000000:01:00.0  On |                  Off |
|  0%   29C    P8              6W /  450W |    1707MiB /  24564MiB |      2%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
  • Driver Version: 566.24
  • CUDA Version Supported: 12.7
  • Memory Usage: 1707MiB / 24564MiB (plenty of VRAM available)

The driver supports CUDA 12.7, meaning it can run applications compiled with CUDA 12.7 or earlier (e.g., 12.1). This gave me flexibility in choosing the CUDA toolkit version.

Step 2: Install CUDA Toolkit on WSL2

Initial Attempt and Issue

I started by trying to install CUDA 12.1.0, as it was compatible with PyTorch 2.2.0 (required for the fine-tuning task). I ran:

wget https://developer.download.nvidia.com/compute/cuda/12.1.0/local_installers/cuda_12.1.0_531.14_linux.run

But I encountered an error:

--2025-04-14 13:40:23-- https://developer.download.nvidia.com/compute/cuda/12.1.0/local_installers/cuda_12.1.0_531.14_linux.run
Resolving developer.download.nvidia.com (developer.download.nvidia.com)... 23.48.165.33, 23.48.165.13
Connecting to developer.download.nvidia.com (developer.download.nvidia.com)|23.48.165.33|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2025-04-14 13:40:25 ERROR 404: Not Found.

The URL was invalid, likely because NVIDIA had updated or removed the file by April 2025.

Solution: Find a Working CUDA Version

I resolved this by downloading the same CUDA 12.1.0 release with a different bundled-driver suffix in the filename:

wget https://developer.download.nvidia.com/compute/cuda/12.1.0/local_installers/cuda_12.1.0_530.30.02_linux.run
sudo sh cuda_12.1.0_530.30.02_linux.run

This worked, and the installation completed successfully. One WSL2-specific note: since WSL2 uses the Windows NVIDIA driver, install only the CUDA toolkit and deselect the bundled Linux driver in the installer.

Verify CUDA Installation

I verified the installation:

nvcc --version

Output:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on ...
Cuda compilation tools, release 12.1, V12.1.0

I also set up environment variables:

echo 'export PATH=/usr/local/cuda-12.1/bin${PATH:+:${PATH}}' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc
source ~/.bashrc
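To confirm the new variables actually take effect, a quick check from Python (a small sketch; the paths are the ones set above):

import os, shutil
# expect /usr/local/cuda-12.1/bin/nvcc if PATH was updated correctly
print(shutil.which("nvcc"))
# expect True if LD_LIBRARY_PATH now includes the CUDA 12.1 libraries
print("/usr/local/cuda-12.1/lib64" in os.environ.get("LD_LIBRARY_PATH", ""))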

Step 3: Create a Conda Environment and Install PyTorch

Choosing a Python Version

I needed to install PyTorch 2.2.0 with CUDA 12.1 (cu121). PyTorch 2.2.0 (released February 2024) supports Python 3.8 to 3.11. I chose Python 3.11 for its performance improvements and longevity (supported until October 2027), but I was ready to fall back to 3.10 if dependency issues arose.

conda create --name cu121torch220 python=3.11 -y
conda activate cu121torch220

This created an environment named cu121torch220 with Python 3.11.

Install PyTorch

I installed PyTorch 2.2.0 with CUDA 12.1:

pip install torch==2.2.0 --index-url https://download.pytorch.org/whl/cu121

Verify PyTorch

I verified the installation from a Python shell:

import torch
print(torch.__version__)
print(torch.cuda.is_available())
print(torch.version.cuda)
print(torch.cuda.get_device_name(0))

Output:

2.2.0
True
12.1
NVIDIA GeForce RTX 4090

PyTorch was correctly installed and detected the GPU!
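For a fuller end-to-end check, you can run a small matrix multiply on the GPU. This is optional, but it confirms that CUDA kernels actually execute, not just that the device is visible:

import torch
x = torch.randn(1024, 1024, device="cuda")  # allocate directly on the RTX 4090
y = x @ x                                   # launches a CUDA matmul kernel
torch.cuda.synchronize()                    # wait for the kernel to finish
print(y.norm().item())                      # any finite number means the GPU path works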

Step 4: Install Remaining Dependencies

With PyTorch set up, I installed the dependencies required for fine-tuning:

pip install transformers==4.43.0 datasets==2.20.0 trl==0.8.6 peft==0.11.1 accelerate==0.31.0 bitsandbytes==0.43.1
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install huggingface_hub==0.23.4

These packages include (a minimal sketch after the list shows how they fit together):

  • transformers: For loading and fine-tuning the Llama model.
  • datasets: For loading the text-to-SQL dataset.
  • trl: For GRPO training.
  • peft and bitsandbytes: For efficient fine-tuning with LoRA and 4-bit quantization.
  • unsloth: For optimized fine-tuning on NVIDIA GPUs.
  • huggingface_hub: For model access and uploading to Hugging Face.
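To make the division of labor concrete, here is a minimal sketch of loading the model with these pieces. The model name matches the one used in this series, but the sequence length, LoRA rank, and target modules below are illustrative assumptions, not the values from my actual training script:

from unsloth import FastLanguageModel

# unsloth loads the base model; load_in_4bit uses bitsandbytes under the hood
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",
    max_seq_length=2048,  # assumed sequence length
    load_in_4bit=True,
)

# peft-style LoRA adapters; rank and target modules here are assumptions
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# from here, trl's GRPO trainer takes the model, the tokenizer, and a
# datasets-loaded text-to-SQL dataset for reward-based fine-tuning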

Potential Issue: Dependency Conflicts

I noted that if Python 3.11 caused issues with bitsandbytes or unsloth (e.g., compilation errors), I would recreate the environment with Python 3.10:

conda deactivate
conda env remove -n cu121torch220
conda create --name cu121torch220 python=3.10 -y
conda activate cu121torch220
pip install torch==2.2.0 --index-url https://download.pytorch.org/whl/cu121

Then reinstall the dependencies. However, Python 3.11 worked fine in my case.
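Before recreating the environment, it is worth confirming whether bitsandbytes is actually broken. A quick check (a hedged suggestion) is to import it, since it prints CUDA-setup warnings when something is off; running python -m bitsandbytes from the shell gives a fuller diagnostic report in recent releases:

import bitsandbytes as bnb
import torch
print(bnb.__version__)            # importing without CUDA warnings is a good sign
print(torch.cuda.is_available())  # bitsandbytes needs a working CUDA runtime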

Step 5: Log in to Hugging Face

To access the Llama 3.1 8B Instruct model and later upload my fine-tuned model, I logged into Hugging Face:

huggingface-cli login

I entered my Hugging Face token (obtained from https://huggingface.co/settings/tokens) and ensured I had access to meta-llama/Meta-Llama-3.1-8B-Instruct.
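To confirm the token actually grants access to the gated repository before starting a long download, a small check with huggingface_hub (a sketch; the call raises an error if access has not been approved):

from huggingface_hub import HfApi

api = HfApi()
# raises a gated-repo / HTTP error if the token lacks access
info = api.model_info("meta-llama/Meta-Llama-3.1-8B-Instruct")
print(info.id)  # prints the repo id on success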

Conclusion

Setting up my machine for fine-tuning an LLM on WSL2 was a multi-step process that required careful attention to CUDA versions, driver compatibility, and Python environments. I hit an invalid CUDA download URL along the way, but resolved it by finding a working installer. My final setup uses CUDA 12.1.0, PyTorch 2.2.0, and Python 3.11 in a Conda environment, with all dependencies installed and verified.

This setup is now ready for fine-tuning Llama 3.1 8B Instruct for a text-to-SQL task using GRPO. In the next article, I’ll share the fine-tuning process and evaluation results. If you’re setting up a similar system, I hope this guide helps you navigate the challenges I faced!

Here is the full dependency list of my virtual environment for your convenience:

accelerate==1.6.0
aiohappyeyeballs==2.6.1
aiohttp==3.11.16
aiosignal==1.3.2
annotated-types==0.7.0
anyio==4.9.0
attrs==25.3.0
bitsandbytes==0.45.5
certifi==2025.1.31
charset-normalizer==3.4.1
contourpy==1.3.2
cut-cross-entropy==25.1.1
cycler==0.12.1
datasets==2.21.0
dill==0.3.8
distro==1.9.0
docstring-parser==0.16
filelock==3.13.1
fonttools==4.57.0
frozenlist==1.5.0
fsspec==2024.5.0
h11==0.14.0
hf-transfer==0.1.9
httpcore==1.0.8
httpx==0.27.0
huggingface-hub==0.30.2
idna==3.10
jinja2==3.1.4
jiter==0.9.0
kiwisolver==1.4.8
markdown-it-py==3.0.0
markupsafe==2.1.5
matplotlib==3.10.1
mdurl==0.1.2
mpmath==1.3.0
multidict==6.4.3
multiprocess==0.70.16
networkx==3.3
numpy==2.1.2
openai==1.51.2
pandas==2.2.3
peft==0.13.0
pillow==11.2.1
propcache==0.3.1
protobuf==3.20.3
pyarrow==19.0.1
pyarrow-hotfix==0.6
pydantic==2.11.3
pydantic-core==2.33.1
pyparsing==3.2.3
python-dotenv==1.0.1
pytz==2025.2
pyyaml==6.0.2
regex==2024.11.6
requests==2.32.3
rich==14.0.0
safetensors==0.5.3
sentencepiece==0.2.0
shtab==1.7.2
sniffio==1.3.1
sqlglot==25.1.0
sqlparse==0.5.1
sympy==1.13.1
tokenizers==0.21.1
torch==2.6.0
tqdm==4.67.1
transformers==4.48.2
triton==3.2.0
trl==0.15.2
typeguard==4.4.2
typing-inspection==0.4.0
tyro==0.9.18
tzdata==2025.2
unsloth==2025.3.19
unsloth-zoo==2025.3.17
urllib3==2.4.0
xxhash==3.5.0
yarl==1.19.0

And if you’ve enjoyed the article, I’d be stoked if you’d consider tossing a coin my way via PayPal or becoming a GitHub Sponsor. Your support keeps the train running. To echo James Bently’s words: let’s keep contributing to the AI community and benefiting humanity.

Support the Author

If you found this article useful, please consider donating to my PayPal tip jar!

Go to PayPal.Me/entzyeung and enter the amount.

Your support means the universe to me and allows me to stay on this lonely road of exploration: experimenting, writing articles, making tutorials, …

Thank you!
