Fine-tune Mixtral-8x7B Quantized with AQLM (2-bit) on Your GPU
Last Updated on March 17, 2024 by Editorial Team
Author(s): Benjamin Marie
Originally published on Towards AI.
A surprisingly good and efficient alternative to QLoRA for fine-tuning very large models
Generated with DALL-E
Mixtral-8x7B is one of the best open LLMs. It is also very challenging to fine-tune on consumer hardware. The model occupies 96.8 GB of memory when fully loaded, and fine-tuning requires even more memory to store the optimizer states and training batches. For instance, an H100 GPU with 80 GB of RAM wouldn't be enough.
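A back-of-the-envelope estimate makes these numbers concrete. The sketch below assumes roughly 46.7B parameters for Mixtral-8x7B (an approximation; real loaders also add overhead on top of the raw weights):

```python
# Rough weight-memory estimate for Mixtral-8x7B at different precisions.
# The ~46.7B parameter count is an approximation, not an exact figure.
N_PARAMS = 46.7e9

def model_size_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight memory in decimal GB for a given precision."""
    return n_params * bits_per_param / 8 / 1e9

fp16_gb = model_size_gb(N_PARAMS, 16)   # ~93 GB: too big for one 80 GB H100
int4_gb = model_size_gb(N_PARAMS, 4)    # ~23 GB: the QLoRA regime
aqlm2_gb = model_size_gb(N_PARAMS, 2)   # ~12 GB: fits a 16 GB consumer GPU

print(f"fp16: {fp16_gb:.0f} GB, 4-bit: {int4_gb:.0f} GB, 2-bit: {aqlm2_gb:.0f} GB")
```

Only the 2-bit weights leave meaningful headroom on a 16 GB card for the adapter, optimizer states, and activations.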
In this situation, QLoRA with 4-bit quantization is an appealing solution. It shrinks the model to roughly a quarter of its 16-bit size and keeps the optimizer states small by fine-tuning only a LoRA adapter on top of the frozen quantized model.
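For reference, a typical QLoRA setup looks like the sketch below (a generic example, not the article's exact configuration): 4-bit NF4 quantization via bitsandbytes, plus a small LoRA adapter; the rank and target modules shown are common defaults, not values from the article.

```python
# A generic QLoRA configuration sketch (assumed hyperparameters, not the
# article's exact values). Requires transformers, peft, and bitsandbytes.
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NormalFloat4, the QLoRA data type
    bnb_4bit_compute_dtype="bfloat16",   # compute in bf16 for stability
    bnb_4bit_use_double_quant=True,      # also quantize the quantization constants
)

lora_config = LoraConfig(
    r=16,                                # adapter rank (a common default)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"], # attention projections; adjust per model
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# bnb_config would be passed to AutoModelForCausalLM.from_pretrained(...) and
# lora_config to get_peft_model(...) before training.
```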
Yet, even with QLoRA, we still need 32 GB of GPU memory to fine-tune Mixtral-8x7B.
But what if we could fine-tune Mixtral-8x7B quantized to a lower precision?
For instance, we can quantize Mixtral-8x7B with AQLM to 2-bit with minimal degradation of the model's performance. But are AQLM models easy to fine-tune?
In this article, I show how to fine-tune Mixtral-8x7B quantized with AQLM using only 16 GB of GPU RAM. In other words, we only need a $500 GPU to fine-tune Mixtral. I also discuss how to optimize the fine-tuning hyperparameters to further reduce memory consumption while maintaining good performance. To my surprise, fine-tuning a 2-bit Mixtral is fast…
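On the hyperparameter side, most of the remaining memory savings come from the training configuration rather than the quantization itself. The fragment below is a hedged sketch of memory-oriented settings (all values are illustrative assumptions, not the article's exact choices):

```python
# Illustrative memory-saving training configuration (assumed values,
# not the article's exact hyperparameters).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./mixtral-aqlm-ft",
    per_device_train_batch_size=1,    # tiny batches keep activation memory low
    gradient_accumulation_steps=16,   # recover an effective batch size of 16
    gradient_checkpointing=True,      # trade recompute for activation memory
    optim="paged_adamw_8bit",         # 8-bit paged optimizer states
    learning_rate=1e-4,
    bf16=True,
    logging_steps=10,
)
```

Since only the LoRA adapter is trained, the optimizer states cover a few million parameters rather than tens of billions, which is what keeps the whole run inside 16 GB.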
Published via Towards AI