

Make Any* LLM fit Any GPU in 10 Lines of Code

Last Updated on December 21, 2023 by Editorial Team

Author(s): Dr. Mandar Karhade, MD. PhD.

Originally published on Towards AI.

An ingenious way of running models larger than your GPU's VRAM. It may be slow, but it freaking works!

Who has enough money to spend on a GPU with more than 24 gigabytes of VRAM? Especially when we just want to test a model out, take it for a ride, and play with it by running it locally! We are the tinkerers! And until now there was no practical way to run the larger models on a local machine!

Make that Model fit that GPU

Who said you must load and process all 96 layers of a GPT-3-like large language model at once? AirLLM came up with a genius way of processing layers separately and carrying the calculations across layers one by one. That means for a 70B-parameter model the memory bottleneck is just the single biggest layer, not the whole model: roughly 70B parameters spread over 80 transformer layers works out to under a billion parameters per layer, which is under 2 GB in half precision and easy to fit in a modest amount of VRAM. Store the outputs of one layer, use them to calculate the next one, and keep doing it over and over through the forward (and, if needed, backward) propagation.
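To make the idea concrete, here is a minimal sketch of layer-by-layer inference. It assumes each layer's weights have been saved to their own checkpoint file and that a hypothetical build_layer() helper reconstructs an empty layer module; it illustrates the principle, not AirLLM's actual implementation.

import torch

def layered_forward(hidden_states, layer_files, build_layer, device="cuda"):
    # Run a forward pass while only one layer's weights occupy VRAM at a time.
    for path in layer_files:
        layer = build_layer()                     # hypothetical helper: empty layer module on CPU
        layer.load_state_dict(torch.load(path))   # load just this layer's weights from disk
        layer.to(device)                          # move only this layer onto the GPU

        with torch.no_grad():
            hidden_states = layer(hidden_states.to(device))  # compute this layer's output

        layer.to("cpu")                           # release the VRAM before touching the next layer
        del layer
        torch.cuda.empty_cache()
    return hidden_states

AirLLM packages this same principle behind a familiar interface, so you never have to write the loop yourself.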

Okay, so quickly, here is how the code goes. First, install airllm:

pip install airllm

# alternate way of installation
# pip install -i https://pypi.org/simple/ airllm

Then set up the inference engine using airllm:

# For running LLaMA-2
from airllm import AirLLMLlama2

# For running Qwen
from airllm import AirLLMQWen

# For…
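The post is truncated at this point, but to round out the walkthrough, here is a minimal end-to-end sketch in the spirit of AirLLM's README at the time. The model repository, max_length, and generation arguments are illustrative assumptions, not the author's exact code.

from airllm import AirLLMLlama2

MAX_LENGTH = 128

# Any Llama-2-architecture repo on Hugging Face can go here; this 70B model
# is an illustrative choice.
model = AirLLMLlama2("garage-bAInd/Platypus2-70B-instruct")

input_text = ["What is the capital of the United States?"]

# Tokenize on CPU; only one layer's weights at a time ever occupy the GPU.
input_tokens = model.tokenizer(
    input_text,
    return_tensors="pt",
    truncation=True,
    max_length=MAX_LENGTH,
    padding=False,
)

generation_output = model.generate(
    input_tokens["input_ids"].cuda(),
    max_new_tokens=20,
    use_cache=True,
    return_dict_in_generate=True,
)

print(model.tokenizer.decode(generation_output.sequences[0]))

Expect generation to be slow, since every token requires streaming all the layers through the GPU, but it runs on a single consumer card.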

