
Make Any* LLM fit Any GPU in 10 Lines of Code

Last Updated on December 21, 2023 by Editorial Team

Author(s): Dr. Mandar Karhade, MD. PhD.

Originally published on Towards AI.

An ingenious way of running models larger than your GPU's VRAM. It may be slow, but it freaking works!

Who has the money to spend on a GPU with more than 24 gigabytes of VRAM? Especially when we just want to test a model out, take it for a ride, and play with it by running it locally! We are the tinkerers, and there was no practical way to run larger models on a local machine!

Make that Model fit that GPU

Who said you must load and process all 96 layers of a GPT-3-like large language model at once? AirLLM came up with a genius way of processing layers separately, carrying the calculations across layers one by one. This means that for a 70B-parameter model, the memory bottleneck is the single biggest layer rather than the whole model. A single layer's weights are an easy amount to hold in memory: compute one layer, store its outputs, use them as the input to the next layer, and keep doing that over and over through the forward (and, for training, backward) propagation.
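To see why this works, here is a quick back-of-envelope calculation (assuming fp16 weights and Llama-2 70B's 80 decoder blocks of roughly equal size):

```python
# Back-of-envelope memory math for a 70B-parameter model in fp16.
params = 70e9
bytes_per_param = 2              # fp16
total_gb = params * bytes_per_param / 1e9
print(total_gb)                  # 140.0 -- far beyond a 24 GB card

# With layer-by-layer execution, only one block (plus its activations)
# must be resident at a time. Llama-2 70B has 80 decoder blocks;
# assuming they are roughly equal in size:
n_layers = 80
per_layer_gb = total_gb / n_layers
print(per_layer_gb)              # 1.75 -- fits comfortably in 24 GB
```

The price, of course, is speed: every layer's weights have to be streamed from disk on every forward pass.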

Okay, so quickly, here is how the code goes. First, install the package:

pip install airllm

# alternate way of installation
# pip install -i https://pypi.org/simple/ airllm

Then set up the inference engine using airllm:

# For running LLaMA-2
from airllm import AirLLMLlama2

# For running Qwen
from airllm import AirLLMQWen

# For…

Read the full blog for free on Medium.
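For readers who stop here, the core trick can be sketched in plain Python. This is a toy stand-in, not AirLLM's actual implementation: the point is simply that only one layer's weights are in memory at any moment.

```python
def load_layer(layers_on_disk, i):
    # AirLLM streams each layer's weights from disk into GPU memory;
    # here we just fetch a (scale, bias) pair from a dict to show the flow.
    return layers_on_disk[i]

def layered_forward(x, layers_on_disk):
    """Forward pass that holds only one layer's weights at a time."""
    for i in range(len(layers_on_disk)):
        scale, bias = load_layer(layers_on_disk, i)  # load one layer
        x = [scale * v + bias for v in x]            # compute that layer
        # the layer's weights go out of scope here and can be freed
        # before the next one is loaded
    return x

# Toy "model": three affine layers standing in for transformer blocks.
layers = {0: (2.0, 0.0), 1: (1.0, 1.0), 2: (0.5, 0.0)}
print(layered_forward([1.0, 2.0], layers))  # [1.5, 2.5]
```

The real engine does the same loop with transformer blocks, persisting each block's activations as the input to the next.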

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
