
How to use LLMs locally with ollama and Python

Last Updated on March 14, 2024 by Editorial Team

Author(s): Andrea D’Agostino

Originally published on Towards AI.

This article will walk you through using ollama, a command-line tool that allows you to download, explore, and use Large Language Models (LLMs) on your local PC, whether on Windows, Mac, or Linux, with GPU support.

Photo by Paul Lequay on Unsplash


By the end of this article, you will be able to launch models locally and query them via Python, thanks to a dedicated endpoint provided by ollama. Models will be fully customizable.
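As a preview of where we are headed, here is a minimal sketch of querying a local model through ollama's REST endpoint from Python. It assumes the ollama server is running on its default port (11434) and that a model named llama2 has already been pulled; both are assumptions you would adapt to your setup.

```python
# Query a locally running ollama server via its /api/generate endpoint.
# Assumes: server at localhost:11434, model "llama2" already downloaded.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # "stream": False asks for one complete JSON response
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def query_ollama(model: str, prompt: str) -> str:
    request = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # The non-streaming response carries the generated text
        # under the "response" key.
        return json.loads(response.read())["response"]

# Example (requires a running server):
#   answer = query_ollama("llama2", "Why is the sky blue?")
```

Using only the standard library keeps the sketch dependency-free; in practice you might prefer the `requests` package or ollama's official Python client.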

You’ll learn:

- What ollama is and why it is convenient to use
- How to use ollama’s commands via the command line
- How to use ollama in a Python environment

Let’s get started.

ollama is an open-source tool that allows easy management of LLMs on your local PC.

It supports virtually all of Hugging Face’s newest and most popular open-source models and even allows you to upload new ones directly via its command-line interface to populate ollama’s registry.
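As a quick sketch of what that command-line workflow looks like once ollama is installed (the model name llama2 is only an example):

```shell
# Download a model from the registry to your local machine
ollama pull llama2

# List the models installed locally
ollama list

# Start an interactive session with a model
ollama run llama2

# Remove a model you no longer need
ollama rm llama2
```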

It is available both via GitHub and through the official website, where you can download the versions for Windows, Mac, and Linux.

The GitHub project is available here:

Get up and running with Llama 2,…


