
Run Gemini using the OpenAI API
Last Updated on December 10, 2024 by Editorial Team
Author(s): Thomas Reid
Originally published on Towards AI.
Google’s Gemini model is now OpenAI API compatible
In a recent announcement, Google confirmed that its Gemini large language model is now largely compatible with the OpenAI API.
There are a few exceptions: structured outputs and image uploads, for instance, are currently limited.
However, chat completions, including function calls, streaming, regular question/response, and embeddings, work just fine.
For the rest of this article, I’ll provide some examples of Python code to show how it works.
The model we’ll use is Gemini 1.5 Flash, which is fast and versatile across a wide variety of tasks. Not only that, but it’s very inexpensive to use. Check the Google Gemini docs for more info.
As a prerequisite, you’ll need a Google account and a Gemini API key, which you can get by visiting the link below and following the instructions.
aistudio.google.com
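Once you have a key, a common setup (assuming a bash-style shell) is to export it as an environment variable so that none of your source code ever contains the key itself:

```shell
# Keep the key out of your source files; the OpenAI SDK code
# can then read it with os.getenv("GEMINI_API_KEY").
export GEMINI_API_KEY="your-key-from-ai-studio"
```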
Okay, let’s get started with the coding. First, a note on my setup: I develop on Ubuntu under Windows WSL2. If you’re a Windows user, I have a comprehensive guide on installing WSL2, which you can find here.
Before developing like this, I always create a separate Python development environment where I can install any software needed and experiment with coding. Now, anything…
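For anyone following along, that environment setup can be sketched as below (assuming Ubuntu with Python 3 installed; the environment name is my own choice, and the `openai` SDK is the only third-party package the examples need):

```shell
# Create and activate an isolated Python environment,
# then install the OpenAI SDK into it.
python3 -m venv gemini-env
source gemini-env/bin/activate
python3 -m pip install openai
```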