Run Gemini using the OpenAI API
Last Updated on December 10, 2024 by Editorial Team
Author(s): Thomas Reid
Originally published on Towards AI.
Google's Gemini model is now OpenAI API compatible
In a recent announcement, Google confirmed that its Gemini large language model is now mostly compatible with the OpenAI API framework.
There are a couple of exceptions: structured outputs and image uploading, for instance, are currently limited.
However, chat completions, including function calls, streaming, regular question/response, and embeddings, work just fine.
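To make that concrete, here is a minimal sketch of a plain chat completion going through the standard OpenAI Python client. It assumes the OpenAI-compatible base URL that Google documents for Gemini and an API key stored in a GEMINI_API_KEY environment variable (more on getting a key below); verify both against the current docs.

```python
import os

from openai import OpenAI  # pip install openai

# Point the standard OpenAI client at Gemini's OpenAI-compatible endpoint.
# The base URL is the one Google documents for this feature; check the
# current Gemini docs if it has moved.
client = OpenAI(
    api_key=os.environ["GEMINI_API_KEY"],  # a Gemini key, not an OpenAI key
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what does OpenAI API compatibility mean for Gemini?"},
    ],
)

print(response.choices[0].message.content)
```

The only Gemini-specific parts are the API key, the base_url, and the model name; everything else is the familiar OpenAI client code.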
Image by AI (DALL-E 3)

For the rest of this article, I'll provide some more examples of Python code to show how it works.
The model we'll use is Gemini 1.5 Flash, a fast and versatile performer across a wide range of tasks that is also very cheap to run. Check the Google Gemini docs for more info.
As a prerequisite, you'll need a Google account and a Gemini API key, which you can get by clicking on the link below and following the instructions.
aistudio.google.com
Okay, let's get started with our coding. First, I'm developing on Ubuntu under Windows WSL2. If you're a Windows user, I have a comprehensive guide on installing WSL2, which you can find here.
Before developing like this, I always create a separate Python development environment where I can install any software needed and experiment with coding.
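With the environment ready and the openai package installed (pip install openai), the other features mentioned earlier follow the same pattern. Here is a hedged sketch of streaming and embeddings, reusing the client setup from the first example; the embedding model name text-embedding-004 is an assumption taken from Google's model list, so check the Gemini docs for whatever is currently exposed through this endpoint.

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

# Streaming works exactly as it does with OpenAI models: pass stream=True
# and iterate over the chunks as they arrive.
stream = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[{"role": "user", "content": "Write a haiku about API compatibility."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()

# Embeddings go through the same client. The model name below is an
# assumption; confirm which embedding model Google currently serves
# through this endpoint.
embedding = client.embeddings.create(
    model="text-embedding-004",
    input="The quick brown fox jumps over the lazy dog.",
)
print(len(embedding.data[0].embedding))  # length of the returned vector
```

Function calling is also exposed through the same interface, so existing OpenAI tools-style code should need little more than a model-name change.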
Published via Towards AI