Introduction to Google’s Most Powerful Multimodal Model, Gemini, from a Technical Perspective
Author(s): Florian
Originally published on Towards AI.
On December 6, 2023, Google released its largest and most powerful multimodal model, Gemini.
Through multimodal pretraining, Gemini can understand and reason over diverse inputs such as text, images, audio, and video. According to the report, it is the first model to surpass human-expert performance on the MMLU benchmark, and it demonstrates outstanding performance in code understanding, code generation, and more.
Google’s technical report[1] runs to 62 pages, but the majority is devoted to model evaluation, references, and the list of contributors; relatively few technical details are discussed.
This article provides a brief introduction to this impressive multimodal model, based on the most informative parts of the technical report.
Gemini comes in three model sizes, none of which is currently open-source:

- Ultra: The most powerful model, delivering state-of-the-art performance on a wide range of highly complex tasks, including reasoning and multimodal tasks.
- Pro: A model optimized for cost and latency that still offers significant performance gains across a wide range of tasks.
- Nano: The most efficient model, designed to run on-device. Nano comes in two versions: Nano-1 with 1.8 billion parameters and Nano-2 with 3.25 billion parameters, targeting low-memory and high-memory devices, respectively. Nano is built by distilling larger Gemini models and then quantizing them to 4 bits (a sketch of the quantization step follows this list).

Why build a Nano model instead of directly using the cloud-based Ultra model? I think it’s probably to protect user privacy, so that devices like smartphones don’t have to send…
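To make the 4-bit quantization step more concrete, here is a minimal sketch of symmetric int4 weight quantization in NumPy. This only illustrates the general technique; the report does not disclose Google’s actual quantization scheme, and the function names below are my own.

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Symmetric per-tensor quantization of float weights to 4-bit levels.

    Illustrative sketch only: the Gemini report does not describe
    the quantization scheme actually used for Nano.
    """
    # A signed 4-bit integer covers [-8, 7]; use 7 so the mapping
    # stays symmetric around zero.
    max_abs = float(np.max(np.abs(weights)))
    scale = max_abs / 7.0 if max_abs > 0 else 1.0
    # Round each weight to the nearest integer level and clip to int4 range.
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int4 codes and the scale."""
    return q.astype(np.float32) * scale

# Example: quantize a small random weight matrix and measure the error.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize_int4(q, scale)
print("max abs reconstruction error:", np.max(np.abs(w - w_hat)))
```

Even this toy example shows the trade-off behind an on-device model: 4-bit weights shrink memory several-fold compared with 16- or 32-bit floats, at the cost of rounding error that the distillation step helps the smaller model tolerate.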