How to Build With Chrome’s Latest Built-in AI
Last Updated on August 6, 2024 by Editorial Team
Author(s): Nithur
Originally published on Towards AI.
Setting up Gemini Nano in Your Browser and Building a Practical Use Case With It
Image generated by the author with playgroundai/playground-v2–1024px-aesthetic on Replicate
Gemini Nano, Google’s built-in AI, has been picking up steam lately. Google first announced built-in AI at this year’s I/O event, and the model subsequently shipped in the latest Canary release and Dev channel.
The current default for building AI features on the web is server-side: OpenAI and Anthropic dominate the market, while other key players like Google seemed to be lagging behind. But that is changing now.
My first impression of Gemini Nano is finesse.
Local, private, and offline models are the future. We already have some tools that provide this to a certain extent, like LM Studio and Ollama. But ordinary users don’t bother downloading models to run things locally. That’s where built-in AI comes in.
You can bring top-notch LLM capabilities to your users without compromising their privacy, with no middleman involved, and you can deliver a snappy user experience by eliminating network round trips.
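As a rough sketch of what calling the on-device model looks like, here is a minimal example assuming the experimental `window.ai.createTextSession()` surface from the early Canary/Dev builds. The API is still in flux, so treat every name below as an assumption rather than a stable interface:

```javascript
// Sketch of Chrome's experimental built-in AI (Gemini Nano) Prompt API.
// The window.ai shape below matches early Canary/Dev builds and may have
// changed since; treat these names as assumptions, not a stable API.

// Small pure helper: turn the availability string reported by
// canCreateTextSession() into a human-readable message.
function describeAvailability(status) {
  const messages = {
    "readily": "Gemini Nano is downloaded and ready.",
    "after-download": "Gemini Nano will be available after a model download.",
    "no": "Built-in AI is not available in this browser.",
  };
  return messages[status] ?? `Unknown availability: ${status}`;
}

// Ask the built-in model a question, entirely on-device.
async function askLocalModel(promptText) {
  if (!("ai" in window)) {
    throw new Error("This browser does not expose window.ai.");
  }
  const status = await window.ai.canCreateTextSession();
  console.log(describeAvailability(status));
  if (status === "no") return null;

  const session = await window.ai.createTextSession();
  const reply = await session.prompt(promptText); // resolves with the full response text
  session.destroy(); // free on-device resources when done
  return reply;
}
```

Nothing here leaves the browser: the prompt, the response, and the model all stay on the user’s machine.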
In some cases, you can build offline-first products whose AI features keep working even when your users are not connected to the internet.
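One way to structure such an offline-first feature is to try the on-device model first and fall back to a server endpoint only when built-in AI is unavailable. The sketch below again assumes the experimental `window.ai` names from early builds, and `/api/summarize` is a hypothetical endpoint standing in for your own backend:

```javascript
// Offline-first pattern: prefer the on-device model, fall back to a
// (hypothetical) server endpoint only when built-in AI is unavailable.
// The window.ai names are assumptions based on early Canary builds.

// Pure decision helper: given model availability and connectivity,
// pick which backend to use.
function chooseBackend(aiStatus, online) {
  if (aiStatus === "readily") return "on-device";
  if (online) return "server";
  return "unavailable";
}

async function summarize(text) {
  const aiStatus = "ai" in window
    ? await window.ai.canCreateTextSession()
    : "no";
  const backend = chooseBackend(aiStatus, navigator.onLine);

  if (backend === "on-device") {
    const session = await window.ai.createTextSession();
    try {
      return await session.prompt(`Summarize: ${text}`);
    } finally {
      session.destroy(); // release on-device resources even on error
    }
  }
  if (backend === "server") {
    // Hypothetical fallback endpoint; replace with your own API.
    const res = await fetch("/api/summarize", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text }),
    });
    return (await res.json()).summary;
  }
  throw new Error("No summarization backend available offline.");
}
```

Keeping the decision logic in a pure function like `chooseBackend` makes the fallback behavior easy to unit-test without a browser.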
You need at least Windows 10 or macOS 13, an integrated GPU, and 22 GB of storage (but the model doesn’t take…
Published via Towards AI