JAMBA, the First Powerful Hybrid Model is Here
Last Updated on April 8, 2024 by Editorial Team
Author(s): Ignacio de Gregorio
Originally published on Towards AI.
Toward a Subquadratic Future
For almost six years, nothing has beaten the Transformer, the heart of all Generative AI models.
However, due to the excessive cost of its attention mechanism, many have tried to dethrone it, to no avail.
But we can finally hear the winds of change.
Not to replace the Transformer, but to create hybrids: a new generation of Large Language Models that offer the best of both worlds, top-tier performance with high efficiency.
And we finally have our very first production-grade model, Jamba.
This insight, among others, was previously shared in my weekly newsletter, TheTechOasis.
If you want to stay up to date with the frenetic world of AI while feeling inspired to take action, or at the very least well-prepared for the future ahead of us, this is for you.
🏝 Subscribe below 🏝
The newsletter to stay ahead of the curve in AI
thetechoasis.beehiiv.com
In technology, there's always a trade-off. And in the case of the Transformer, it's a big one.
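That trade-off is attention's quadratic cost: every token attends to every other token, so the work grows with the square of the sequence length. A quick back-of-the-envelope sketch in Python shows why long contexts get expensive (the sequence lengths below are arbitrary examples, not Jamba's numbers):

```python
# Self-attention builds an n-by-n score matrix, so its cost grows
# quadratically with the sequence length n.
for n in [1_000, 2_000, 4_000, 8_000]:
    scores = n * n  # entries in the attention score matrix per layer, per head
    print(f"seq_len={n:>5} -> {scores:>12,} attention scores")
# Doubling the context length quadruples the attention work: this is
# the scaling wall that "subquadratic" architectures aim to break.
```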
And although we aren't going into the technical details of the Transformer for the sake of brevity, here's the gist.
Models like ChatGPT, Gemini, or Claude are all based on a concatenation of Transformer blocks:
Each of these blocks contains two things:
- An attention layer
- A feedforward layer (an MLP)
The former enforces the…
Read the full blog for free on Medium.
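Since the excerpt cuts off here, a minimal sketch of one such block may help make the structure concrete. This is a generic pre-norm Transformer block in PyTorch, not Jamba's actual code; d_model, n_heads, and d_ff are placeholder values:

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """A generic pre-norm Transformer block: an attention layer followed
    by a feedforward (MLP) layer, each wrapped in a residual connection.
    Illustrative only -- not Jamba's code; all dimensions are placeholders."""

    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(  # the feedforward layer
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Attention: every token mixes information from every other token.
        # This is the part whose cost grows quadratically with length.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # Feedforward: applied to each position independently.
        x = x + self.mlp(self.norm2(x))
        return x

# "A concatenation of Transformer blocks" is literally a stack of them:
model = nn.Sequential(*[TransformerBlock() for _ in range(4)])
tokens = torch.randn(1, 16, 512)   # (batch, sequence length, d_model)
print(model(tokens).shape)         # torch.Size([1, 16, 512])
```

Hybrids like Jamba, broadly speaking, keep this recipe but swap part of the stack for cheaper, subquadratic layers, which is how they chase the best of both worlds described above.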
Published via Towards AI