Mixtral 8x7B, The New World’s Best Pound-For-Pound AI
Last Updated on December 21, 2023 by Editorial Team
Author(s): Ignacio de Gregorio
Originally published on Towards AI.
The Power of Combining Experts
Just like any other week these days, a new open-source model has come out.
But this time, it’s different.
Emulating one of the core features that reportedly turned OpenAI’s GPT-4 into the world’s most advanced model (with the possible exception of Gemini Ultra), Mistral’s new model, Mixtral 8x7B, is the first open-source Sparse Mixture-of-Experts foundation model, and it is as impressive as it is performant, making it the best open-source model to date.
But it doesn’t stop there: because only a fraction of its parameters are active for any given token, it runs inference up to six times faster than dense models of comparable quality, making it arguably the best model in the world in terms of performance relative to cost and speed.
Europe seems to have found its AI champion, and today we are going to make sense of this engineering marvel.
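Before diving in, here is a minimal sketch of what “Sparse Mixture-of-Experts” means in practice. This is not Mistral’s implementation: the `SparseMoELayer` name, the layer sizes, and the loop-based dispatch are illustrative assumptions written in PyTorch, though the 8-experts, 2-active-per-token routing mirrors what Mistral describes for Mixtral.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Illustrative sparse Mixture-of-Experts layer with top-2 routing.

    Toy dimensions and a naive dispatch loop; NOT Mistral's actual code.
    """
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # A router (gate) scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.router(x)                              # (n_tokens, n_experts)
        top_vals, top_idx = scores.topk(self.top_k, dim=-1)  # pick 2 experts per token
        weights = F.softmax(top_vals, dim=-1)                # normalize over chosen experts only
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

# Toy usage: 4 tokens are routed across 8 experts, but each token touches only 2 of them.
tokens = torch.randn(4, 512)
print(SparseMoELayer()(tokens).shape)  # torch.Size([4, 512])
```

The thing to notice is that every token activates only 2 of the 8 experts, so the compute per token is a fraction of what the total parameter count suggests, which is where the speed-for-size advantage comes from.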
Most of the insights I share on Medium have previously appeared in my weekly newsletter, TheTechOasis.
If you want to stay up to date with the frenetic world of AI while feeling inspired to take action or, at the very least, be well-prepared for the future ahead of us, this is for you.
🏝 Subscribe below 🏝 to become an AI leader among your peers and receive content not available on any other platform, including Medium:
Published via Towards AI