

Mixture of Experts

Last Updated on April 11, 2024 by Editorial Team

Author(s): Louis-François Bouchard

Originally published on Towards AI.

Mixtral 8x7B explained

Read it two days earlier on my blog!

What you think you know about mixture of experts is wrong. We are not using this technique because each model is an expert on a specific topic. In fact, each of these so-called experts is not an individual model but something much simpler.

Thanks to Jensen Huang, we can now assume that the rumour of GPT-4 having 1.8 trillion parameters is true…

1.8 trillion is 1,800 billion, or 1.8 million million. Suppose you could process one of these parameters per second, which would basically mean doing a complex multiplication with values like these every second. Working alone, it would take you over 57,000 years. Even if all 8 billion people on Earth worked together, one parameter per second each, it would still take us a little under four minutes. Yet transformer models do this in milliseconds.
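The back-of-the-envelope arithmetic is easy to check for yourself:

```python
params = 1.8e12                       # rumoured GPT-4 parameter count
seconds_per_year = 365 * 24 * 3600    # ≈ 31.5 million seconds

# One person processing one parameter per second:
years_alone = params / seconds_per_year
print(round(years_alone))             # ≈ 57,000 years

# All 8 billion people working in parallel:
people = 8e9
minutes_together = params / people / 60
print(minutes_together)               # 3.75 minutes
```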

This is thanks to a lot of engineering, including what we call a “mixture of experts.”

Unfortunately, we don’t have much detail on GPT-4 and how OpenAI built it, but we can take a closer look at a very similar and nearly as powerful model by Mistral AI called Mixtral 8x7B.
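To make the idea concrete before diving in: in a model like Mixtral, each "expert" is just a feed-forward block inside a transformer layer, and a small router picks a couple of them per token. Here is a minimal numpy sketch of that routing step, with random placeholder weights and a plain ReLU feed-forward block standing in for Mixtral's actual SwiGLU experts:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts, top_k = 16, 32, 8, 2  # 8 experts, top-2 routing, as in Mixtral

# Each "expert" is just a feed-forward block (two weight matrices),
# not a separately trained model. Weights here are random placeholders.
experts = [
    (rng.normal(size=(d_model, d_ff)) * 0.1,
     rng.normal(size=(d_ff, d_model)) * 0.1)
    for _ in range(n_experts)
]
router = rng.normal(size=(d_model, n_experts)) * 0.1  # gating weights

def moe_layer(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]            # indices of the k highest-scoring experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                         # softmax over the selected experts only
    out = np.zeros(d_model)
    for gate, idx in zip(gates, top):
        w1, w2 = experts[idx]
        out += gate * (np.maximum(x @ w1, 0) @ w2)  # ReLU FFN stand-in for SwiGLU
    return out

token = rng.normal(size=d_model)
y = moe_layer(token)
print(y.shape)  # (16,)
```

The key point: only 2 of the 8 expert blocks run for any given token, so the layer has the parameter count of 8 feed-forward blocks but roughly the compute cost of 2.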

Image credit: Mistral AI blog.

By… Read the full blog for free on Medium.

Join over 80,000 data leaders on the AI newsletter and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
