Mistral AI (8x7B) Releases First Ever Open-Source Mixture of Experts (MoE) Model
Last Updated on December 21, 2023 by Editorial Team
Author(s): Dr. Mandar Karhade, MD. PhD.
Originally published on Towards AI.
Mistral continues its commitment to the open-source world by releasing the first open Mixture of Experts model, a 56-billion-parameter release (8 expert models of 7 billion parameters each), via a torrent!
A few days ago, we learned that GPT-4 is allegedly a Mixture of Experts model comprising 8 expert models of 220 billion parameters each, giving it a ginormous effective size of roughly 1.76 trillion parameters. To refresh your memory, I wrote an article about it.
The secret "Model of Experts" is out; let's… (towardsai.net)
Long story short, and oversimplified: a Mixture of Experts (MoE) works like an orchestra of models. A router model, the conductor, decides which of the expert models should answer a given question or respond to a given context. The selected expert produces the output, which is returned as the response.
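To make the "conductor" idea concrete, here is a minimal sketch of a hard-routing MoE layer in PyTorch. This is an illustration only, not Mistral's or OpenAI's actual implementation; the class name, the linear gating network, the expert architecture, and the top-1 selection rule are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    """Toy mixture-of-experts layer: a gating network (the "conductor")
    picks one expert per input and returns only that expert's output."""

    def __init__(self, d_model: int, n_experts: int = 8):
        super().__init__()
        # The conductor: scores each expert for every input.
        self.gate = nn.Linear(d_model, n_experts)
        # The experts: small independent feed-forward networks.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model). Route each row to its highest-scoring expert.
        scores = self.gate(x)            # (batch, n_experts)
        chosen = scores.argmax(dim=-1)   # (batch,)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = chosen == i
            if mask.any():
                out[mask] = expert(x[mask])
        return out

moe = Top1MoE(d_model=64)
print(moe(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```

Because only one expert runs per input, the compute cost stays close to that of a single 7B model even though all experts' parameters are loaded.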
There are other ways of conducting/orchestrating, such as getting responses from all experts and then selecting the best one, or weighting the responses from several experts and combining them into a single response, but the core concept is the same: a meta-model acts as a conductor that selects appropriate responses across many experts (hence "Mixture of Experts"). The experts are trained on specific functions or facets of language in a way that makes the overall performance of the model far superior… Read the full blog for free on Medium.
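The weighted variant mentioned above can be sketched the same way: the gate produces a probability distribution over experts, and the layer returns a weighted sum of the top-k experts' outputs. Again, a hedged sketch with assumed sizes and k = 2, not the released model's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy soft-routing variant: combine the k best experts' outputs,
    weighted by the gate's renormalized probabilities."""

    def __init__(self, d_model: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model)
        probs = F.softmax(self.gate(x), dim=-1)              # (batch, n_experts)
        topk_p, topk_idx = probs.topk(self.k, dim=-1)        # (batch, k)
        topk_p = topk_p / topk_p.sum(dim=-1, keepdim=True)   # renormalize over the k picks
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = topk_idx[:, slot]                 # expert chosen in this slot
            w = topk_p[:, slot].unsqueeze(-1)       # its mixing weight
            for i, expert in enumerate(self.experts):
                mask = idx == i
                if mask.any():
                    out[mask] += w[mask] * expert(x[mask])
        return out

moe = TopKMoE(d_model=64, n_experts=8, k=2)
print(moe(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```

The design trade-off is straightforward: top-1 routing is the cheapest, while mixing the top k experts costs roughly k times more compute per input but lets several specialized experts contribute to each response.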
Published via Towards AI