The Microsoft Phi-3-Mini is Mighty Impressive
Author(s): Vatsal Saglani
Originally published on Towards AI.
Phi-3-Mini is a great small language model (SLM) to run locally when building compute-efficient GenAI-powered applications
Image generated by ChatGPT.
The Phi-3-Mini language model was recently released by Microsoft AI. It falls into the category of small language models (SLMs), which offer many of the same capabilities as LLMs. The main difference is that SLMs have far fewer parameters and are trained on smaller, more carefully curated datasets.
According to Microsoft, the Phi-3 models are the most capable and cost-effective small language models (SLMs) available. They've released the Phi-3-mini-4k-instruct and Phi-3-mini-128k-instruct models, both entirely open source under the permissive MIT license. This is also the very first time we have a small language model with a 128k-token context window.
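Both checkpoints are on the Hugging Face Hub, so you can try the model locally with the transformers library. Here's a minimal sketch, assuming the official microsoft/Phi-3-mini-4k-instruct model ID and a recent transformers release; the prompt and generation settings are illustrative choices, not official defaults:

```python
# Minimal local-inference sketch for Phi-3-mini with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Swap in "microsoft/Phi-3-mini-128k-instruct" for the long-context variant.
model_id = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # fp16/bf16 on GPU, fp32 on CPU
    device_map="auto",       # let accelerate place the layers
    trust_remote_code=True,  # needed on transformers versions without native Phi-3 support
)

messages = [
    {"role": "user", "content": "Explain what a small language model is in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

With the 3.8B parameter count, the model fits comfortably on a single consumer GPU in half precision, which is exactly the compute-efficiency story Microsoft is pitching.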
In the upcoming weeks, they'll also release Phi-3-small (7B) and Phi-3-medium (14B).
Let's look at the official Phi-3 model benchmarks.
Benchmarks from Microsoft AI Blog
The tiny Phi-3-Mini model is mighty impressive and punches well above its weight: it is as good as, if not better than, entry-level 7B and 8B LLMs.
The Phi-3-Mini model has only 3.8 billion parameters, yet it beats the likes of Gemma-7B, Mistral-7B, and Llama-3-8B on almost all reasoning, math, and code-generation benchmarks.
The only benchmark on which the Phi-3-Mini model does not perform well is factual knowledge. The reason is most likely capacity: a 3.8-billion-parameter model simply cannot store as much factual world knowledge as larger models.