Some Technical Notes About Phi-3: Microsoft’s Marquee Small Language Model
Last Updated on May 1, 2024 by Editorial Team
Author(s): Jesus Rodriguez
Originally published on Towards AI.
The term small language model (SLM) has been gaining traction in the world of generative AI. Microsoft popularized it with a paper bearing a catchy title, “Textbooks Are All You Need,” which challenged the assumption that capability requires massive scale by training a small coding model solely on textbook-quality data. That paper introduced the original Phi model, which was able to outperform much larger alternatives. It was followed by the release of Phi-2 late last year, signaling Microsoft’s commitment to continuing this line of work. Last week, Microsoft unveiled Phi-3, which, although not that small anymore, outperforms models many times its size.
Recent advances in artificial intelligence have been driven by the global trend of scaling up to ever larger models and datasets. Microsoft Research has been at the forefront of this trend, evolving its language models significantly over the past five years: models that started at around a billion parameters now reach a trillion. Larger models typically perform better, a relationship described by scaling laws. Those laws, however, implicitly assume a fixed recipe for the training data, and they have been challenged by a new generation of models that improve the data itself rather than simply adding more of it.
For instance, Microsoft’s previous models used a mix of web data filtered by language models and synthetic data generated by language models themselves. This approach allowed even small models to reach the performance of much larger counterparts. A prime example was phi-2, which, at 2.7 billion parameters, matched models 25 times its size. In the recent release, Microsoft introduced phi-3-mini, a 3.8-billion-parameter model trained on an even larger and more heavily curated dataset than its predecessors. Remarkably, phi-3-mini is efficient enough to run on modern smartphones.
Here, you can see Phi-3 running on an iPhone, generating 12 tokens per second.
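To see why on-device inference is plausible, a back-of-the-envelope estimate of the weight memory helps. The sketch below is simple arithmetic, and the 4-bit figure lines up with the roughly 1.8 GB the Phi-3 report cites for the quantized phi-3-mini:

```python
# Back-of-the-envelope memory footprint for phi-3-mini's weights.
PARAMS = 3.8e9  # phi-3-mini parameter count

def weight_memory_gib(params: float, bits_per_param: int) -> float:
    """GiB needed to store the weights alone (no KV cache or activations)."""
    return params * bits_per_param / 8 / 1024**3

for bits in (16, 8, 4):
    print(f"{bits}-bit weights: {weight_memory_gib(PARAMS, bits):.2f} GiB")

# 16-bit: ~7.08 GiB, 8-bit: ~3.54 GiB, 4-bit: ~1.77 GiB -- the 4-bit
# footprint is what makes running the model on a recent phone feasible.
```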
Architecture
The phi-3-mini employs a transformer decoder architecture with a default context length of 4K tokens. Microsoft also introduced a long-context variant, phi-3-mini-128K, which extends the context to 128K tokens via LongRoPE to cater to more demanding processing tasks. The model builds on a block structure similar to Llama-2 and shares the same tokenizer, so existing software developed for the Llama-2 family works with it out of the box. Phi-3-mini features 32 layers, 32 attention heads, and a hidden dimension of 3,072.
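As a quick sanity check of these numbers, the published checkpoint’s configuration can be inspected. This is a minimal sketch assuming the Hugging Face model id Microsoft used at release, microsoft/Phi-3-mini-4k-instruct:

```python
from transformers import AutoConfig

# trust_remote_code is needed for checkpoints whose modeling code
# ships with the repository rather than with the transformers library.
config = AutoConfig.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True
)

print(config.num_hidden_layers)        # 32 layers
print(config.num_attention_heads)      # 32 attention heads
print(config.hidden_size)              # 3072 hidden dimension
print(config.max_position_embeddings)  # 4K-token default context
```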
Another model in this series, phi-3-small, has 7 billion parameters, uses a different tokenizer (tiktoken) for better multilingual coverage, and defaults to an 8K-token context length. To keep memory use manageable over long contexts, it relies on grouped-query attention, in which several query heads share a single key-value head, shrinking the KV cache that must be kept around during generation.
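The memory saving from grouped-query attention is easy to quantify, because the KV cache scales with the number of key-value heads rather than query heads. The configuration below is illustrative of a 7B-class model, not phi-3-small’s exact layout:

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_value: int = 2) -> int:
    """Size of the key/value cache: two tensors (K and V) per layer."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_value

# Illustrative 7B-class config: 32 layers, head_dim 128, 8K context.
dense = kv_cache_bytes(layers=32, kv_heads=32, head_dim=128, context_len=8192)
gqa   = kv_cache_bytes(layers=32, kv_heads=8,  head_dim=128, context_len=8192)

print(f"full multi-head KV cache:     {dense / 1024**3:.2f} GiB")  # ~4.00 GiB
print(f"grouped-query (4:1) KV cache: {gqa / 1024**3:.2f} GiB")    # ~1.00 GiB
```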
Training
Microsoft Research’s training regimen for these models diverges from traditional scaling approaches by focusing on the quality of training data over sheer quantity. They utilize a two-phase training process. The first phase employs a wide array of web-sourced data to instill a broad understanding of general knowledge and language. The second phase refines this with a focus on logical reasoning and specialized skills, integrating both refined web data and synthetic data.
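To make the two-phase idea concrete, here is a purely illustrative sketch of how such a curriculum might be expressed. The source names and mixture weights are hypothetical, not Microsoft’s actual recipe:

```python
# Hypothetical two-phase data curriculum in the spirit of the Phi-3 recipe.
# Source names and weights are illustrative only.
PHASE_1 = {                            # broad general knowledge and language
    "filtered_web": 1.0,
}
PHASE_2 = {                            # logical reasoning and niche skills
    "heavily_filtered_web": 0.5,       # web data filtered for educational value
    "synthetic_textbooks": 0.5,        # LLM-generated, textbook-style data
}

def tokens_per_source(mixture: dict[str, float], budget: float) -> dict[str, float]:
    """Split a token budget across sources according to mixture weights."""
    total = sum(mixture.values())
    return {src: budget * w / total for src, w in mixture.items()}

print(tokens_per_source(PHASE_2, budget=1.0e12))  # e.g. a 1T-token phase
```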
The following chart shows the data efficiency of Phi-3 relative to Llama-2.
Post-training
After the initial training phases, phi-3-mini underwent further refinement through supervised fine-tuning (SFT) and direct preference optimization (DPO). These stages targeted specific capabilities such as mathematics, coding, and logical reasoning, while also addressing robustness and safety so the model behaves reliably as an AI assistant.
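Direct preference optimization trains the model directly on preference pairs, with no separate reward model. Below is a minimal PyTorch sketch of the published DPO objective; it is the standard formulation from the DPO paper, not Microsoft’s internal training code:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective: push the policy to prefer the chosen
    response more strongly than the frozen reference model does."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps      # log pi/pi_ref (chosen)
    rejected_ratio = policy_rejected_logps - ref_rejected_logps  # log pi/pi_ref (rejected)
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```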
The Results
Microsoft Research’s testing of the Phi-3 models across standard benchmarks showed strong results; the technical report puts phi-3-mini at 69% on MMLU and 8.38 on MT-Bench, rivaling much larger models such as Mixtral 8x7B and GPT-3.5.
Phi Anywhere
Due to their reduced size, Phi-3 models are particularly suited to environments with limited computing resources. Phi-3-mini is especially adaptable for on-device use, and it benefits from tooling such as ONNX Runtime, which gives it cross-platform support. The smaller size also makes the models easier and cheaper to fine-tune and customize, improves operational efficiency, and reduces latency. Their extended context capabilities let them ingest and process large amounts of text, making them effective for tasks that require deep reasoning over long documents.
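As a simple way to try the model locally, here is a minimal generation sketch using plain transformers. The model id is the one Microsoft published at release; the ONNX Runtime path mentioned above uses a separate package (onnxruntime-genai) whose API is not shown here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # id published at release
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

# Phi-3's instruct checkpoints ship a chat template with the tokenizer.
messages = [{"role": "user", "content": "Explain scaling laws in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```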