
Why Small Language Models Make Business Sense

Author(s): Paul Ferguson, Ph.D.

Originally published on Towards AI.

Image generated by Gemini AI

Small Language Models are changing the way businesses implement AI by providing solutions that operate efficiently using standard hardware.

Despite the attention given to massive AI models, these compact alternatives demonstrate that in the real world, smaller often means smarter, faster, and more cost-effective.

What are SLMs?

Small Language Models (SLMs) are much like the Large Language Models (LLMs) that we are all familiar with (e.g. ChatGPT), except that they are smaller in size.

Model Scale

  • SLMs typically range from a few million to a few billion parameters
  • LLMs are much larger, with tens of billions to trillions of parameters
  • Example: Meta’s Llama 2 comes in 7B (7 billion parameters) and 70B (70 billion parameters) variants, each serving different needs

Practical Implementation

  • Can run on standard computing hardware (see the sketch below)
  • Suitable for mobile devices and edge computing
  • Adaptable to specific business needs through fine-tuning
SLM vs LLM Comparison
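
To make the "run on standard hardware" point concrete, here is a minimal sketch using the Hugging Face transformers library. The model name is only an illustrative example of a small instruction-tuned model, and device=-1 keeps everything on the CPU, so no GPU is assumed.

```python
# A minimal sketch (assuming the Hugging Face `transformers` library is installed)
# showing that a small instruction-tuned model can run on an ordinary CPU.
# The model name below is an illustrative choice, not a recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # a few billion parameters
    device=-1,  # -1 = run on CPU, no GPU required
)

response = generator(
    "Summarise this customer complaint in one sentence: "
    "'My order arrived two weeks late and the box was damaged.'",
    max_new_tokens=60,
)
print(response[0]["generated_text"])
```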

Increased Adoption

One of the key reasons for this growing adoption is that SLMs are ideally suited to the smaller datasets that are typical within most businesses. They can also be more easily fine-tuned to the exact needs of a specific company and its own data.

Clem Delangue, CEO of Hugging Face, predicts that up to 99% of AI use cases could be addressed using SLMs.

Prominent firms such as Microsoft, Google, IBM, and Meta have introduced SLMs, including Phi-3, Gemma, and compact versions of the Llama models, illustrating broad acceptance within the industry.

Suitable Use Cases for LLMs vs SLMs

Key Advantages of Small Language Models

These compact models deliver impressive results with millions rather than billions of parameters.

Their benefits include:

  • Efficiency and Cost-Effectiveness: SLMs require significantly less computational power and memory than LLMs, which makes them faster to train and more affordable to run long-term.
  • Domain-Specific Applications: SLMs perform well in tasks that require specialist knowledge, such as customer support chatbots, real-time translation, document summarisation, and IoT device operations. Their smaller size allows for easier fine-tuning on specific datasets, which lets them excel at particularly niche use cases.
  • Enhanced Privacy: Due to their compact nature, SLMs can operate locally (e.g., on mobile devices). This significantly enhances data protection by limiting the need for cloud-based processing services.
  • Reduced Environmental Impact: Lower computing requirements lead to less energy consumption in both training and inference, which is a crucial factor for companies aiming to achieve sustainability goals.

Technical Foundations

Three key strategies enable the creation of these models:

  • Knowledge Distillation: This process involves training a smaller model to mimic a larger one’s behaviour. For example, a customer service SLM might learn the most common support scenarios from a larger model while remaining compact enough to run on local servers.
  • Model Pruning: This technique involves removing less important connections within the model: through careful pruning, models can often maintain most of their performance while significantly reducing their size.
  • Quantisation: This method optimises how the model stores and processes numerical data. Instead of using high-precision numbers (which require more storage), quantisation uses smaller number formats that maintain acceptable accuracy. For example, decreasing precision from 32 bits to 8 bits can notably shrink model size while generally preserving adequate performance for many business uses (a minimal sketch follows this list).
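
As a rough illustration of the quantisation idea, the sketch below applies PyTorch dynamic quantisation to a toy stand-in model; it assumes the torch library is available, and a real deployment would quantise an actual language model rather than this two-layer example.

```python
# A minimal sketch of post-training dynamic quantisation with PyTorch.
# The tiny model below is a stand-in: the same call applies to any module
# containing nn.Linear layers, including transformer-based language models.
import torch
import torch.nn as nn

# Stand-in "model": weights are stored as 32-bit floats by default
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))

# Convert the Linear layers' weights to 8-bit integers; activations are
# quantised on the fly at inference time
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(model(x).shape, quantised(x).shape)  # same interface, smaller weights
```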

Understanding the Limitations

While SLMs offer many advantages, it’s important to understand some of their limitations:

Task Complexity

  • SLMs are like specialised tools, while LLMs are more like Swiss Army knives
  • They excel at specific tasks but may struggle with broader applications
  • Best suited for focused, well-defined business problems

Input Types

  • Most SLMs work with a single type of input (usually text)
  • Unlike larger models, they typically aren’t multimodal (can’t process images, audio, etc.)
  • For many business uses, this single-focus approach is actually beneficial

Context Window

  • SLMs have smaller context windows, i.e., the amount of text they can process at once
  • Example: SLMs like Llama 3.2 handle 128k tokens, while Gemini 1.5 processes 2 million tokens
  • Solutions like “chunking” help manage longer texts by breaking them into smaller pieces (see the sketch after this list)
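
One common workaround is sketched below: a simple character-based chunker that splits a long document into overlapping pieces small enough for a limited context window. The sizes and the character-based split are illustrative assumptions; production systems often chunk by tokens or sentences instead.

```python
# A minimal sketch of "chunking": splitting a long document into overlapping
# pieces that each fit inside a small context window. Sizes are illustrative.
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into chunks of roughly chunk_size characters, overlapping
    by `overlap` characters so sentences are not cut off abruptly."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

document = "word " * 5000  # placeholder for a long report or contract
pieces = chunk_text(document)
print(f"{len(pieces)} chunks, each short enough for a small context window")
```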

The Road Ahead

SLMs are finding success in several key business applications:

  • Targeted Applications: Businesses with specific, limited-scope language tasks, such as customer feedback analysis or domain-specific queries, can benefit from the efficiency of SLMs.
  • Real-Time Processing: SLMs are well-suited for real-time interactions, such as chatbots and live translation services, as they deliver faster response times due to their smaller size.
  • Data Privacy Concerns: In industries like healthcare and finance, SLMs can process sensitive data locally, helping companies to comply with regulations such as GDPR or HIPAA.
  • AI Agents and Orchestration: SLMs excel as specialised AI agents, each handling specific tasks. Businesses can create systems of these agents working together, combining the efficiency of SLMs with the versatility of having multiple specialised components (a simple sketch follows this list).
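
To give a flavour of what such an orchestration layer might look like, here is a very small sketch; the routing table and the agent functions are hypothetical placeholders, with each function standing in for a call to a fine-tuned SLM.

```python
# A minimal sketch of SLM orchestration: a lightweight router dispatches each
# request to a specialised "agent". The agent functions are placeholders that
# would, in a real system, call dedicated fine-tuned SLMs.
def summarise_feedback(text: str) -> str:
    return f"[summary agent] key points from: {text[:40]}..."

def translate_text(text: str) -> str:
    return f"[translation agent] translated: {text[:40]}..."

AGENTS = {
    "summarise": summarise_feedback,
    "translate": translate_text,
}

def route(task: str, text: str) -> str:
    agent = AGENTS.get(task)
    if agent is None:
        raise ValueError(f"No agent registered for task: {task}")
    return agent(text)

print(route("summarise", "The checkout flow keeps timing out on mobile devices."))
```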

Making the Right Choice for Your Business

When deciding if SLMs are suitable for your company’s needs, here are some factors to keep in mind:

  1. What is the specific task or problem you’re trying to solve?
  2. What are your computational resources and budget constraints?
  3. Do you have specific data privacy requirements?
  4. What is the expected volume and frequency of model usage?
  5. Do you need real-time processing capabilities?

SLMs represent a practical approach to AI implementation, offering an effective balance of capability and efficiency for businesses seeking reliable, cost-effective AI solutions.

If you’d like to find out more about me, please check out www.paulferguson.me, or connect with me on LinkedIn.

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
