Name: Towards AI Legal Name: Towards AI, Inc. Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Read by thought-leaders and decision-makers around the world. Phone Number: +1-650-246-9381 Email: pub@towardsai.net
228 Park Avenue South New York, NY 10003 United States
Website: Publisher: https://towardsai.net/#publisher Diversity Policy: https://towardsai.net/about Ethics Policy: https://towardsai.net/about Masthead: https://towardsai.net/about
Name: Towards AI Legal Name: Towards AI, Inc. Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Founders: Roberto Iriondo, , Job Title: Co-founder and Advisor Works for: Towards AI, Inc. Follow Roberto: X, LinkedIn, GitHub, Google Scholar, Towards AI Profile, Medium, ML@CMU, FreeCodeCamp, Crunchbase, Bloomberg, Roberto Iriondo, Generative AI Lab, Generative AI Lab VeloxTrend Ultrarix Capital Partners Denis Piffaretti, Job Title: Co-founder Works for: Towards AI, Inc. Louie Peters, Job Title: Co-founder Works for: Towards AI, Inc. Louis-François Bouchard, Job Title: Co-founder Works for: Towards AI, Inc. Cover:
Towards AI Cover
Logo:
Towards AI Logo
Areas Served: Worldwide Alternate Name: Towards AI, Inc. Alternate Name: Towards AI Co. Alternate Name: towards ai Alternate Name: towardsai Alternate Name: towards.ai Alternate Name: tai Alternate Name: toward ai Alternate Name: toward.ai Alternate Name: Towards AI, Inc. Alternate Name: towardsai.net Alternate Name: pub.towardsai.net
5 stars – based on 497 reviews

Frequently Used, Contextual References

TODO: Remember to copy unique IDs whenever it needs used. i.e., URL: 304b2e42315e

Resources

Our 15 AI experts built the most comprehensive, practical, 90+ lesson courses to master AI Engineering - we have pathways for any experience at Towards AI Academy. Cohorts still open - use COHORT10 for 10% off.

Publication

From 10 to 10,000 Users: The AI Agent Scaling Playbook
Latest   Machine Learning

From 10 to 10,000 Users: The AI Agent Scaling Playbook

Author(s): Dhruv Tiwari

Originally published on Towards AI.

From 10 to 10,000 Users: The AI Agent Scaling Playbook

You built an AI Agent that shimmered with promise, a dazzling MVP that aced every demo. It was brilliant in controlled settings. But then, the real world hit.

McDonald’s AI drive-thru butchering orders, or self-driving taxis freezing on busy streets? These aren’t minor glitches; they’re stark reminders of the vast chasm between a promising prototype and an AI system capable of handling the sheer chaos of real-life scale.

Scaling isn’t just about more servers; it’s about fundamentally re-engineering for robustness, continuous learning, ethical safeguards, and an operational backbone that adapts at an industrial pace. Without this strategic playbook, even the most groundbreaking AI MVP remains a fragile dream, vulnerable to collapsing under the very success it was designed to achieve.

Problems that will arise when building for 10,000+ users

1. Security

Adversarial use is a huge pain when you expect your Agent to be used by 1000s of users. They will try jailbreaking, prompt injection attacks to try to take control of your agent and make it do certain tasks or reveal sensitive knowledge.

Also, for companies the Agents might respond against the goals of the firm which is becoming a problem for enterprise deployments.

2. Hallucination

LLMs have a knowledge space and if the prompt makes the LLM go near a knowledge space where its been trained less or not trained at all they will result in a hallucination which is a made up answer, because they have to give an answer whether correct or not.

3. Latency

Infrastructure is a problem, especially for those GPU-hungry models. If your AI Agents have a high latency, then chances are high that you might never be able to make a sale.

Figuring out which models to use and how to decrease the latency is dependent on the Architecture that you’ve built. It's not about calling an LLM and receiving a response, nowadays Agents are complex orchestrations who need to be architected keeping in mind Inputs, Outputs, and Target User

4. Observability

AI Agents cannot be debugged and are unintuitive to work with for developers who expect a program to return a specific output. Debugging or tracing can be done to a certain extent on these black boxes by Traces, Evals, and prompts.

5. Reproducibility and Alignment

How do you know how an AI agent is going to perform? There’s no guarantee your agents will perform consistently in production

6. Large context window

Keeping the entire context window for every interaction is a computational and financial bottleneck, especially as conversations grow longer and user numbers increase.

Deploying to production needs a collection of techniques to be used to stay resilient and robust. Running Evals, adding observability, memory and guardrails to ensure you don’t ship rogue to production or lose trust in your customers.

Evaluation (Evals)

Scaling AI agents to production environments with 10,000+ users demands a robust evaluation framework. Success hinges not only on raw accuracy but also on efficiency, reliability, robustness, cost, and user experience.

Bechmarks

Benchmarks assess AI models on datasets, tasks, workflows, or through human evaluation, such as blind testing or checking accuracy on math problems. When developing scalable AI agents, benchmarks guide model selection.

Eg. for a problem-solving task in an agent workflow, I would consult a mathematics benchmark to choose the top-performing model while ensuring other factors are also taken into consideration.

Human in the Loop

This is a manual way of evaluating LLMs and improving their performance. Humans judge the output of LLMs in metrics, they can judge in the following ways.

  1. pointwise scoring
  2. pairwise comparison
  3. chain of thought

LLM as a Judge

Using LLMs to grade the answers given by agents is and automated way of human evaluation, these LLMs have to be prompted precisely as they are fragile to prompts. Use this for scale and ease, HITL works better for critical tasks.

To improve the performance of our llm judge we can collect 10–20 human annotated examples and compare the performance of these llms to that of humans, and since they are sensitive to prompting we can tune the prompting to increase the correlation of humans and llms!

Metrics

Metrics quantify an AI agent's performance, robustness, and adaptability in the real world. They can help you better understand the Agent’s responses and use them to tune the agent in the right direction. It helps companies tune their agents to give an ethical response while also ensuring the latency is perfect.

Core metrics for evals

  • Latency — Affects user experience; low latency means responsiveness.
  • Token usage — Correlates with cost and speed.
  • Success rate — Indicates if the agent meets its objectives.
  • Robustness — Ensures the agent isn’t brittle under unexpected inputs.
  • Adaptability — Important for long-term usefulness.
  • Reliability — Builds trust with repeatable outcomes.
  • Cost — Impacts feasibility and scalability.

Observability

Continuously collecting, analyzing, and visualizing detailed data about agent behavior, decision paths, tool usage, and system interactions. This goes beyond traditional monitoring by capturing not just performance metrics (like latency and errors) but also the reasoning, memory, and dynamic workflows unique to AI agents

Tools that are used in this space

After the MLOps hype in 2020, most of the startups have started to jump over to do the same thing in this space.

  • AgentOps — Purpose-built for multi-agent systems, easier to set up and concise.
  • Lanfuse — LLM focused apps, open source.
  • LangSmith — Deep LLM tracing, more fine-grained control compared to others.
  • Enterprises -> Arize, Datadog, and Dynatrace

Guardrails

AI agents require implementing comprehensive architecture-level practices that ensure safety, reliability, and compliance at scale.

Guardrails have to be added across the pipeline

  • Secure from prompt injection
  • Limit tool use by using IAM, access control
  • Store false data in the memory
  • Optimize for unsafe goals
  • Safe outputs

AWS offers Bedrock guardrails, which can be integrated with CrewAI and LangGraph

Guardrails AI is an open-source framework for various types of guardrails that can be added to your AI project quickly.

Model armor is similar to Bedrock guardrails but is in the GCP.

Memory

It helps in coherence of chats, when we are talking to an actual person, they remember our previous chats as a summary, and our current chat clearly. Similarly we try to architect memory in the same way by having:

Short-term memory

Simple RAG based on the current session of the human with the LLM. The RAG stores all chats in a condensed form as short term memories.

Long-term memory

This is usually a database with condensed, summarized memories based on past chats.

It helps LLM understand and adapt to its environment and feel more human, not like a machine that forgets what you told it the previous day. This can also aid in storing user preferences, which will personalize the chats as more data/memories are stored.

Entity memory

LLMs do not understand people and their relationships. Take for example, “Sarah from accounting always needs expense reports in PDF format, not Excel,” but for the LLM, “Sarah” is just another human.

The game changes when we associate certain memories with Sarah, as then the LLM knows the persona of Sarah.

Contextual memory

The LLMs do not understand these various types of memories, and they just need a prompt that contains various memories. All the previous types of memories are actually components of contextual memory.

When you go and apply these techniques of memory you would need to choose how, as it won’t be productive to reinvent the wheel.

Providers of Memory

  • Mem0.ai
  • Letta (prev. MemGPT)

Memory also helps in performance as the LLMs won’t need to recalculate or reason, and they will have access to their long-term memory, which contains a gist of past chats and reasoning.

Now, how would you know the performance of this beautiful memory actually working in brutal production with 1000s of users? Observability!

That's it!

The AI agent gold rush is real, but most teams will crash and burn at scale. The difference between success and failure isn’t your model choice or fancy architecture; it’s whether you built proper evaluation, observability, and memory systems before you needed them. Don’t be the team scrambling to add guardrails after your agent goes rogue in production

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI


Take our 90+ lesson From Beginner to Advanced LLM Developer Certification: From choosing a project to deploying a working product this is the most comprehensive and practical LLM course out there!

Towards AI has published Building LLMs for Production—our 470+ page guide to mastering LLMs with practical projects and expert insights!


Discover Your Dream AI Career at Towards AI Jobs

Towards AI has built a jobs board tailored specifically to Machine Learning and Data Science Jobs and Skills. Our software searches for live AI jobs each hour, labels and categorises them and makes them easily searchable. Explore over 40,000 live jobs today with Towards AI Jobs!

Note: Content contains the views of the contributing authors and not Towards AI.