
AI Agents with AutoGen on Azure Functions

Last Updated on February 12, 2025 by Editorial Team

Author(s): Naveen Krishnan

Originally published on Towards AI.

This comprehensive guide walks you through building and deploying an AutoGen 0.4 multi-agent application on Azure Functions.

Image Source: unsplash.com

The accompanying code for this tutorial is available here.

In the era of serverless computing and advanced AI applications, integrating powerful frameworks with cloud-native solutions is key to building scalable, intelligent systems. Here, we explore how to run an Autogen 0.4 multiagent app on an Azure Function App.

In this post, we’ll cover:

  • An introduction to Autogen and multiagent architectures
  • Why Azure Functions make an excellent platform for such applications
  • A detailed walkthrough of the sample code
  • How the individual components (like the Bing search tool and Azure OpenAI client) work together
  • Deployment instructions on Azure Functions
  • Use cases

1. Understanding Autogen and Multiagent Architectures

What Is Autogen?

Autogen is a framework designed to facilitate multiagent interactions using AI-driven chat agents. It provides the building blocks to create agents that can perform tasks, coordinate with one another, and leverage external tools. This modular approach means that you can develop complex workflows where different agents contribute specialized functionalities. For example, one agent might handle information retrieval (such as web search), while another focuses on synthesizing the results and generating a coherent answer.

The Multiagent Concept

A multiagent system consists of several autonomous agents that interact to solve problems collaboratively. In our example, the system comprises two agents:

  • Bing Search Agent: Uses a tool function to perform web searches and retrieve relevant information.
  • Report Agent: Analyzes the search results and generates a final, coherent output based on the gathered data.

The agents communicate over a group chat system that follows a round-robin sequence. This design ensures that each agent has the opportunity to contribute to the overall task, resulting in a balanced and comprehensive final output.

Why Multiagent Architectures?

Multiagent architectures offer several advantages:

  • Specialization: Each agent can be designed to excel in a specific task.
  • Scalability: New agents and tools can be added as the system grows in complexity.
  • Resilience: The system can continue functioning even if one component experiences an issue.
  • Enhanced Creativity: Collaborative agent interactions often yield more robust and nuanced outputs.

For more detailed insights on multiagent systems, see my earlier blog post on the Autogen framework.

2. Why Use Azure Functions for Autogen Applications?

Serverless Advantages

Azure Functions is a serverless compute service that lets you run event-driven code without managing infrastructure. This model is particularly appealing for AI applications because:

  • Scalability: Azure Functions automatically scale based on the load. This means your multiagent system can handle fluctuating workloads without manual intervention.
  • Cost Efficiency: You pay only for the compute resources you use. For applications that run intermittently or require bursts of processing, this can lead to significant cost savings.
  • Ease of Deployment: Azure Functions integrate seamlessly with other Azure services and external APIs, making it straightforward to deploy and monitor your application.

The Use Case in Focus

Imagine a scenario where a user sends a query via an HTTP request. The system then orchestrates multiple agents:

  • One agent queries Bing for the latest information on the topic.
  • Another agent synthesizes this information into a final, human-readable report.

This real-time collaboration between agents, powered by Azure Functions, showcases the potential for building responsive, intelligent assistants.

3. Deep Dive into the Code

This section demonstrates how to run the Autogen 0.4 multiagent app on Azure Functions. Let’s break it down step by step.

3.1. Setting Up the Azure Function App

This section shows how to use command-line tools to create a Python function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.

3.2. Importing Autogen Modules

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient

Here, we import key components from the Autogen framework:

  • AssistantAgent: Represents an AI agent capable of performing tasks.
  • RoundRobinGroupChat: Manages the multiagent communication, ensuring that each agent participates in a fair sequence.
  • FunctionTool: A wrapper that enables us to expose Python functions (such as our Bing search) as callable tools within the agent ecosystem.
  • AzureOpenAIChatCompletionClient: Integrates the Azure OpenAI service to provide advanced language capabilities to our agents.
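To make the tool-wrapping idea concrete, here is a minimal, framework-free sketch. The `FunctionToolSketch` class and the `add` function are illustrative stand-ins, not part of Autogen; the point is simply that a tool pairs a plain Python callable with a description the agent can use to decide when to invoke it.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class FunctionToolSketch:
    # Illustrative stand-in for Autogen's FunctionTool: a callable plus the
    # natural-language description an agent reads during tool selection.
    func: Callable[..., Any]
    description: str

    def run(self, *args, **kwargs) -> Any:
        return self.func(*args, **kwargs)


def add(a: int, b: int) -> int:
    return a + b


tool = FunctionToolSketch(add, description="Add two integers")
result = tool.run(2, 3)  # → 5
```

The real FunctionTool does more (it also derives a schema from the function's type hints so the model can fill in arguments), but the callable-plus-description pairing is the core idea.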

3.3. Implementing the Bing Search Function

A core component of our multiagent system is the ability to retrieve information from the web. The following function handles this task:

import time

import requests
from bs4 import BeautifulSoup


def bing_search(query: str, num_results: int = 1, max_chars: int = 500) -> list:
    api_key = "<<your-bing-search-key-here>>"
    endpoint = "https://api.bing.microsoft.com/v7.0/search"
    if not api_key:
        raise ValueError("API key not found in environment variables")
    headers = {"Ocp-Apim-Subscription-Key": api_key}
    params = {"q": query, "count": num_results}
    response = requests.get(endpoint, headers=headers, params=params)
    if response.status_code != 200:
        print(response.json())
        raise Exception(f"Error in API request: {response.status_code}")
    results = response.json().get("webPages", {}).get("value", [])

    def get_page_content(url: str) -> str:
        try:
            response = requests.get(url, timeout=10)
            soup = BeautifulSoup(response.content, "html.parser")
            text = soup.get_text(separator=" ", strip=True)
            words = text.split()
            content = ""
            for word in words:
                if len(content) + len(word) + 1 > max_chars:
                    break
                content += " " + word
            return content.strip()
        except Exception as e:
            print(f"Error fetching {url}: {str(e)}")
            return ""

    enriched_results = []
    for item in results:
        body = get_page_content(item["url"])
        enriched_results.append(
            {"title": item["name"], "link": item["url"], "snippet": item["snippet"], "body": body}
        )
        time.sleep(1)  # Be respectful to the servers
    return enriched_results

This function encapsulates the necessary logic to integrate web search into our multiagent ecosystem, enabling the Bing Search Agent to retrieve contextual data that the Report Agent will later analyze.
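One detail worth noting is how `get_page_content` caps the text it returns: it accumulates whole words until adding the next one would push past `max_chars`, so results never cut a word in half. The same loop, isolated as a standalone helper (the name `truncate_to_words` is mine, for illustration):

```python
def truncate_to_words(text: str, max_chars: int = 500) -> str:
    # Accumulate whole words until adding the next one (plus a separating
    # space) would exceed max_chars; mirrors the loop in get_page_content.
    words = text.split()
    content = ""
    for word in words:
        if len(content) + len(word) + 1 > max_chars:
            break
        content += " " + word
    return content.strip()


snippet = truncate_to_words("alpha beta gamma delta", max_chars=12)  # → "alpha beta"
```

This word-level cap keeps each enriched result small enough to fit comfortably in the agents' context window.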

3.4. Setting Up the Azure OpenAI Chat Client

One of the highlights of this implementation is the use of Azure OpenAI for natural language processing:

az_model_client = AzureOpenAIChatCompletionClient(
    azure_deployment="gpt-4o",
    model="gpt-4o",
    api_version="2024-08-01-preview",
    azure_endpoint="https://<<azure-open-ai-service-name>>.openai.azure.com/",
    api_key="<<your-azure-openai-api-key-here>>"
)

This client becomes the backbone for our agents’ ability to process natural language, generate responses, and interact intelligently with users.

3.5. Creating and Configuring the Agents

Next, we wrap our Bing search function into a tool that our agents can call. Then, we instantiate two agents: one for web searches and another for generating reports.

3.6. Wrapping the Bing Search Function

bing_search_tool = FunctionTool(
    bing_search,
    description="Search bing for information, returns results with a snippet and body content"
)

By wrapping the bing_search function with FunctionTool, we enable the Bing Search Agent to invoke it as needed during a conversation. The description helps the agent understand what the tool does and when to use it.

3.7. Configuring the Bing Search Agent

bing_search_agent = AssistantAgent(
    name="bing_Search_Agent",
    model_client=az_model_client,
    tools=[bing_search_tool],
    description="Search Bing for information, returns top 1 result with a snippet and body content",
    system_message="You are a helpful AI assistant. Solve tasks using your tools."
)

3.8. Configuring the Report Agent

report_agent = AssistantAgent(
    name="Report_Agent",
    model_client=az_model_client,
    description="Generate output only based on search results",
    system_message="You are a research analyst, analyze the search result and provide answer for user query. When you are done with generating the final output, reply with TERMINATE."
)

The Report Agent plays the role of synthesizing information. It processes the output from the Bing Search Agent, produces a coherent answer to the user’s query, and uses a predefined command (“TERMINATE”) to signal that the report is complete.

Together, these two agents form a team that handles the end-to-end process of information retrieval and synthesis.

3.9. Managing the Agent Conversation: Round Robin Group Chat

Multiagent collaboration is orchestrated by the RoundRobinGroupChat:

team = RoundRobinGroupChat([bing_search_agent, report_agent], max_turns=3)

The agents are given turns in a fixed, cyclic order. This ensures that each agent contributes its part to the conversation without any one agent dominating the dialogue. The group chat is designed to handle asynchronous operations. The conversation stream is processed iteratively, collecting messages as they are generated by the agents. This approach ensures a balanced collaboration between the agents, allowing them to focus on their specialized tasks while working toward a common goal.
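The turn-taking itself is easy to picture with a small, framework-free sketch. Nothing below is an Autogen API: `run_round_robin` and the lambda "agents" are illustrative stand-ins showing how a fixed cyclic order with a turn limit distributes work across the team.

```python
from itertools import cycle


def run_round_robin(agents, task, max_turns=3):
    # Each "agent" is a hypothetical (name, handler) pair; a handler takes
    # the running transcript and returns its message string. Agents speak
    # in a fixed cyclic order until max_turns turns have been taken.
    transcript = [("user", task)]
    turns = cycle(agents)
    for _ in range(max_turns):
        name, handler = next(turns)
        transcript.append((name, handler(transcript)))
    return transcript


agents = [
    ("search_agent", lambda t: f"search results for: {t[0][1]}"),
    ("report_agent", lambda t: f"report based on: {t[-1][1]}"),
]
transcript = run_round_robin(agents, "latest AI news", max_turns=3)
```

In the real RoundRobinGroupChat the same cap appears as `max_turns=3`, and a termination condition (such as the Report Agent's “TERMINATE” reply) can end the conversation early.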

3.10. Defining the Azure Function Route

The final piece of the code is the HTTP-triggered Azure Function. This function is responsible for handling incoming requests, invoking the agent team, and returning the final result.

import logging

import azure.functions as func
from autogen_agentchat.messages import TextMessage

app = func.FunctionApp()

@app.route(route="autogen_func")
async def autogen_func(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')
    if name:
        team = RoundRobinGroupChat([bing_search_agent, report_agent], max_turns=3)
        result = ""
        async for message in team.run_stream(task=name):
            if isinstance(message, TextMessage) and message.source == "Report_Agent":
                result += str(message.content)
        return func.HttpResponse(f"Result: {result}")
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )

4. Deployment on Azure Functions

Deploy your function to Azure using either the Azure CLI or directly from the Azure portal. The Azure CLI command might look like:

func azure functionapp publish <your-function-app-name>

5. Challenges and Considerations

While the integration of Autogen with Azure Functions is powerful, there are challenges to consider:

Asynchronous Operations

  • Handling Concurrency:
    Asynchronous code, particularly with multiagent interactions, can become complex. Ensuring that agents communicate seamlessly without deadlocks or race conditions requires careful design and testing.
  • Streaming Responses:
    The use of asynchronous streams (as shown in the agent conversation) necessitates proper error handling and timeout management to prevent hung processes.
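As a hedged sketch of the timeout point, the pattern below wraps the stream-consuming coroutine in asyncio.wait_for so a stalled agent stream raises TimeoutError instead of hanging the function invocation. The `fake_agent_stream` generator is a stand-in for `team.run_stream()`, not the real Autogen API.

```python
import asyncio


async def fake_agent_stream():
    # Hypothetical stand-in for an agent message stream like team.run_stream().
    for i in range(3):
        await asyncio.sleep(0.01)
        yield f"message-{i}"


async def collect_with_timeout(stream, timeout: float = 2.0) -> list:
    # Consume the whole stream, but abandon it (raising TimeoutError)
    # if it does not complete within `timeout` seconds.
    results = []

    async def consume():
        async for msg in stream:
            results.append(msg)

    await asyncio.wait_for(consume(), timeout=timeout)
    return results


messages = asyncio.run(collect_with_timeout(fake_agent_stream()))
```

Inside an HTTP-triggered function, catching the TimeoutError and returning a 504-style response is friendlier than letting the host kill the invocation.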

6. Conclusion

The combination of Autogen 0.4’s multiagent framework with the flexibility of Azure Functions offers a powerful solution for building scalable, intelligent applications. By breaking down complex tasks into specialized agents β€” such as our Bing Search and Report Agents β€” you can build systems that not only retrieve and analyze data in real time but also present it in a coherent, user-friendly format.

In this guide, we covered everything from the underlying architecture and detailed code walkthrough to deployment. Whether you are building an intelligent search assistant, a research tool, or a dynamic content generator, this integration provides a robust foundation on which to innovate.

By leveraging serverless architectures, you gain the benefits of automatic scaling, cost efficiency, and ease of integration with other Azure services. As the AI landscape continues to evolve, these principles will remain critical in developing applications that are both powerful and resilient.

I hope this comprehensive guide helps you understand the process and inspires you to explore further possibilities with multiagent systems on Azure Functions.

Happy coding, and here’s to building smarter, more scalable AI solutions!

Thank You!

Thanks for taking the time to read my story! If you enjoyed it and found it valuable, please consider giving it a clap (or 50!) to show your support. Your claps help others discover this content and motivate me to keep creating more.

Also, don’t forget to follow me for more insights and updates on AI. Your support means a lot and helps me continue sharing valuable content with you. Thank you!


Published via Towards AI
