Getting to Know AutoGen (Part 2): How AI Agents Work Together
Last Updated on September 30, 2024 by Editorial Team
Author(s): Anushka Sonawane
Originally published on Towards AI.
In Part 1, we went over the basics: what AI agents are, how they work, and why having multiple agents can really make a difference. That was just an introduction, setting the stage for what's next. Now, it's time to take things up a level!
AI Agents, Assemble (Part 1)! The Future of Problem-Solving with AutoGen
Getting to Know AI Agents: How They Work, Why They're Useful, and What They Can Do for You
pub.towardsai.net
In Part 2, let's go deeper into AutoGen and how it helps these agents communicate with each other to get things done.
With AutoGen, the agents don't just work alone. They can actually talk to each other to share information and solve problems together. This makes them much more powerful!
AutoGen's agents come with two key features:
📍 Conversable Agents: agents that talk to each other. They can share information, ask for help, or update each other, making teamwork easier and faster.
📍 Customizable Agents: agents you can tailor to a role. Some can write, others can code, and you can even bring a human into the loop when needed.
Prerequisites
Before diving into the example, let's make sure you have the following prerequisites covered:
1. AutoGen Setup: Ensure that you have AutoGen installed and ready to use in your environment.
pip install pyautogen
2. API Access: You'll need API access to a Large Language Model (LLM), such as OpenAI's GPT or Google's Gemini.
Here's how you can configure OpenAI's GPT-4 for your agents:
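As a minimal sketch (the exact model name, temperature, and the `OPENAI_API_KEY` environment variable are assumptions, not requirements), an OpenAI `llm_config` for pyautogen typically looks like this:

```python
import os

# llm_config tells AutoGen which model to call and how to authenticate.
# Reading the key from an environment variable avoids hard-coding secrets.
gpt4_llm_config = {
    "config_list": [
        {
            "model": "gpt-4",
            "api_key": os.environ.get("OPENAI_API_KEY"),
        }
    ],
    "temperature": 0.7,  # higher values make replies more creative
}
```

You can list several entries in `config_list`; AutoGen will fall back to the next one if a model is unavailable.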
Here's how you can configure Gemini for your agents:
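A sketch of the Gemini version, under the same caveats (model name and `GOOGLE_API_KEY` variable are assumptions). The main difference is the `api_type` field, which tells pyautogen to use the Google client:

```python
import os

# "api_type": "google" routes this entry to Gemini instead of OpenAI.
gemini_llm_config = {
    "config_list": [
        {
            "model": "gemini-pro",
            "api_key": os.environ.get("GOOGLE_API_KEY"),
            "api_type": "google",
        }
    ],
}
```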
Now that you've set up the LLM configurations, all that's left is to add this configuration to your AutoGen agents. It's simple: just pass the llm_config we defined earlier when creating the agents.
import os

from autogen import ConversableAgent

guide_gary = ConversableAgent(
    "guide_Gary",
    system_message="Hello, I'm Guide Gary! I specialize in travel tips, destination recommendations, and hidden gems around the world.",
    llm_config={
        "config_list": [{"model": "gpt-3.5-turbo", "api_key": os.environ.get("OPENAI_API_KEY")}],
        "temperature": 0.9,  # a more adventurous guide
    },
    human_input_mode="NEVER",  # fully automated; never pauses for a person
)

tourist_tina = ConversableAgent(
    "tourist_Tina",
    system_message="Hi there, I'm Tourist Tina! I'm always on the lookout for exciting travel destinations and unique experiences.",
    llm_config={
        "config_list": [{"model": "gpt-3.5-turbo", "api_key": os.environ.get("OPENAI_API_KEY")}],
        "temperature": 0.7,
    },
    human_input_mode="NEVER",
)

result = tourist_tina.initiate_chat(
    guide_gary,
    message="Guide Gary, I'm planning a trip to Norway. Any must-see destinations?",
    max_turns=3,  # stop the back-and-forth after three turns
)
Hereβs what the output looks like:
If you're excited to see this in detail, I've put together a GitHub notebook that breaks it all down. Inside, you'll find:
- A list of LLMs (Large Language Models)
- A code executor
- A function and tool executor
- A component to keep humans in the loop
AutoGen-Agent/BasicsOfAutoGen.ipynb at main · anusonawane/AutoGen-Agent
Contribute to anusonawane/AutoGen-Agent development by creating an account on GitHub.
github.com
Language Models (LLMs):
- The agent can use different language models to chat in natural language. This means it can understand and respond to your questions or requests in a friendly way, whether you use simple phrases or more complex sentences.
Code Executor:
- It can run code when necessary. This is great for tasks that need calculations or automating certain processes, making it a handy helper for technical tasks.
Function and Tool Executor:
- The agent can use pre-set functions and tools to perform specific actions, like finding information, doing calculations, or calling up other online services. This makes it really efficient at handling various requests.
Human-in-the-Loop:
- You can set it up to involve people in the conversation. This means the agent can ask for your input or feedback, ensuring that it gets things right and works well with you.
📍 AutoGen makes it easy for AI agents to work together, and that's pretty exciting! These Conversable Agents can chat with each other, sharing information to get tasks done faster.
📍 The AssistantAgent helps by creating and improving Python code based on what you need, so you don't have to start from scratch. On the other hand, the UserProxyAgent keeps you in the loop. It asks for your input and can run code automatically when necessary.
📍 Thanks to the auto-reply feature, these agents can chat with each other and handle tasks on their own while still keeping you in the loop. Plus, you can customize them to fit your specific needs, whether it's for travel advice or coding help.
The image below shows how these agents interact and work together.
Well, that's the end of Part 2! I hope this gave you a clearer picture of how AutoGen works and how these agents can collaborate to make life easier.
If you'd like to follow along with more insights or discuss any of these topics further, feel free to connect with me:
Looking forward to chatting and sharing more ideas!
Wait, There's More!
If you enjoyed this, you'll love my other blogs! 🎯
Unlocking the MLOps Secrets: Expertly Navigating Deployment, Maintenance, and Scaling
Hey, tech explorers!
medium.com
Enhancing RAG Efficiency through LlamaIndex Techniques
LLAMA INDEX AND RAG BASICS WITH DETAILED EXPLANATION
medium.com
Protect Your Python Projects: Avoid Direct setup.py Invocation for Ultimate Code Safeguarding!
It's time to say goodbye to setup.py complexities and embrace efficient Python packaging with build frontends.
pub.towardsai.net
Until next time,
Anushka!
Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.
Published via Towards AI