


Getting to Know AutoGen (Part 2): How AI Agents Work Together

Last Updated on September 30, 2024 by Editorial Team

Author(s): Anushka Sonawane

Originally published on Towards AI.


In Part 1, we went over the basics: what AI agents are, how they work, and why having multiple agents can really make a difference. That was just an introduction, setting the stage for what’s next. Now, it’s time to take things up a level!

AI Agents, Assemble (Part 1)! The Future of Problem-Solving with AutoGen

Getting to Know AI Agents: How They Work, Why They’re Useful, and What They Can Do for You

pub.towardsai.net

In Part 2, let’s go deeper into AutoGen and how it helps these agents communicate with each other to get things done.

With AutoGen, the agents don’t just work alone. They can actually talk to each other to share information and solve problems together. This makes them much more powerful!

AutoGen’s agents come with two key features:

📍Conversable Agents: agents that can talk to each other. They can share information, ask for help, or update one another, making teamwork easier and faster.

📍Customizable Agents: agents you can tailor to a role. Some can write, others can code, and you can even bring a human into the loop when needed.

Prerequisites

Before diving into the example, let’s make sure you have the following prerequisites covered:

1. AutoGen Setup: Ensure that you have AutoGen installed and ready to use in your environment.

pip install pyautogen

2. API Access: You’ll need API access to Large Language Models (LLMs), like OpenAI’s GPT or Gemini.

Here’s how you can configure OpenAI’s GPT-4 for your agents:
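The exact values depend on your account, but a minimal sketch, assuming your key is stored in the OPENAI_API_KEY environment variable, looks something like this:

import os

# Minimal GPT-4 configuration for AutoGen agents.
# Assumes the API key is stored in the OPENAI_API_KEY environment variable.
llm_config = {
    "config_list": [
        {
            "model": "gpt-4",
            "api_key": os.environ.get("OPENAI_API_KEY"),
        }
    ],
    "temperature": 0.7,
}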

Here’s how you can configure Gemini for your agents:
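Again, treat this as a rough sketch: it assumes the Gemini extra is installed (pip install "pyautogen[gemini]") and that your key lives in the GOOGLE_API_KEY environment variable:

import os

# Minimal Gemini configuration for AutoGen agents.
# api_type "google" tells AutoGen to route requests to Gemini.
gemini_llm_config = {
    "config_list": [
        {
            "model": "gemini-pro",
            "api_key": os.environ.get("GOOGLE_API_KEY"),
            "api_type": "google",
        }
    ],
}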

Now that you’ve set up the LLM configurations, all that’s left is to add this configuration to your AutoGen agents. It’s simple: just pass the llm_config we defined earlier when creating the agents.

import os

from autogen import ConversableAgent

# Guide Gary: a travel-guide persona with a higher temperature for more creative replies.
guide_gary = ConversableAgent(
    "guide_Gary",
    system_message="Hello, I'm Guide Gary! I specialize in travel tips, destination recommendations, and hidden gems around the world.",
    llm_config={"config_list": [{"model": "gpt-3.5-turbo", "temperature": 0.9, "api_key": os.environ.get("OPENAI_API_KEY")}]},
    human_input_mode="NEVER",  # never pause for human input
)

# Tourist Tina: the traveller persona who starts the conversation.
tourist_tina = ConversableAgent(
    "tourist_Tina",
    system_message="Hi there, I'm Tourist Tina! I'm always on the lookout for exciting travel destinations and unique experiences.",
    llm_config={"config_list": [{"model": "gpt-3.5-turbo", "temperature": 0.7, "api_key": os.environ.get("OPENAI_API_KEY")}]},
    human_input_mode="NEVER",
)

# Tina opens the chat with Gary; the conversation runs for at most three turns.
result = tourist_tina.initiate_chat(
    guide_gary,
    message="Guide Gary, I'm planning a trip to Norway. Any must-see destinations?",
    max_turns=3,
)

Here’s what the output looks like:

If you’re excited to see this in more detail, I’ve put together a GitHub notebook that breaks it all down. Inside, you’ll find:

  • A list of LLMs (Large Language Models)
  • A code executor
  • A function and tool executor
  • A component to keep humans in the loop

AutoGen-Agent/BasicsOfAutoGen.ipynb at main Β· anusonawane/AutoGen-Agent

Contribute to anusonawane/AutoGen-Agent development by creating an account on GitHub.

github.com

Large Language Models (LLMs):

  • The agent can use different language models to chat in natural language. This means it can understand and respond to your questions or requests in a friendly way, whether you use simple phrases or more complex sentences.
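For instance, you can list more than one model in the config_list. As a rough sketch (reusing the keys from the configuration section above), AutoGen works through the entries in order, so later entries act as fallbacks:

import os

# Sketch of a config_list with more than one model; entries are tried in order,
# so gpt-3.5-turbo acts as a fallback if the gpt-4 request fails.
config_list = [
    {"model": "gpt-4", "api_key": os.environ.get("OPENAI_API_KEY")},
    {"model": "gpt-3.5-turbo", "api_key": os.environ.get("OPENAI_API_KEY")},
]

llm_config = {"config_list": config_list}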

Code Executor:

  • It can run code when necessary. This is great for tasks that need calculations or automating certain processes, making it a handy helper for technical tasks.
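As a small sketch of what this looks like in pyautogen (the agent name here is illustrative), you can attach a local command-line executor to an agent so it runs the code blocks it receives instead of calling an LLM:

from autogen import ConversableAgent
from autogen.coding import LocalCommandLineCodeExecutor

# Executor that runs received code blocks locally in the "coding" directory.
executor = LocalCommandLineCodeExecutor(work_dir="coding")

code_runner = ConversableAgent(
    "code_runner",
    llm_config=False,  # this agent only executes code, it never calls an LLM
    code_execution_config={"executor": executor},
    human_input_mode="NEVER",
)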

Function and Tool Executor:

  • The agent can use pre-set functions and tools to perform specific actions, like finding information, doing calculations, or calling up other online services. This makes it really efficient at handling various requests.
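A rough sketch of how this is wired up in pyautogen: you register a plain Python function with one agent that may propose calling it and another that actually executes it. The to_euros tool and agent names below are made up for illustration, and llm_config is the configuration defined earlier:

from typing import Annotated

from autogen import ConversableAgent, register_function

# Hypothetical tool: a trivial currency converter the agents can call.
def to_euros(amount: Annotated[float, "Amount in US dollars"]) -> str:
    return f"{amount * 0.9:.2f} EUR"  # fixed rate, for illustration only

assistant = ConversableAgent("assistant", llm_config=llm_config)  # proposes tool calls
tool_executor = ConversableAgent("tool_executor", llm_config=False, human_input_mode="NEVER")  # runs them

register_function(
    to_euros,
    caller=assistant,        # the LLM agent may suggest calling to_euros
    executor=tool_executor,  # this agent actually runs it
    description="Convert an amount in US dollars to euros.",
)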

Human-in-the-Loop:

  • You can set it up to involve people in the conversation. This means the agent can ask for your input or feedback, ensuring that it gets things right and works well with you.
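For example (again, just a sketch), setting human_input_mode to "ALWAYS" turns an agent into a human proxy that pauses and asks you for a reply at every turn:

from autogen import ConversableAgent

# Human proxy: replies come from you, not from an LLM.
human_proxy = ConversableAgent(
    "human_proxy",
    llm_config=False,           # no model attached
    human_input_mode="ALWAYS",  # pause for human input on every turn
)

# e.g. human_proxy.initiate_chat(guide_gary, message="Suggest a weekend trip.")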

📍AutoGen makes it easy for AI agents to work together, and that’s pretty exciting! These Conversable Agents can chat with each other, sharing information to get tasks done faster.

📍The AssistantAgent helps by creating and improving Python code based on what you need, so you don’t have to start from scratch. On the other hand, the UserProxyAgent keeps you in the loop. It asks for your input and can run code automatically when necessary.

📍Thanks to the auto-reply feature, these agents can chat with each other and handle tasks on their own while still keeping you in the loop. Plus, you can customize them to fit your specific needs, whether it’s for travel advice or coding help.
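To make the AssistantAgent and UserProxyAgent pairing above concrete, here is a rough sketch of the usual pattern; llm_config is the configuration defined earlier, and the working directory and message are placeholders:

import autogen

# The assistant writes and refines Python code; the user proxy runs it and reports back.
assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)

user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="TERMINATE",  # only asks for your input before ending the chat
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

user_proxy.initiate_chat(
    assistant,
    message="Write a Python function that reverses a string and show a quick test.",
)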

The image below shows how these agents interact and work together.


Well, that’s the end of Part 2! I hope this gave you a clearer picture of how AutoGen works and how these agents can collaborate to make life easier.

If you’d like to follow along with more insights or discuss any of these topics further, feel free to connect with me:

Looking forward to chatting and sharing more ideas!

Wait, There’s More!
If you enjoyed this, you’ll love my other blogs! 🎯

Unlocking the MLOps Secrets: Expertly Navigating Deployment, Maintenance, and Scaling

Hey, tech explorers!

medium.com

Enhancing RAG Efficiency through LlamaIndex Techniques

LLAMA INDEX AND RAG BASICS WITH DETAILED EXPLANATION

medium.com

Protect Your Python Projects: Avoid Direct setup.py Invocation for Ultimate Code Safeguarding!

It’s time to say goodbye to setup.py complexities and embrace efficient Python packaging with build frontends.

pub.towardsai.net

Until next time,
Anushka!


Published via Towards AI
