
Building Intelligent Agents from Scratch: A Journey into LLM-Powered Autonomy

Last Updated on August 7, 2024 by Editorial Team

Author(s): Anoop Maurya

Originally published on Towards AI.

Photo by Rock'n Roll Monkey on Unsplash

In recent years, the advent of large language models (LLMs) has revolutionized the field of artificial intelligence, making it possible for machines to understand and generate human-like text with unprecedented accuracy. These advancements have paved the way for the creation of autonomous agents powered by LLMs, capable of performing complex tasks through natural language understanding and interaction. This article delves into the process of building such intelligent agents from scratch, without relying on high-level frameworks, to unlock their full potential.

Why LLM Agents?

The demand for LLM agents arises from the limitations of traditional rule-based systems and the increasing complexity of tasks in modern applications. While traditional systems can handle specific and well-defined tasks, they often fall short in dealing with the nuances and variability of natural language. LLM agents, leveraging the vast knowledge and contextual understanding embedded within large language models, offer more flexible and intelligent solutions.

The Need for LLM Agents

  1. Enhanced User Interaction: LLM agents can engage in natural, conversational interactions, making them ideal for customer service, virtual assistants, and educational tools.
  2. Complex Problem Solving: These agents can handle diverse queries and tasks by drawing on their extensive training data, making them well suited for research, data analysis, and decision-support systems.
  3. Automation and Efficiency: LLM agents can automate routine tasks such as scheduling, email management, and information retrieval, significantly enhancing productivity.
  4. Scalability: LLM agents can be deployed across various platforms and industries without extensive reprogramming, offering scalable solutions for businesses.
  5. Continuous Learning and Adaptation: By fine-tuning with domain-specific data, LLM agents can adapt to new information and changing requirements, ensuring their continued relevance and effectiveness.

Setting Up the Environment

To embark on the journey of building an LLM agent, start by setting up your environment. Ensure you have Python installed on your system and install the necessary libraries:

pip install python-dotenv groq requests

Create a .env file in your project directory to securely store your API key:

GROQ_API_KEY=your_api_key_here
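
If you want to confirm the key is actually being picked up before wiring everything together, a quick sanity check like the following can help (this snippet is just an illustration, not part of the original setup):

from dotenv import load_dotenv
import os

# Load .env and confirm the key is visible to the process
load_dotenv()
assert os.getenv("GROQ_API_KEY"), "GROQ_API_KEY not found - check your .env file"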

The Agent Class with Tool Calling Capabilities

We will define an Agent class to interact with the language model and integrate tool-calling capabilities.

Import Libraries and Load Environment Variables:

from dotenv import load_dotenv
import os
from groq import Groq
import requests

# Load environment variables from .env file
load_dotenv()

Define the Tool Class:

class Tool:
    def __init__(self, name, function):
        self.name = name
        self.function = function

    def execute(self, *args, **kwargs):
        return self.function(*args, **kwargs)
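
As a quick illustration (this example is mine, not part of the original listing), a Tool simply wraps any callable and forwards the arguments it receives:

# Hypothetical example: wrap a trivial function and invoke it through the Tool interface
echo_tool = Tool("echo", lambda text: text.upper())
print(echo_tool.execute("hello"))  # prints "HELLO"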

Define the Agent Class:

class Agent:
    def __init__(self, client: Groq, system: str = "") -> None:
        self.client = client
        self.system = system
        self.messages: list = []
        self.tools = {}
        if self.system:
            self.messages.append({"role": "system", "content": system})

    def add_tool(self, tool: Tool):
        self.tools[tool.name] = tool

    def __call__(self, message=""):
        if message:
            self.messages.append({"role": "user", "content": message})
        response = self.execute()
        if response.startswith("CALL_TOOL"):
            # Expected format: CALL_TOOL <tool_name> <arg1> <arg2> ...
            parts = response.split()
            tool_name = parts[1]
            params = parts[2:]
            result = self.tools[tool_name].execute(*params)
            # Store the tool output as a string so it can be sent back to the API later
            self.messages.append({"role": "tool", "content": str(result)})
            return result
        else:
            self.messages.append({"role": "assistant", "content": response})
            return response

    def execute(self):
        completion = self.client.chat.completions.create(
            model="llama3-70b-8192", messages=self.messages
        )
        return completion.choices[0].message.content
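
Note that the agent only branches on replies that start with CALL_TOOL, so in practice the system prompt has to teach the model that convention. A minimal sketch of such a prompt (my own wording, not taken from the original code) might look like this:

# Assumed system prompt teaching the model the CALL_TOOL convention used by Agent.__call__
TOOL_SYSTEM_PROMPT = (
    "You are a helpful assistant. You can use the following tools: "
    "calculator(a, b, operation) and web_search(query). "
    "When a tool is needed, reply with exactly: "
    "CALL_TOOL <tool_name> <arguments separated by spaces>. "
    "Otherwise, answer normally."
)

# agent = Agent(client, system=TOOL_SYSTEM_PROMPT)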

Tools

Calculator Tool:

def calculator(a, b, operation):
    a = float(a)
    b = float(b)
    if operation == "add":
        return str(a + b)
    elif operation == "subtract":
        return str(a - b)
    elif operation == "multiply":
        return str(a * b)
    elif operation == "divide":
        return str(a / b)
    else:
        return "Invalid operation"

calc_tool = Tool("calculator", calculator)
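
Before handing it to the agent, the tool can be exercised directly; the arguments arrive as strings, just as they would when the agent splits a CALL_TOOL reply:

# Direct call through the Tool wrapper
print(calc_tool.execute("5", "7", "add"))       # "12.0"
print(calc_tool.execute("10", "4", "divide"))   # "2.5"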

Web Search Tool:

def web_search(*query_parts):
    # The agent splits the CALL_TOOL string on spaces, so accept the pieces and rejoin them
    query = " ".join(query_parts)
    response = requests.get(
        "https://api.duckduckgo.com/",
        params={"q": query, "format": "json", "pretty": 1},
    )
    if response.status_code == 200:
        # The Instant Answer API uses capitalized keys; most content sits under "RelatedTopics"
        return str(response.json().get("RelatedTopics", []))
    return "Failed to fetch results"

search_tool = Tool("web_search", web_search)
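
The search tool can also be tried on its own. Exactly which fields come back depends on the query, since the DuckDuckGo Instant Answer API is fairly sparse for many searches:

# Direct call - the query words are passed as separate arguments,
# mirroring how the agent splits the CALL_TOOL string
print(search_tool.execute("python", "programming", "language"))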

Using the Agent with Tools:

# Groq() can read GROQ_API_KEY from the environment, but passing it explicitly is clearer
client = Groq(api_key=os.getenv("GROQ_API_KEY"))
agent = Agent(client, system="You are a helpful assistant.")

# Add tools to the agent
agent.add_tool(calc_tool)
agent.add_tool(search_tool)

# Call the web search tool
response = agent("CALL_TOOL web_search what is weather today in new york")
print(response)

Output: (screenshot of the web search response, image by author)
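
For completeness, the calculator can be driven through the same CALL_TOOL convention. Assuming the model echoes the instruction back verbatim (or the system prompt enforces the format), the flow looks like this:

# Ask the agent to add two numbers via the calculator tool
response = agent("CALL_TOOL calculator 12 30 add")
print(response)  # expected: "42.0" if the model replies with the CALL_TOOL line verbatim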

Conclusion

Building an AI agent from scratch without frameworks offers a deeper understanding of the underlying processes and greater control over the implementation. This guide demonstrated how to create a simple conversational agent, integrate tool-calling capabilities, and interact with various tools using basic libraries and the Groq language model API. By expanding on this foundation, you can develop more sophisticated agents tailored to specific tasks and domains, unleashing the transformative potential of LLM-powered autonomy.

Additional Resource:

Code: https://github.com/imanoop7/Agents-from-Scratch

Feel free to explore these resources, and happy learning!
If you have any more questions, feel free to ask. 😊

If you liked this article and you want to support me:

  1. Clap my article 10 times; that will really help me out.👏
  2. Follow me on Medium and subscribe for free to get my latest articles 🫶


Published via Towards AI
