
Building a Simple AI Agent With OpenAI Tools

Last Updated on June 4, 2024 by Editorial Team

Author(s): Varad Khonde

Originally published on Towards AI.

Photo by Andrew Neel on Unsplash

OpenAI has recently added tool calling functionality that connects its language models to external tools. As described in the function calling guide (https://platform.openai.com/docs/guides/function-calling):

In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call one or many functions. The Chat Completions API does not call the function; instead, the model generates JSON that you can use to call the function in your code.

In this article, we will explore how to create a simple AI agent that can access different tools using OpenAI tool calling functionality.

Part 0: Defining a function to generate JSON schema

To use tool calling, the tool descriptions must be supplied in the JSON format described in the guide above. To produce this format, let us first define a function generate_tool_description as follows.

import json
from pydantic import BaseModel


def generate_tool_description(func, input_model: BaseModel):
    """Generate a tool description JSON for a given function."""
    func_doc = func.__doc__ if func.__doc__ else f"Function to {func.__name__}"

    desc = input_model.model_json_schema()["properties"]
    properties = {}
    for param, d in desc.items():
        properties[param] = {
            "type": d["type"],
            "description": d["description"],
        }

    tool_description = {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": func_doc,
            "parameters": {
                "type": "object",
                "properties": properties,
            },
        },
    }

    return tool_description

This function is saved in the utils.py module. It is just a helper that builds the required tool-description format from a function and its pydantic input model.
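If you prefer not to depend on pydantic, the same schema shape can be sketched directly from a function's type annotations. This is only an illustrative alternative; schema_from_annotations and add below are hypothetical names, not part of the article's code:

```python
import inspect

def schema_from_annotations(func):
    """Pydantic-free sketch: build an OpenAI-style tool schema from a
    function's type annotations. Illustrative only; the article's
    generate_tool_description uses pydantic instead."""
    type_map = {int: "integer", str: "string", float: "number", bool: "boolean"}
    sig = inspect.signature(func)
    properties = {
        # Use the parameter name as a stand-in description.
        name: {"type": type_map.get(p.annotation, "string"), "description": name}
        for name, p in sig.parameters.items()
    }
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": func.__doc__ or f"Function to {func.__name__}",
            "parameters": {"type": "object", "properties": properties},
        },
    }

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

schema = schema_from_annotations(add)
```

The pydantic version in utils.py is more robust because Field descriptions carry real documentation for each parameter, which helps the model choose arguments correctly.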

Make sure you save your OpenAI API key in a .env file as follows:

OPENAI_API_KEY = "<your_api_key>"

Part 1: Import the libraries

import os
import json
from pydantic import BaseModel, Field
from dotenv import load_dotenv
from openai import OpenAI
from utils import generate_tool_description

Part 2 : Define parameters

load_dotenv()
assert os.environ.get("OPENAI_API_KEY")

MODEL = "gpt-3.5-turbo"

SYSTEM = "You are an AI assistant."

MAX_ITERATIONS = 10

Here, the MAX_ITERATIONS variable limits the number of API requests: after 10 calls, no further call is made. This helps avoid a large number of requests in case of a bug or error.

Part 3: Defining tools

Now, we will define two tools: one for multiplying two numbers and the other for getting the weather information of a location.

For each tool, we will also have to define a class which inherits BaseModel class to provide the information about the function parameters.

Multiply tool:


class InputMultiply(BaseModel):
    a: int = Field(description="first value")
    b: int = Field(description="second value")


def multiply(a: int, b: int) -> str:
    """Multiply two integers and returns the result integer"""
    return json.dumps({"output": a * b})


def multiply_executor(args):
    return multiply(args["a"], args["b"])


multiply_tool = generate_tool_description(multiply, InputMultiply)

Here, InputMultiply contains info about the function parameters.

The variable multiply_tool will look like this:

{
    "type": "function",
    "function": {
        "name": "multiply",
        "description": "Multiply two integers and returns the result integer",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {
                    "type": "integer",
                    "description": "first value"
                },
                "b": {
                    "type": "integer",
                    "description": "second value"
                }
            }
        }
    }
}

Similarly, we will define a tool for weather information. Note that it is just a dummy function: it does not connect to any real API to fetch live data and exists only to illustrate the mechanism. (Feel free to customize it to return real-time data.)


class InputGetweather(BaseModel):
    location: str = Field(description="The location to check the weather.")


def get_current_weather(location: str) -> str:
    """Get the weather at the given location."""
    return "{ 'temperature': '33 degree', 'type': 'sunny' }"


def get_current_weather_executor(args):
    return get_current_weather(args["location"])


get_weather_tool = generate_tool_description(func=get_current_weather, input_model=InputGetweather)

The GPT models return a tool name whenever a tool is to be called. Hence, we create a dictionary that maps each tool name to the function that must be executed for that tool.


available_functions = {
    "multiply": multiply_executor,
    "get_current_weather": get_current_weather_executor,
}

tools = [multiply_tool, get_weather_tool]

Thus, if the model indicates that the 'multiply' tool is to be called, we look up multiply_executor and invoke it.
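That dispatch step can be sketched in isolation. The snippet below is a self-contained toy that repeats a simplified multiply rather than reusing the article's full setup:

```python
import json

def multiply(a, b):
    """Toy multiply tool (mirrors the article's version)."""
    return json.dumps({"output": a * b})

available_functions = {
    "multiply": lambda args: multiply(args["a"], args["b"]),
}

# The model returns the tool name plus a JSON string of arguments;
# we parse the arguments and dispatch to the mapped executor.
tool_name = "multiply"
raw_args = '{"a": 3, "b": 4}'
result = available_functions[tool_name](json.loads(raw_args))
```

Because the arguments arrive as a JSON string, the json.loads step is what turns the model's output into keyword-ready values for the executor.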

Part 4: Function to call the API

Here, we create a simple function that will call the API and fetch the responses.

def get_response_from_openai(tools, messages, client=OpenAI()):
    response = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )
    return response

Part 5: Chat interface

Now create a chat function that starts the conversation for the input query. The function will call the API, parse the response, call tools if necessary, and stop when done.


def chat(user):
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": user},
    ]
    for _ in range(MAX_ITERATIONS):
        response = get_response_from_openai(tools=tools, messages=messages)

        if response.choices[0].finish_reason == "stop":
            print(f"\nFinal answer: {response.choices[0].message.content}")
            break

        if response.choices[0].finish_reason == "tool_calls":
            tool_calls = response.choices[0].message.tool_calls
            messages.append(response.choices[0].message)
            print(f"Tool call:\n{response.choices[0].message}")

            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_to_call = available_functions[function_name]
                function_args = json.loads(tool_call.function.arguments)
                function_response = function_to_call(function_args)

                print(f"Function response: {function_response}")

                messages.append(
                    {
                        "tool_call_id": tool_call.id,
                        "role": "tool",
                        "name": function_name,
                        "content": function_response,
                    }
                )
        else:
            print(f"{response.choices[0].finish_reason}")
  • The response.choices[0].finish_reason field tells us whether to call a tool or finish the conversation. If a tool is to be called, its value is "tool_calls"; otherwise, it is set to "stop".
  • If a tool is to be used, the response contains all the information needed to call it: the function name and the function arguments.
  • We extract the function name and arguments and retrieve the corresponding function to execute from the available_functions dictionary.
  • We append the tool-call message so that messages keeps a log of the model's request to call a tool.
  • Then, at last, the function is called, and its response is passed back to the model along with all the previous messages (the history).
  • The model then interprets the tool response and gives the final answer. That response has "stop" as its finish_reason, which ends the loop.
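To see this control flow without making real API calls, here is a self-contained mock of the loop. FakeToolCall, FakeChoice, and fake_get_response are illustrative stand-ins for the OpenAI response objects, not real library classes:

```python
import json

class FakeToolCall:
    """Stand-in for one entry of message.tool_calls."""
    def __init__(self):
        self.id = "call_1"
        self.function = type("F", (), {"name": "multiply",
                                       "arguments": '{"a": 6, "b": 7}'})()

class FakeChoice:
    """Stand-in for response.choices[0]."""
    def __init__(self, finish_reason, content=None, tool_calls=None):
        self.finish_reason = finish_reason
        self.message = type("M", (), {"content": content,
                                      "tool_calls": tool_calls})()

# First "API call" requests a tool; the second returns a final answer.
responses = [
    [FakeChoice("tool_calls", tool_calls=[FakeToolCall()])],
    [FakeChoice("stop", content="6 x 7 = 42")],
]

def fake_get_response(messages):
    return responses.pop(0)

available_functions = {
    "multiply": lambda args: json.dumps({"output": args["a"] * args["b"]}),
}

messages = [{"role": "user", "content": "What is 6 times 7?"}]
final = None
for _ in range(10):
    choice = fake_get_response(messages)[0]
    if choice.finish_reason == "stop":
        final = choice.message.content
        break
    if choice.finish_reason == "tool_calls":
        for tc in choice.message.tool_calls:
            args = json.loads(tc.function.arguments)
            out = available_functions[tc.function.name](args)
            # Feed the tool result back as a "tool" role message.
            messages.append({"tool_call_id": tc.id, "role": "tool",
                             "name": tc.function.name, "content": out})
```

Swapping fake_get_response for the real get_response_from_openai gives the chat function above; the loop structure is identical.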

Part 6: User interaction

def main():
    print("\n\nTo stop, enter 'stop'. ")
    while True:
        q = input("\nEnter your query: ")
        if q == "stop":
            break
        else:
            chat(q)


if __name__ == "__main__":
    main()

Example

Using the 'get_current_weather' tool
Using the 'multiply' tool
Using both tools

That's it. We have created an AI agent with access to user-defined tools. You can add your own custom functions using the format shown above and experiment.
Hope you liked the article.

Thank you!
