The LLM Series #3: Multiple Function Calling: Taking OpenAI Models to the Next Level

Author(s): Muhammad Saad Uddin

Originally published on Towards AI.

Image by Author via Dall-E

Welcome to the third article of this LLM Series, where we're stoking the flames of innovation with OpenAI models 🔥🔥. In this edition, we'll explore multiple function calling, a technique designed to supercharge the efficiency and capabilities of large language models. We're going to dive deep into how this advanced technique takes OpenAI models to the next level, bolstering their ability to perform tasks at unparalleled speed. It's about pushing boundaries and making LLMs not only more proficient but also blazingly fast. So buckle up and remember: while AI might be the new fire, there will be no smoke here, just sparking insights set to ignite your understanding 💡.

Just like in my preceding articles, we’re going to use Azure OpenAI once again. You can find a detailed explanation for my choice of using Azure OpenAI in the previous article of this series here. We will start by updating endpoint details.

import openai
import pandas as pd
import json

openai.api_type = "azure"
openai.api_version = "<api-version>"  # your Azure OpenAI API version
openai.api_base = "<endpoint>"        # your Azure OpenAI resource's endpoint value
openai.api_key = "<api-key>"          # your Azure OpenAI API key

We will start by defining two fundamental functions, read_sales_data and read_employee_data, which read simple CSV files and serve as the foundation for the main functions we'll utilize in subsequent function calls.

def read_sales_data():
    df = pd.read_csv('sample_data_sales.csv', sep=';')
    return df

def read_employee_data():
    df1 = pd.read_csv('sample_data_employee.csv', sep=';')
    return df1
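For reference, here is the shape of data these functions expect. The column names and sample rows below are taken from the function outputs shown later in this article, so treat them as illustrative:

sample_data_sales.csv:
Product Name;Year;Sales
product A;2020;659

sample_data_employee.csv:
Employee Number;Product Name;Year;Sales
1;product A;2019;65
2;product A;2019;66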

We've upgraded the calculate_sales function from my last article by introducing 'year' as a filter parameter. This modification is added to showcase the advanced capabilities of the OpenAI GPT-4 model.

For context on what we've done previously, read my article: The LLM Series #2: Function Calling in OpenAI Models

def calculate_sales(product: str, year: int = None):
    df = read_sales_data()

    vals = list(df['Product Name'].unique())

    if product not in vals:
        return f"Invalid Product Name. Please choose from: {vals}"

    if year:
        df_filter = df[(df['Product Name'] == product) & (df['Year'] == year)]
    else:
        df_filter = df[df['Product Name'] == product]

    # Convert df to dictionary
    df_dict = df_filter.to_dict('records')

    return json.dumps(df_dict)

Additionally, I've introduced the calculate_employee_sales function, which allows for data filtering based on product, year, and employee number, or any individual combination of these. While there's room to expand this function's versatility by adding more parameters or choices, I've opted to keep things simple and straightforward. This approach aims to demonstrate OpenAI's capabilities while leaving room for your creativity to customize it further according to your specific use cases and requirements.

def calculate_employee_sales(product: str, year: int = None, year_condition: str = None, employee_number: int = None):
    df1 = read_employee_data()

    vals = list(df1['Product Name'].unique())

    if product not in vals:
        return f"Invalid Product Name. Please choose from: {vals}"

    # filter product
    df_filter = df1[df1['Product Name'] == product]

    # filter year; note that if year is given without a year_condition,
    # no year filter is applied (you'll see this in the outputs later on)
    if year and year_condition == 'after':
        df_filter = df_filter[df_filter['Year'] >= year]
    elif year and year_condition == 'before':
        df_filter = df_filter[df_filter['Year'] <= year]

    # filter employee_number
    if employee_number:
        df_filter = df_filter[df_filter['Employee Number'] == employee_number]

    df_dict = df_filter.to_dict('records')

    return json.dumps(df_dict)

Next up, we explicitly define the JSON schema that OpenAI models need in order to recognize the functions available for them to call. We provide a logical name for each function and a description of what it does. It's crucial to make this description as simple and comprehensive as possible, as a vague description might lead to misinterpretation by the model. We also define the inputs for the model and their types, whether string, integer, float, or another datatype. If you want to limit choices, an enum can be useful for recognizing the correct keyword in user queries. We define all of this in function_options, while functions_to_use is used at runtime when we extract and call the function.

The "anyOf" option in the JSON schema is a highly potent tool when you need to maintain optional parameters for your model. For example, if a user doesn't request output based on 'year', this parameter can simply be null, and the function will be invoked considering only the product, and vice versa.

This kind of flexibility allows users to interact with your models more intuitively, as they don't necessarily have to provide input for all available parameters. Instead, they can choose what's relevant to them, enhancing the overall user experience. Additionally, using "anyOf" can simplify how you manage scenarios with multiple possibilities, since it gives leeway for selective parameter involvement without impacting core functionality. It effectively reduces unnecessary complexity while maintaining dynamic interactions between users and models.

function_options = [
    {
        "name": "calculate_sales",
        "description": "Get the sales data for a given product name and year",
        "parameters": {
            "type": "object",
            "properties": {
                "product": {
                    "type": "string",
                    "enum": ['product A', 'product B', 'product C', 'product D', 'NOT LISTED']
                },
                "year": {
                    "anyOf": [
                        {
                            "type": "integer",
                            "enum": [2019, 2020, 2021, 2022, 2023]
                        },
                        {
                            "type": "null"
                        }
                    ]
                }
            },
            "required": ["product"],
        },
    },
    {
        "name": "calculate_employee_sales",
        "description": "Get the employee sales data for a given product name and optionally for year and/or employee number",
        "parameters": {
            "type": "object",
            "properties": {
                "product": {
                    "type": "string",
                    "enum": ['product A', 'product B', 'product C', 'product D', 'NOT LISTED']
                },
                "year": {
                    "anyOf": [
                        {
                            "type": "integer",
                            "enum": [2019, 2020, 2021, 2022, 2023]
                        },
                        {
                            "type": "null"
                        }
                    ]
                },
                "year_condition": {
                    "anyOf": [
                        {
                            "type": "string",
                            "enum": ['before', 'after']
                        },
                        {
                            "type": "null"
                        }
                    ]
                },
                "employee_number": {
                    "anyOf": [
                        {
                            "type": "integer"
                        },
                        {
                            "type": "null"
                        }
                    ]
                }
            },
            "required": ["product"],
        },
    }
]

functions_to_use = {
    "calculate_sales": calculate_sales,
    "calculate_employee_sales": calculate_employee_sales
}
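Concretely, under the calculate_sales schema above, each of the following argument payloads validates, because "anyOf" lets year be an in-range integer or null while "required" still demands a product (an illustration of the schema semantics, not model output):

{"product": "product A"}
{"product": "product A", "year": 2021}
{"product": "product A", "year": null}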

Designing a prompt is an art in itself: a well-designed prompt can significantly improve results and minimize unusual behaviors. As long as we have to contend with hallucinations in LLMs, this will remain a critical factor in any application that incorporates them. A well-thought-out prompt not only guides the AI's response but also influences the accuracy and relevance of the information provided. It essentially acts as a steering wheel for AI responses, which is why investing time and effort into designing effective prompts is worthwhile.

I strongly recommend reading a recent paper about prompting, which can be incredibly beneficial in creating effective prompts.

We then carefully designed our system prompt to ensure minimal hallucination and information leakage, and created a sample user query as input for our setup.

system_prompt = """You are an expert sales bot which provides deep dive insights and analysis related to products. \
You will be given some information about product as context and you will analyze the given data and only answer to queries when context or data \
about the specific product asked is given else response with: :I don't have enough knowledge about these products please contact a sales rep:. """


user_query = """how many product A were sold in year 2020"""

Building upon the components we've created, we now define our API call, which runs via openai.ChatCompletion.create. It requires the name of the model, an input message that includes the system message, context, and user query, along with the JSON schema of functions dictating how the model interacts with these particular tasks.

I have set function_call to 'auto' here, granting the model autonomy to choose between generating a message or calling a function.

input_message = [{"role": "system", "content": f"{system_prompt}"},
                 {"role": "user", "content": f"{user_query}"}]
response = openai.ChatCompletion.create(
    engine="gpt-4",
    messages=input_message,
    functions=function_options,
    function_call='auto'
)
model_response = response["choices"][0]["message"]
print(model_response)

For our test case, we execute a basic query with the prompt and function information. Here’s what the model responded with:

{
  "role": "assistant",
  "function_call": {
    "name": "calculate_sales",
    "arguments": "{\n\"product\": \"product A\",\n\"year\": \"2020\"\n}"
  }
}

So, based on the sample question, the model recognizes that it can respond by invoking a particular function. It replies with the function name it deems most suitable for handling the query and provides the necessary input parameters for it. Note that it returned "year" as the string "2020" even though the schema declares an integer, which is why the code further below coerces the value before calling the function.

However, before going into details of what to do with this output, let's see how many tokens we have used:

response['usage']
<OpenAIObject at 0x> JSON: {
  "prompt_tokens": 295,
  "completion_tokens": 22,
  "total_tokens": 317
}
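As a side note, if you want a rough local estimate of prompt size before sending a request, the tiktoken library can tokenize text the way GPT-4 does. This is just a sketch (tiktoken is not used elsewhere in this article, and the function schemas and message framing add tokens beyond this count):

import tiktoken

# Rough local token estimate for the text portion of the request
enc = tiktoken.encoding_for_model("gpt-4")
print(len(enc.encode(system_prompt + user_query)))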

Now, let's see how the model responds to this base query; we use the code below to extract the function details from the call and run the function to generate output.

function_name = model_response["function_call"]["name"]
function_to_call = functions_to_use[function_name]

function_args = json.loads(model_response["function_call"]["arguments"])
# The model may return 'year' as a string despite the integer schema, so coerce it
if function_args.get('year'):
    function_args['year'] = int(function_args['year'])
function_response = function_to_call(**function_args)

print(function_name)
print(function_to_call)
print(function_args)
print("Output of function call:")
print(function_response)
print()

Drawing upon the repertoire of two functions within our model's inventory, it successfully understood the user query and then accurately selected and applied the appropriate function and parameters to generate a response. This demonstrates how sophisticated OpenAI models (or LLMs in general) can be in interpreting human language and correctly choosing from their range of knowledge and resources for an accurate reply. It also underscores how crucial parameter selection is for tailoring each response to the specifics of individual user queries.

calculate_sales
<function calculate_sales at 0x0000027448A10940>
{'product': 'product A', 'year': 2020}
Output of function call:
[{"Product Name": "product A", "Year": 2020, "Sales": 659}]

Now, let's define a query that will theoretically activate both functions on our list, i.e., a multi-function call:

user_query = """how many product A were sold in year 2020 and give me top 3 employee which have most sales for product A in 2020"""

Now, instead of reviewing the output step by step (which we did last time to gain deeper insights into our function-calling pipeline), we will devise a script that runs the complete call loop until the model determines it no longer requires further function invocation.

You can find a step-by-step breakdown of this script in the previous article here.

Here is the full script for the call:

input_message = [{"role": "system", "content": f"{system_prompt}"},
                 {"role": "user", "content": f"{user_query}"}]
response = openai.ChatCompletion.create(
    engine="gpt-4",
    messages=input_message,
    functions=function_options,
    function_call='auto'
)

while response["choices"][0]["finish_reason"] == 'function_call':
    model_response = response["choices"][0]["message"]
    print(model_response)
    print("Function call Recommended by Model:")
    print(model_response.get("function_call"))
    print()

    # Extract and run the function the model asked for
    function_name = model_response["function_call"]["name"]
    function_to_call = functions_to_use[function_name]

    function_args = json.loads(model_response["function_call"]["arguments"])
    # Coerce 'year' in case the model returns it as a string
    if function_args.get('year'):
        function_args['year'] = int(function_args['year'])
    function_response = function_to_call(**function_args)

    print("Output of function call:")
    print(function_response)
    print()

    # Append the assistant's function call to the conversation history
    input_message.append(
        {
            "role": model_response["role"],
            "function_call": {
                "name": model_response["function_call"]["name"],
                "arguments": model_response["function_call"]["arguments"],
            },
            "content": None
        }
    )

    # Append the function's output so the model can use it as context
    input_message.append(
        {
            "role": "function",
            "name": function_name,
            "content": function_response,
        }
    )

    print("Messages before next request:")
    print()
    for message in input_message:
        print(message)
    print()

    # Ask the model again with the accumulated context
    response = openai.ChatCompletion.create(
        engine="gpt-4",
        messages=input_message,
        functions=function_options,
        function_call='auto'
    )
    model_response_2 = response["choices"][0]["message"]
    print(model_response_2)

In this script, we maintain a while loop that runs until the model determines it no longer needs to invoke any function. During this process, each function call and its corresponding output are logged and appended to the input message, serving as context for the next request. Once all function calls have been executed, this record of outputs and called functions is fed back into the model, helping it respond accurately to the user query.

In our specific case, after executing the while loop, we obtain an output as follows:

print(model_response_2['content'])
In 2020, 659 units of product A were sold.

The top 3 employees with most sales for product A in 2020 are:

1. Employee 4 with 70 units sold.
2. Employee 8 with 67 units sold.
3. Employee 11 with 64 units sold.
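To sanity-check this ranking, you can reproduce it directly from the function output with pandas (a quick sketch; function_response here is the JSON string calculate_employee_sales returned for product A):

# Load the function's JSON output and rank 2020 sales
top3 = (pd.DataFrame(json.loads(function_response))
        .query("Year == 2020")
        .sort_values("Sales", ascending=False)
        .head(3))
print(top3[["Employee Number", "Sales"]])
# Employees 4, 8, and 11 with 70, 67, and 64 units, matching the model's answer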

Sophisticated, isn't it? Now you have the opportunity to experiment with this more creatively, and with a more expanded and complex function inventory.

Also, let's look at the complete output of this multi-function calling frenzy:

{
  "role": "assistant",
  "function_call": {
    "name": "calculate_sales",
    "arguments": "{\n\"product\": \"product A\",\n\"year\": \"2020\"\n}"
  }
}
Function call Recommended by Model:
{
  "name": "calculate_sales",
  "arguments": "{\n\"product\": \"product A\",\n\"year\": \"2020\"\n}"
}

Output of function call:
[{"Product Name": "product A", "Year": 2020, "Sales": 659}]

Messages before next request:

{'role': 'system', 'content': "You are an expert sales bot which provides deep dive insights and analysis related to products. You will be given some information about product as context and you will analyze the given data and only answer to queries when context or data about the specific product asked is given else response with: :I don't have enough knowledge about these products please contact a sales rep:. "}
{'role': 'user', 'content': 'how many product A were sold in year 2020 and give me top 3 employee which have most sales for product A in 2020'}
{'role': 'assistant', 'function_call': {'name': 'calculate_sales', 'arguments': '{\n"product": "product A",\n"year": "2020"\n}'}, 'content': None}
{'role': 'function', 'name': 'calculate_sales', 'content': '[{"Product Name": "product A", "Year": 2020, "Sales": 659}]'}

{
  "role": "assistant",
  "content": "In the year 2020, a total of 659 units of Product A were sold. \n\nNow, let's see the top 3 employees who had the most sales for Product A in 2020.",
  "function_call": {
    "name": "calculate_employee_sales",
    "arguments": "{\n \"product\": \"product A\",\n \"year\": \"2020\"\n}"
  }
}
{
  "role": "assistant",
  "content": "In the year 2020, a total of 659 units of Product A were sold. \n\nNow, let's see the top 3 employees who had the most sales for Product A in 2020.",
  "function_call": {
    "name": "calculate_employee_sales",
    "arguments": "{\n \"product\": \"product A\",\n \"year\": \"2020\"\n}"
  }
}
Function call Recommended by Model:
{
  "name": "calculate_employee_sales",
  "arguments": "{\n\"product\": \"product A\",\n\"year\": \"2020\"\n}"
}

Output of function call:
[{"Employee Number": 1, "Product Name": "product A", "Year": 2019, "Sales": 65}, {"Employee Number": 2, "Product Name": "product A", "Year": 2019, "Sales": 66}, {"Employee Number": 3, "Product Name": "product A", "Year": 2019, "Sales": 63}, {"Employee Number": 4, "Product Name": "product A", "Year": 2019, "Sales": 58}, {"Employee Number": 5, "Product Name": "product A", "Year": 2019, "Sales": 68}, {"Employee Number": 6, "Product Name": "product A", "Year": 2019, "Sales": 63}, {"Employee Number": 7, "Product Name": "product A", "Year": 2019, "Sales": 63}, {"Employee Number": 8, "Product Name": "product A", "Year": 2019, "Sales": 71}, {"Employee Number": 9, "Product Name": "product A", "Year": 2019, "Sales": 73}, {"Employee Number": 10, "Product Name": "product A", "Year": 2019, "Sales": 66}, {"Employee Number": 11, "Product Name": "product A", "Year": 2019, "Sales": 77}, {"Employee Number": 12, "Product Name": "product A", "Year": 2019, "Sales": 51}, {"Employee Number": 1, "Product Name": "product A", "Year": 2020, "Sales": 46}, {"Employee Number": 2, "Product Name": "product A", "Year": 2020, "Sales": 55}, {"Employee Number": 3, "Product Name": "product A", "Year": 2020, "Sales": 49}, {"Employee Number": 4, "Product Name": "product A", "Year": 2020, "Sales": 70}, {"Employee Number": 5, "Product Name": "product A", "Year": 2020, "Sales": 51}, {"Employee Number": 6, "Product Name": "product A", "Year": 2020, "Sales": 56}, {"Employee Number": 7, "Product Name": "product A", "Year": 2020, "Sales": 54}, {"Employee Number": 8, "Product Name": "product A", "Year": 2020, "Sales": 67}, {"Employee Number": 9, "Product Name": "product A", "Year": 2020, "Sales": 47}, {"Employee Number": 10, "Product Name": "product A", "Year": 2020, "Sales": 42}, {"Employee Number": 11, "Product Name": "product A", "Year": 2020, "Sales": 64}, {"Employee Number": 12, "Product Name": "product A", "Year": 2020, "Sales": 58}, {"Employee Number": 1, "Product Name": "product A", "Year": 2021, "Sales": 55}, {"Employee Number": 2, "Product Name": "product A", "Year": 2021, "Sales": 48}, {"Employee Number": 3, "Product Name": "product A", "Year": 2021, "Sales": 61}, {"Employee Number": 4, "Product Name": "product A", "Year": 2021, "Sales": 47}, {"Employee Number": 5, "Product Name": "product A", "Year": 2021, "Sales": 63}, {"Employee Number": 6, "Product Name": "product A", "Year": 2021, "Sales": 55}, {"Employee Number": 7, "Product Name": "product A", "Year": 2021, "Sales": 68}, {"Employee Number": 8, "Product Name": "product A", "Year": 2021, "Sales": 58}, {"Employee Number": 9, "Product Name": "product A", "Year": 2021, "Sales": 75}, {"Employee Number": 10, "Product Name": "product A", "Year": 2021, "Sales": 68}, {"Employee Number": 11, "Product Name": "product A", "Year": 2021, "Sales": 62}, {"Employee Number": 12, "Product Name": "product A", "Year": 2021, "Sales": 69}, {"Employee Number": 1, "Product Name": "product A", "Year": 2022, "Sales": 30}, {"Employee Number": 2, "Product Name": "product A", "Year": 2022, "Sales": 25}, {"Employee Number": 3, "Product Name": "product A", "Year": 2022, "Sales": 29}, {"Employee Number": 4, "Product Name": "product A", "Year": 2022, "Sales": 25}, {"Employee Number": 5, "Product Name": "product A", "Year": 2022, "Sales": 22}, {"Employee Number": 6, "Product Name": "product A", "Year": 2022, "Sales": 32}, {"Employee Number": 7, "Product Name": "product A", "Year": 2022, "Sales": 16}, {"Employee Number": 8, "Product Name": "product A", "Year": 2022, "Sales": 26}, {"Employee Number": 9, 
"Product Name": "product A", "Year": 2022, "Sales": 21}, {"Employee Number": 10, "Product Name": "product A", "Year": 2022, "Sales": 24}, {"Employee Number": 11, "Product Name": "product A", "Year": 2022, "Sales": 23}, {"Employee Number": 12, "Product Name": "product A", "Year": 2022, "Sales": 19}, {"Employee Number": 1, "Product Name": "product A", "Year": 2023, "Sales": 72}, {"Employee Number": 2, "Product Name": "product A", "Year": 2023, "Sales": 65}, {"Employee Number": 3, "Product Name": "product A", "Year": 2023, "Sales": 81}, {"Employee Number": 4, "Product Name": "product A", "Year": 2023, "Sales": 86}, {"Employee Number": 5, "Product Name": "product A", "Year": 2023, "Sales": 79}, {"Employee Number": 6, "Product Name": "product A", "Year": 2023, "Sales": 93}, {"Employee Number": 7, "Product Name": "product A", "Year": 2023, "Sales": 74}, {"Employee Number": 8, "Product Name": "product A", "Year": 2023, "Sales": 67}, {"Employee Number": 9, "Product Name": "product A", "Year": 2023, "Sales": 98}, {"Employee Number": 10, "Product Name": "product A", "Year": 2023, "Sales": 77}, {"Employee Number": 11, "Product Name": "product A", "Year": 2023, "Sales": 80}, {"Employee Number": 12, "Product Name": "product A", "Year": 2023, "Sales": 63}]

Messages before next request:

{'role': 'system', 'content': "You are an expert sales bot which provides deep dive insights and analysis related to products. You will be given some information about product as context and you will analyze the given data and only answer to queries when context or data about the specific product asked is given else response with: :I don't have enough knowledge about these products please contact a sales rep:. "}
{'role': 'user', 'content': 'how many product A were sold in year 2020 and give me top 3 employee which have most sales for product A in 2020'}
{'role': 'assistant', 'function_call': {'name': 'calculate_sales', 'arguments': '{\n"product": "product A",\n"year": "2020"\n}'}, 'content': None}
{'role': 'function', 'name': 'calculate_sales', 'content': '[{"Product Name": "product A", "Year": 2020, "Sales": 659}]'}
{'role': 'assistant', 'function_call': {'name': 'calculate_employee_sales', 'arguments': '{\n"product": "product A",\n"year": "2020"\n}'}, 'content': None}
{'role': 'function', 'name': 'calculate_employee_sales', 'content': '[{"Employee Number": 1, "Product Name": "product A", "Year": 2019, "Sales": 65}, {"Employee Number": 2, "Product Name": "product A", "Year": 2019, "Sales": 66}, {"Employee Number": 3, "Product Name": "product A", "Year": 2019, "Sales": 63}, {"Employee Number": 4, "Product Name": "product A", "Year": 2019, "Sales": 58}, {"Employee Number": 5, "Product Name": "product A", "Year": 2019, "Sales": 68}, {"Employee Number": 6, "Product Name": "product A", "Year": 2019, "Sales": 63}, {"Employee Number": 7, "Product Name": "product A", "Year": 2019, "Sales": 63}, {"Employee Number": 8, "Product Name": "product A", "Year": 2019, "Sales": 71}, {"Employee Number": 9, "Product Name": "product A", "Year": 2019, "Sales": 73}, {"Employee Number": 10, "Product Name": "product A", "Year": 2019, "Sales": 66}, {"Employee Number": 11, "Product Name": "product A", "Year": 2019, "Sales": 77}, {"Employee Number": 12, "Product Name": "product A", "Year": 2019, "Sales": 51}, {"Employee Number": 1, "Product Name": "product A", "Year": 2020, "Sales": 46}, {"Employee Number": 2, "Product Name": "product A", "Year": 2020, "Sales": 55}, {"Employee Number": 3, "Product Name": "product A", "Year": 2020, "Sales": 49}, {"Employee Number": 4, "Product Name": "product A", "Year": 2020, "Sales": 70}, {"Employee Number": 5, "Product Name": "product A", "Year": 2020, "Sales": 51}, {"Employee Number": 6, "Product Name": "product A", "Year": 2020, "Sales": 56}, {"Employee Number": 7, "Product Name": "product A", "Year": 2020, "Sales": 54}, {"Employee Number": 8, "Product Name": "product A", "Year": 2020, "Sales": 67}, {"Employee Number": 9, "Product Name": "product A", "Year": 2020, "Sales": 47}, {"Employee Number": 10, "Product Name": "product A", "Year": 2020, "Sales": 42}, {"Employee Number": 11, "Product Name": "product A", "Year": 2020, "Sales": 64}, {"Employee Number": 12, "Product Name": "product A", "Year": 2020, "Sales": 58}, {"Employee Number": 1, "Product Name": "product A", "Year": 2021, "Sales": 55}, {"Employee Number": 2, "Product Name": "product A", "Year": 2021, "Sales": 48}, {"Employee Number": 3, "Product Name": "product A", "Year": 2021, "Sales": 61}, {"Employee Number": 4, "Product Name": "product A", "Year": 2021, "Sales": 47}, {"Employee Number": 5, "Product Name": "product A", "Year": 2021, "Sales": 63}, {"Employee Number": 6, "Product Name": "product A", "Year": 2021, "Sales": 55}, {"Employee Number": 7, "Product Name": "product A", "Year": 2021, "Sales": 68}, {"Employee Number": 8, "Product Name": "product A", "Year": 2021, "Sales": 58}, {"Employee Number": 9, "Product Name": "product A", "Year": 2021, "Sales": 75}, {"Employee Number": 10, "Product Name": "product A", "Year": 2021, "Sales": 68}, {"Employee Number": 11, "Product Name": "product A", "Year": 2021, "Sales": 62}, {"Employee Number": 12, "Product Name": "product A", "Year": 2021, "Sales": 69}, {"Employee Number": 1, "Product Name": "product A", "Year": 2022, "Sales": 30}, {"Employee Number": 2, "Product Name": "product A", "Year": 2022, "Sales": 25}, {"Employee Number": 3, "Product Name": "product A", "Year": 2022, "Sales": 29}, {"Employee Number": 4, "Product Name": "product A", "Year": 2022, "Sales": 25}, {"Employee Number": 5, "Product Name": "product A", "Year": 2022, "Sales": 22}, {"Employee Number": 6, "Product Name": "product A", "Year": 2022, "Sales": 32}, {"Employee Number": 7, "Product Name": "product A", "Year": 2022, "Sales": 16}, {"Employee Number": 8, "Product 
Name": "product A", "Year": 2022, "Sales": 26}, {"Employee Number": 9, "Product Name": "product A", "Year": 2022, "Sales": 21}, {"Employee Number": 10, "Product Name": "product A", "Year": 2022, "Sales": 24}, {"Employee Number": 11, "Product Name": "product A", "Year": 2022, "Sales": 23}, {"Employee Number": 12, "Product Name": "product A", "Year": 2022, "Sales": 19}, {"Employee Number": 1, "Product Name": "product A", "Year": 2023, "Sales": 72}, {"Employee Number": 2, "Product Name": "product A", "Year": 2023, "Sales": 65}, {"Employee Number": 3, "Product Name": "product A", "Year": 2023, "Sales": 81}, {"Employee Number": 4, "Product Name": "product A", "Year": 2023, "Sales": 86}, {"Employee Number": 5, "Product Name": "product A", "Year": 2023, "Sales": 79}, {"Employee Number": 6, "Product Name": "product A", "Year": 2023, "Sales": 93}, {"Employee Number": 7, "Product Name": "product A", "Year": 2023, "Sales": 74}, {"Employee Number": 8, "Product Name": "product A", "Year": 2023, "Sales": 67}, {"Employee Number": 9, "Product Name": "product A", "Year": 2023, "Sales": 98}, {"Employee Number": 10, "Product Name": "product A", "Year": 2023, "Sales": 77}, {"Employee Number": 11, "Product Name": "product A", "Year": 2023, "Sales": 80}, {"Employee Number": 12, "Product Name": "product A", "Year": 2023, "Sales": 63}]'}

{
  "role": "assistant",
  "content": "A total of 659 units of product A were sold in 2020.\n\nTop 3 employees who sold Product A the most in 2020 are:\n\n1. Employee Number 4 with 70 sales.\n2. Employee Number 8 with 67 sales.\n3. Employee Number 11 with 64 sales."
}

Now, if you focus on the output after the first "Messages before next request:" block, you'll notice something intriguing: the assistant's next message both answers the first part of the query and announces the follow-up function call, highlighting how these models direct their own next steps.

Once these models generate an initial output, they understand that more information is necessary before providing a final response to the user query. Consequently, they proceed by invoking the model again and again within their framework. This process can be viewed as a chain-of-thought mechanism. This characteristic not only demonstrates their intelligence but also indicates their potential for handling increasingly complex tasks while constantly improving accuracy and relevancy based on the context accumulated along the way.

The usage is noticeably higher than in the single-function-call case; this increase comes from invoking multiple functions and appending the full history for the model's contextual understanding.

response['usage']
<OpenAIObject at 0x> JSON: {
  "prompt_tokens": 2078,
  "completion_tokens": 66,
  "total_tokens": 2144
}

Here’s an interesting learning scenario to consider. What do you think would happen if we altered the query in such a way that it doesn’t align with our functions? Can you take a guess at the outcome? How many times do you reckon the function will be invoked under these circumstances?

This example is not only intriguing, but it also provides valuable insight into how adaptable and flexible AI models can be when faced with unexpected or deviating inputs.

user_query = """Tell me top sold product in year 2020 and 2023 and give me top 1 employee which have most sales 2021"""

Allow me to unveil the mystery. This query triggers many iterations of the while loop, and if we were dealing with a dataset of considerable size, it could result in an immense number of function calls.

Given that the model’s output might be too vast to include in this article, let’s focus our attention on the token usage instead. It provides key insight into how resources are utilized during this operation and can help us understand its efficiency or identify areas for improvement.

print(response['usage'])
{
  "prompt_tokens": 7604,
  "completion_tokens": 57,
  "total_tokens": 7661
}

As you can see, our current set of functions wasn’t designed to handle such queries. A slight deviation in the query that requires more data, or even a dataset with specific parameters or size, could overwhelm our API call. This would likely result in an error due to exceeding the token limit. This demonstrates the importance of building a robust and flexible multi-calling structure capable of handling diverse inputs without compromising performance or functionality. It also underscores why it’s crucial to consider potential edge cases during the development process.

Please note that this is a very important point to consider when designing your functions for such projects, as the token limit is currently a real challenge.

However, the new GPT-4 Turbo model provides a more generous token limit (128k input) that simplifies many use cases and makes them easier to implement. If you are using older versions in the meantime, I'd recommend devising strategies for how best to define your functions. Carefully consider what limits to set on user queries, both to ensure optimal performance and to prevent unnecessary resource usage. In addition, think about when it would be appropriate to terminate calls or sessions based on these predefined limits; a minimal guard is sketched below. Incorporating a feedback mechanism is another valuable strategy: it gives users the opportunity to refine their queries in a production environment based on real-time responses from the system.
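Here is a minimal sketch of such a guard around the function-calling loop from earlier (MAX_CALLS and MAX_TOTAL_TOKENS are hypothetical thresholds you would tune to your model's context window):

MAX_CALLS = 5            # hypothetical cap on function invocations per query
MAX_TOTAL_TOKENS = 6000  # hypothetical token budget below the context limit

calls = 0
while response["choices"][0]["finish_reason"] == 'function_call':
    calls += 1
    if calls > MAX_CALLS or response["usage"]["total_tokens"] > MAX_TOTAL_TOKENS:
        print("Budget exceeded; ask the user to refine the query.")
        break
    # ... same function execution and message appending as in the script above ...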

There is also some good news in this context, as newer versions of GPT-4 models are capable of executing parallel function calls. This feature eliminates the need for repetitive API calls to retrieve all relevant context; instead, a single call can trigger all necessary functions. This enhancement significantly increases the efficiency and effectiveness of OpenAI models in handling data-centric use cases. It streamlines processes and allows quicker access to information, making these models an even more crucial tool in the developer's arsenal. Parallel function calls save not only time but also computational resources, which is critical when operating at scale, and they amplify the power developers can harness from these AI tools to create robust and scalable solutions with ease.

You can find more details here. At the time of penning this article, I have yet to update my gpt-4 model version. Therefore, I will share a script that you can execute directly to implement parallel calling and enhance your applications or workflows.

Significantly, there are two key updates from the previous API version to be aware of. First, it is now necessary to instantiate a client for initiating requests. Second, instead of functions and function_call, we now work with tools and tool_choice; in the tools format, each function schema must be wrapped in a {"type": "function", "function": {...}} object, so we wrap our existing function_options below.

from openai import OpenAI

client = OpenAI()

# Wrap each existing schema in the {"type": "function", ...} envelope
# that the tools API expects
tools = [{"type": "function", "function": f} for f in function_options]

input_message = [{"role": "system", "content": f"{system_prompt}"},
                 {"role": "user", "content": f"{user_query}"}]

completion = client.chat.completions.create(
    model="gpt-4",
    messages=input_message,
    tools=tools,
    tool_choice="auto"
)

print(completion)
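When the model does issue parallel calls, the response carries a list of tool_calls rather than a single function_call. Here is a sketch of handling them with the v1 SDK, reusing functions_to_use from earlier (each result must be fed back tagged with its tool_call_id):

message = completion.choices[0].message
if message.tool_calls:
    # The assistant message (with its tool_calls) must precede the tool results
    input_message.append(message)
    for tool_call in message.tool_calls:
        fn = functions_to_use[tool_call.function.name]
        args = json.loads(tool_call.function.arguments)
        result = fn(**args)
        input_message.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result,
        })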

It's your time to shine! Utilize the knowledge you've acquired to automate tasks and take efficiency to a new level. I eagerly anticipate hearing about your achievements, so feel free to share them in the comments section. Also, if you haven't had a chance to read the previous article of this LLM series yet, don't worry: here's the link for you!

That's it for today. But rest assured, our journey is far from over! In the next chapter of the LLM series, we will develop a RAG application, an approach that is emerging as the de facto standard for utilizing data outside the knowledge base of LLMs. If this guide has sparked your curiosity and you are keen on exploring more intriguing projects within this LLM series, make sure to follow me. With each new project, I promise a journey filled with learning, creativity, and fun. Furthermore:

Delighted by the above piece? These additional recommendations will surely pique your interest:

The LLM Series #2: Function Calling in OpenAI Models: A Practical Guide

Engineering LLMs for Analytics


The LLM Series #1: The Fast Track to Fine-Tuning Mastery with Azure

Hassle-free with minimal coding

