

Making Bayesian Optimization Algorithm Simple for Practical Applications

Last Updated on July 5, 2024 by Editorial Team

Author(s): Hamid Rasoulian

Originally published on Towards AI.

Image by Author

The goal of this article is to show an easy implementation of Bayesian Optimization for solving real-world problems.

Contrary to machine learning modeling, where the goal is to find a mapping between input and output by utilizing a rather large set of data, in optimization we are not interested in defining the exact algorithm inside the black box. We also do not have the luxury of applying many inputs, perhaps because the process is too time-consuming or too costly. All we are looking for is one magical combination of input variables that produces the smallest output, and we want to achieve that by examining only a limited number of input values applied to the black box.

This problem is prevalent in every discipline. Regardless of where you work, you will face situations where you want to optimize a metric in your process, whether it is cost, resources, time to market, quality, or reliability. In all cases, you have a few parameters, or knobs, you can turn, and you want to find the magical input values that give you the best output value with the smallest number of trials.

The situation becomes trickier if the black-box output has several local minima alongside one global minimum: how can we avoid being trapped in one of the local minima and missing the global one?
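To make that concern concrete, here is a small hypothetical function (not the one used later in this article) with exactly this shape; everything in the snippet is illustrative:

import numpy as np

# Hypothetical multi-modal function: the sin term creates many local
# dips, while the quadratic term makes the dip near x ≈ -0.5 the
# deepest one. A naive local search started far from it can get stuck
# in one of the shallower dips.
def bumpy(x):
    return 0.1 * x**2 + np.sin(3 * x)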

In this article, we show how the Bayesian Optimization algorithm, in conjunction with data coming from the field, can discover the optimum point of the process.

You might be sitting at your computer running a Bayesian Optimization algorithm, while the physical black box sits in a lab some distance away. You act as a middleman, talking to both sides. For the algorithm, we use the scikit-optimize (skopt) package, which builds on scikit-learn. You can install this open-source package using the command:

pip install scikit-optimize


The heart of the algorithm is gp_minimize, a function that performs Bayesian optimization using Gaussian processes. For simplicity, let's call this magical function the "AI Genie": you act as the go-between for the AI Genie, which runs on your PC, and your physical black box. The goal of the AI Genie is to find the minimum output of the black box in as few trials as possible. To make things even simpler, assume the black box has only one input; the process easily expands to the multi-input case. The picture below shows all the characters in this process:

Image by Author

Here is the actual code:

import numpy as np
from skopt import gp_minimize
from skopt.space import Real
from skopt.utils import use_named_args
import matplotlib.pyplot as plt

# Define the search space (let's assume we're searching within -100 to 100)
search_space = [Real(-100, 100, name='X')]

# Objective function that interacts with the user
@use_named_args(search_space)
def objective_function(X):
    # Print the value of X
    print(f"Enter this value into the black box: X={X}", flush=True)

    # Ask the user to input the corresponding Y value from the black box
    Y = float(input("Enter the value returned by the black box (Y): "))

    # Return the Y value as the result of the objective function
    return Y

# Perform Bayesian Optimization
result = gp_minimize(objective_function, search_space, n_calls=15, random_state=0)

# Print the result
print(f"Optimal value found: X = {result.x[0]}")
print(f"Minimum value of the function: Y = {result.fun}")

# Plot the convergence
plt.plot(result.func_vals)
plt.xlabel('Number of calls')
plt.ylabel('Function value')
plt.title('Convergence Plot')
plt.show()

Let’s examine the code in more detail:

1- Import required libraries

import numpy as np
from skopt import gp_minimize
from skopt.space import Real
from skopt.utils import use_named_args
import matplotlib.pyplot as plt

gp_minimize is the main function driving the optimization process.

For the input parameters to the black box, you can use Integer, Real, and Categorical dimensions. Here, we assume we have just one Real-valued input.

use_named_args is a decorator supplied by skopt; it unpacks the list of parameter values the optimizer draws from the search space into named arguments of the objective function, as the sketch below illustrates.

2- Define search space

# Define the search space (let's assume we're searching within -100 to 100)
search_space = [Real(-100, 100, name='X')]

This offers the optimizer the range of valid values the input can take. For example, here we have one input called "X", which can take a float value between -100 and 100.
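For the multi-input case mentioned earlier, the search space simply grows to one entry per knob. A minimal sketch, with made-up names, mixing the three dimension types skopt supports:

from skopt.space import Real, Integer, Categorical

search_space_multi = [
    Real(-100, 100, name='X'),                  # continuous knob
    Integer(1, 50, name='batch'),               # integer-valued knob
    Categorical(['low', 'high'], name='mode'),  # discrete choice
]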

3- Black-Box Representation

# Objective function that interacts with the user
@use_named_args(search_space)
def objective_function(X):
    # Print the value of X
    print(f"Enter this value into the black box: X={X}", flush=True)

    # Ask the user to input the corresponding Y value from the black box
    Y = float(input("Enter the value returned by the black box (Y): "))

    # Return the Y value as the result of the objective function
    return Y

The objective function represents the black-box functionality. Conceptually, the black box sits inside the objective function: the objective function receives an input value drawn from the search space and hands it to the black box, which processes the input and produces an output; the objective function then returns that output to the optimizing algorithm.

What makes this article different is that we act as the black box inside the objective function. The function prints the input value passed to it, then pauses execution while we carry that value to the lab and feed it to the physical or virtual black box. We read off the black box's output, return to the waiting objective function, and type the value in. Finally, the objective function returns that value to the optimizer, and we wait for the optimizer's next suggested input.
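If your black box can be queried programmatically rather than through a human middleman, the same objective function shrinks to a direct call. This is only a sketch: query_black_box is a hypothetical helper standing in for whatever API reaches your lab equipment.

@use_named_args(search_space)
def objective_function(X):
    # query_black_box is hypothetical; replace it with your own
    # instrument call, HTTP request, or simulation.
    return query_black_box(X)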

4- Main Bayesian Optimizer function

# Perform Bayesian Optimization
result = gp_minimize(objective_function, search_space, n_calls=15, random_state=0)

This is the heart of the algorithm, which we have been calling the AI Genie. The first parameter is the objective function (which holds the black box inside); the next is search_space; n_calls limits the number of trials (here we ask the AI Genie to find the minimum output of the black box within 15 trials); and random_state seeds the random number generator so runs are reproducible.
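gp_minimize also exposes a few optional knobs worth knowing about; the values below are illustrative, not recommendations:

result = gp_minimize(
    objective_function,
    search_space,
    n_calls=15,           # total number of black-box evaluations
    n_initial_points=5,   # random probes before the Gaussian process takes over
    acq_func="EI",        # acquisition function (expected improvement)
    random_state=0,       # for reproducibility
)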

5- Printing the results

# Print the result
print(f"Optimal value found: X = {result.x[0]}")
print(f"Minimum value of the function: Y = {result.fun}")

This prints the input value (X) that minimizes the black box and the corresponding minimum output (Y).
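The result object returned by gp_minimize also records the full trial history, which is handy for auditing what the AI Genie tried:

# result.x_iters holds every input tried; result.func_vals the outputs.
for x, y in zip(result.x_iters, result.func_vals):
    print(f"tried X = {x[0]:.6f} -> got Y = {y:.6f}")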

Execution

Assume you have set everything up and are ready to run the experiment. You have no idea what is inside the black box; you just know that for any input you give it, it provides an output. So let's start the experiment:

1- The first number the optimizer gives you is 18.568924; the optimizer picks this very first number at random from the range of available input values.

2- Take this number to the black box, enter it, and wait for the output. The black box returns 363.373849.

3- Take this output back to the optimizer and enter it; then wait for the optimizer to provide the next number: 68.853150.

4- You have now finished one round; continue this process until you exhaust the number of trials (n_calls).

Here, X is the number suggested by the AI Genie to try on the black box, and Y is the output from the black box.
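With the numbers above, the first console exchanges look roughly like this (later trials elided):

Enter this value into the black box: X=18.568924
Enter the value returned by the black box (Y): 363.373849
Enter this value into the black box: X=68.853150
Enter the value returned by the black box (Y): ...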

The final result is given below:

Optimal value found: X = -0.49669415594226507

Minimum value of the function: Y = -0.24998907139506593

Let’s plot the convergence:

# Plot the convergence
plt.plot(result.func_vals)
plt.xlabel('Number of calls')
plt.ylabel('Function value')
plt.title('Convergence Plot')
plt.show()

Image by Author

Notice that in the range -100 to 100 there are infinitely many float values the AI Genie could choose from, yet after testing only a handful of them, it has essentially located the minimum within about 10 trials.
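If you prefer a plot of the best value found so far, rather than the raw value of each trial, skopt ships a ready-made helper:

from skopt.plots import plot_convergence
import matplotlib.pyplot as plt

plot_convergence(result)
plt.show()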

Verification

Now that the experiment is concluded, how do I know that the AI Genie really found the optimum value, and how do I verify it?

In real-world situations, we genuinely do not know what is inside the black box, and we do not need to know; we are interested only in the minimum output. But here, purely to test how accurately the AI Genie finds the optimum, I went to the black box in the lab and placed a function I know inside it, without ever exposing it to the AI Genie. The function I placed there was:

Y = X**2 + X

We can find the minimum value of this function analytically: take the derivative, set it equal to zero, and solve.

dY/dX = 2X + 1

2X + 1 = 0

X = -0.5, Y = -0.25
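As a quick numeric cross-check, plugging both the analytic optimum and the AI Genie's answer into the now-revealed function shows how close they land:

f = lambda x: x**2 + x
print(f(-0.5))                  # -0.25, the true minimum
print(f(-0.49669415594226507))  # approximately -0.2499891, the AI Genie's answer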

The values Bayesian Optimization found, without ever seeing this equation, are extremely close, which verifies the power of the algorithm.

This is what makes the Bayesian Optimization algorithm so powerful. We should seriously consider using it more often to find optimal points for any process wherever possible.
