Published on Saturday 21 September 2019
Using Hyperopt in Python to minimize any function
Hyperopt can be used to optimize black-box functions through Bayesian optimization, a better approach than purely random guessing.
Why use it?
If you have an arbitrary function returning real values, it may not be possible to apply classical derivative-based approaches to search for global minima (or maxima). For this kind of problem there are several solutions and implementations.
Unlike random automated parameter-tuning approaches, Bayesian optimization methods choose the next hyperparameter values based on the results of past evaluations.
Bayesian optimization is a sequential design strategy for global optimization of black-box functions that doesn't require derivatives. (Wikipedia)
Hyperopt is a Python library implementing this approach, using the Tree-structured Parzen Estimator (TPE) as its Bayesian optimization algorithm.
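To make the contrast with random search concrete, here is a minimal sketch on a toy one-dimensional objective (my own example, not from the page linked below): hyperopt exposes plain random search as rand.suggest alongside the Bayesian tpe.suggest, so the two can be swapped directly.

from hyperopt import fmin, hp, rand, tpe

# Toy objective: minimize (x - 3)^2 over x in [-10, 10]
def objective(x):
    return (x - 3) ** 2

space = hp.uniform('x', -10, 10)

# Random search: each candidate is sampled independently of past results
best_random = fmin(fn=objective, space=space, algo=rand.suggest, max_evals=100)

# Bayesian (TPE) search: each candidate is proposed from a model of past results
best_tpe = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100)

print(best_random, best_tpe)  # both should end up close to {'x': 3.0}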
Example
A more readable and complete explanation (with plots!) of the Python code is available on this HTML page, which can also be found in my GitHub repository as a Jupyter Notebook and a PDF.
A minimal Python snippet is shown below, but I suggest checking the above-mentioned HTML page first.
import numpy as np
from hyperopt import fmin, hp, tpe

def my_fcn(x):
    # Objective to minimize; x holds the sampled values, in search-space order
    return np.sin(x[0] * (x[1]**2 - x[2]) / x[3]) * np.cos(x[0])

x_mins_dict = fmin(
    fn=my_fcn,
    space=[hp.uniform('x_1', -100, 100),   # search range for x[0]: -100 to 100
           hp.uniform('x_2', -200, 100),   # search range for x[1]: -200 to 100
           hp.uniform('x_3', 0, 50),       # search range for x[2]: 0 to 50
           hp.uniform('x_4', -100, -20)    # search range for x[3]: -100 to -20
          ],
    algo=tpe.suggest,  # Tree-structured Parzen Estimator, hyperopt's Bayesian algorithm
    max_evals=500      # stop searching after 500 evaluations
)
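Note that fmin returns a dictionary keyed by the labels passed to hp.uniform ('x_1' to 'x_4' here), not a plain list, so to evaluate my_fcn at the best point found you can rebuild the input list yourself. A short sketch:

# Best values found, keyed by label, e.g. {'x_1': ..., 'x_2': ..., ...}
print(x_mins_dict)

# Rebuild the input in the order my_fcn expects and evaluate it there
best_x = [x_mins_dict['x_1'], x_mins_dict['x_2'],
          x_mins_dict['x_3'], x_mins_dict['x_4']]
print(my_fcn(best_x))

Alternatively, hyperopt's space_eval helper maps the returned dictionary back onto the original search-space structure.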