Providing hints in parameters space to Ax #136

Closed
riyadparvez opened this issue Aug 2, 2019 · 5 comments

@riyadparvez

We can specify range parameters. Is there any way to force Ax to try some specific values of a parameter within that range? For instance:

parameters=[
    {
        "name": "x1",
        "type": "range",
        "bounds": [-10.0, 10.0],
    },
],

What I am looking for is something similar to this:

parameters=[
    {
        "name": "x1",
        "type": "range",
        "bounds": [-10.0, 10.0],
        "must_try": [0.0, 5.0],  # "must_try" is just a placeholder name
    },
],

In the above, "must_try" would dictate which values of the parameter Ax must try; it is, in a sense, giving hints to Ax. Is it possible to do this right now?

@kkashin
Contributor

kkashin commented Aug 2, 2019

@riyadparvez - I'm assuming you mean specifying certain parameter values that you want evaluated during the exploration phase, before Bayesian optimization actually kicks in?

In that case, you are able to just attach custom trials to Ax, pass them to your evaluation function, and then report the result back to Ax (if using the service API). You can see an example here.

In the case of a one-dimensional search space like you have above, it would mean:

params1, trial_index1 = ax.attach_trial(parameters={"x1": 0.0})
params2, trial_index2 = ax.attach_trial(parameters={"x1": 5.0})

# run your evaluation here...

ax.complete_trial(trial_index1, raw_data=...)  # replace ... with your first evaluation result
ax.complete_trial(trial_index2, raw_data=...)  # replace ... with your second evaluation result

If you have more than one parameter, you will have to manually set the other parameters as well. At this point, we don't have functionality that tells Ax it must try certain values of one parameter in a range while keeping the other parameters completely flexible (at least through the Service API); that is essentially the problem of putting a strong prior on where you want the quasi-random search to go.

The closest you could get, if you have a lot of parameters and don't want to specify them manually, is to do something custom via the developer API that generates points from multiple search spaces: one that is a broad random search, and one that is a more narrow search. I can show you how to do that if you're interested, but hopefully the custom trials address your needs.
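For reference, attaching a custom trial with more than one parameter would look something like this minimal sketch (the parameter names, the value chosen for x2, and the my_eval function are hypothetical placeholders):

# Sketch: with a two-parameter search space (hypothetical names x1, x2),
# the "hint" value for x1 is fixed and x2 has to be chosen by hand,
# since Ax won't fill it in for an attached trial.
params, trial_index = ax.attach_trial(parameters={"x1": 0.0, "x2": 2.5})
ax.complete_trial(trial_index=trial_index, raw_data=my_eval(params))  # my_eval: your own evaluation function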

@kkashin kkashin self-assigned this Aug 2, 2019
@riyadparvez
Author

Thanks a lot! It worked!
Also, is it possible to find out which trials are custom trials and which have been generated by Ax? I know there is an easy workaround; I was just thinking it'd be nice to have an API for this from Ax.

@kkashin
Contributor

kkashin commented Aug 2, 2019

Awesome!

Not in a very straightforward way at the moment, though it's a TODO for us that we're going to roll into the functionality for returning all trials from the Service API that @lena-kashtelyan mentioned here: #132.

In the meantime, here's an example of what you can do (I based this off of the Service API tutorial, https://ax.dev/versions/latest/tutorials/gpei_hartmann_service.html):

import numpy as np

from ax.plot.contour import plot_contour
from ax.plot.trace import optimization_trace_single_method
from ax.service.ax_client import AxClient
from ax.metrics.branin import branin
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting

ax = AxClient()

ax.create_experiment(
    name="hartmann_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x3",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x4",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x5",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x6",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
    ],
    objective_name="hartmann6",
    minimize=True,  # Optional, defaults to False.
)

def evaluate(parameters):
    x = np.array([parameters.get(f"x{i+1}") for i in range(6)])
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x ** 2).sum()), 0.0)}

# add a custom arm
custom_params, trial_index = ax.attach_trial(
    parameters={"x1": 0.0, "x2": 0.0, "x3": 0.0, "x4": 1.0, "x5": 1.0, "x6": 1.0}
)
ax.complete_trial(trial_index=trial_index, raw_data=evaluate(custom_params))

for i in range(15):
    print(f"Running trial {i+1}/15...")
    parameters, trial_index = ax.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))

# here's how you get the origin of the trials (what model created them)

# model key is None because it's a custom configuration
ax.experiment.trials[0].generator_run._model_key

# model key is 'Sobol' because it's a quasi-random configuration
ax.experiment.trials[1].generator_run._model_key

# model key is 'GPEI' because it was generated using Bayesian optimization
ax.experiment.trials[12].generator_run._model_key
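
And if you want to see the origin of all trials at once, here is a minimal sketch along the same lines (note that _model_key is a private attribute, so this may change between Ax versions):

# Loop over all trials: None means a manually attached (custom) trial,
# 'Sobol' a quasi-random one, and 'GPEI' one generated by Bayesian optimization.
for index, trial in ax.experiment.trials.items():
    print(index, trial.generator_run._model_key)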

@lena-kashtelyan
Contributor

@riyadparvez, did @kkashin's answer fully take care of your issue?

@riyadparvez
Author

@lena-kashtelyan yes, it does! Sorry for the late reply! Thanks a lot!
