
get_countour_plot() not plotting all trials #2221

Open
davifebba opened this issue Feb 23, 2024 · 4 comments
Labels: bug (Something isn't working)

davifebba commented Feb 23, 2024

Hello, I'm learning Ax and playing with the Service API tutorial code. I noticed that render(ax_client.get_contour_plot()) does not display all trials over the response surface; it shows only (number of trials - 1) samples. How can I display the full set of samples on the response surface?

For example, running an optimization campaign for 10 trials:

from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.utils.measurement.synthetic_functions import hartmann6, Branin
from ax.utils.notebook.plotting import init_notebook_plotting, render
from ax.modelbridge.registry import Models
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.models.torch.botorch_modular.surrogate import Surrogate
from botorch.acquisition.monte_carlo import qNoisyExpectedImprovement
from botorch.models.gp_regression import FixedNoiseGP

import numpy as np

init_notebook_plotting()

gs = GenerationStrategy(
    steps=[
        GenerationStep(  # Initialization step
            # Which model to use for this step
            model=Models.SOBOL,
            # How many generator runs (each of which is then made a trial) to produce with this step
            num_trials=5,
            # How many trials generated from this step must be COMPLETED before the next one
            min_trials_observed=5,
        ),
        GenerationStep(  # BayesOpt step
            model=Models.BOTORCH_MODULAR,
            # No limit on how many generator runs will be produced
            num_trials=-1,
            model_kwargs={  # Kwargs to pass to BoTorchModel.__init__
                "surrogate": Surrogate(FixedNoiseGP),
                "botorch_acqf_class": qNoisyExpectedImprovement,
            },
        ),
    ]
)

ax_client = AxClient(generation_strategy=gs)

ax_client.create_experiment(
    name="hartmann_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x3", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x4", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x5", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x6", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objectives={"hartmann6": ObjectiveProperties(minimize=True)},
    parameter_constraints=["x1 + x2 <= 2.0"],  # Optional.
    # outcome_constraints=["l2norm <= 1.25"],  # Optional.
)

def evaluate(parameters):
    x = np.array([parameters.get(f"x{i+1}") for i in range(6)])
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x**2).sum()), 0.0)}

for i in range(10):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to an external system.
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))

render(ax_client.get_contour_plot())

(screenshots ax_1 and ax_2 attached: contour plots showing only 9 of the 10 trials)

mgarrard (Contributor) commented:

Hey @davifebba, thanks for the question. I can't see the full data frame that shows the parameterizations, but my hunch is that one of the arms is out-of-sample, likely one of the Sobol arms. The contour plot shows the in-sample trials. Could you paste the full dataframe or check whether all the arms are in-sample?

davifebba (Author) commented:

Thanks for replying! Could you please advise on how to check if all the arms are in-sample?
From another run of the script above, I have this data frame below:
(screenshot ax_3 attached: trials data frame)

mgarrard (Contributor) commented:

Hi @davifebba, I recreated the notebook in Colab and confirmed you are right: the last trial that was run isn't currently being populated in the contour plots. I'll flag this as a bug and look into a solution; just wanted to update you. In the meantime the plots should still be fairly informative, and the dataframes are accurate.

@mgarrard mgarrard added the bug Something isn't working label Feb 28, 2024

ligerlac commented Apr 1, 2024

The problem appears to be that get_contour_plot() displays the model's training data, which does not include the very last trial. I created a PR here: #2305
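The mechanism described above can be sketched in plain Python. This is a toy model, not Ax's real internals: the point is that the surrogate used for plotting is the one fit when the latest trial was *generated*, so its training-data snapshot was taken before that trial completed, leaving the plot one trial short.

```python
# Toy reproduction of the off-by-one: the model is refit at generation time,
# before the newly generated trial completes. `ToyModel` and the loop below
# are illustrative stand-ins, not Ax classes.

class ToyModel:
    def __init__(self, observed_trials):
        # Snapshot of completed trials available at fit time.
        self.training_data = list(observed_trials)

completed = []
model = None
for trial_index in range(10):
    if trial_index >= 5:  # BayesOpt steps refit on the data seen so far
        model = ToyModel(completed)
    completed.append(trial_index)  # the trial completes *after* generation

# A contour plot drawn from model.training_data misses the last trial:
assert len(completed) == 10
assert len(model.training_data) == 9
```

This matches the observed behavior of (number of trials - 1) points on the response surface, and why refitting after the final completion (as the PR proposes) would restore the missing trial.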

@mgarrard mgarrard self-assigned this Apr 22, 2024