
warm starting experiment with historical records #1297

Closed
chanansh opened this issue Nov 30, 2022 · 14 comments
Comments

@chanansh

What is the best practice to warm-start a model with past historical observations?

@bernardbeckerman
Contributor

Hi @chanansh, can you help clarify some terminology? When you say "warm-start a model," can you be a little more precise? For example, are you trying to perform hyperparameter optimization on a Machine Learning model? Or are you referring to some other type of model? And when you're talking about past historical observation, what type of data/observation are you talking about precisely? The more you can help clarify your use-case, the more precisely we will be able to help answer your question. Thanks!

@bernardbeckerman bernardbeckerman self-assigned this Nov 30, 2022
@bernardbeckerman bernardbeckerman added the question Further information is requested label Nov 30, 2022
@bernardbeckerman
Contributor

If you're just looking to attach iterations to your experiment, you can simply use attach_trial. To do so, please see this section of the Service API tutorial.
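
A minimal sketch of that flow, assuming hypothetical parameter names and values:

# Attach a historical parameterization as its own trial...
parameters, trial_index = ax_client.attach_trial(
    parameters={"x1": 0.3, "x2": 0.7},  # hypothetical historical x
)
# ...and report the objective value already observed for it.
ax_client.complete_trial(
    trial_index=trial_index,
    raw_data={"objective": (1.23, 0.0)},  # hypothetical historical f(x) as (mean, sem)
)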

Closing this out for now but please feel free to comment and/or open this back up if you have further questions!

@chanansh
Author

chanansh commented Dec 6, 2022

We want to optimize f(x) with a two-step iteration:
x = acquisition
Try f(x)

But I have past historical (x, f(x)) tuples.
I have seen an API for trying predefined x, but I didn't see a way to feed in past observations.

@bernardbeckerman
Contributor

How are you defining your metric(s)? If you're defining custom metrics as in the developer API tutorial ([link](https://ax.dev/tutorials/gpei_hartmann_developer.html#8.-Defining-custom-metrics)), you could include in your fetch_trial_data method a lookup of the parameterizations for which you already have results (your historical xs), and return the corresponding results (your historical f(x)s) instead of fetching the results as you normally would. You'll first need to seed the experiment with the trials parameterized by your historical xs, and complete those trials as follows:

my_trial.mark_running(no_runner_required=True)
my_trial.mark_completed()
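
As a rough sketch of the lookup idea (following the custom-metric pattern from the developer tutorial; the historical_results dict and the x1/x2 parameter names here are hypothetical, and the exact Metric interface may differ across Ax versions):

import pandas as pd
from ax.core.data import Data
from ax.core.metric import Metric

# Hypothetical store of historical observations, keyed by parameterization.
historical_results = {(0.3, 0.7): 1.23, (0.1, 0.9): 0.45}

class HistoricalLookupMetric(Metric):
    def fetch_trial_data(self, trial, **kwargs):
        records = []
        for arm_name, arm in trial.arms_by_name.items():
            key = (arm.parameters["x1"], arm.parameters["x2"])
            records.append({
                "arm_name": arm_name,
                "metric_name": self.name,
                "trial_index": trial.index,
                # Return the stored historical f(x) instead of evaluating anew.
                "mean": historical_results[key],
                "sem": 0.0,
            })
        return Data(df=pd.DataFrame.from_records(records))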

Hope that helps, and let me know how I can further assist : )

@chanansh
Author

Hmm, your code snippet does not show where I would use my past x_i or f(x_i).

@lena-kashtelyan
Contributor

@chanansh, which Ax API are you using? Without knowing this, it is not possible to give you the code snippet you are looking for.

If Service API, you would do this, as @bernardbeckerman pointed out above:

# Step 1, as exemplified here: 
# https://ax.dev/tutorials/gpei_hartmann_service.html#Service-API-Exceptions-Meaning-and-Handling
params, trial_index = ax_client.attach_trial(...)
# Step 2, as exemplified here:
# https://ax.dev/tutorials/gpei_hartmann_service.html#4.-Run-optimization-loop
ax_client.complete_trial(trial_index=trial_index, ...)

Does this answer your question?

@amgonzalezd

Hi, I think I have a similar question. I would like to apply BO to a process for which I already have some historical data, i.e. I have x and f(x).
As a result I would like to get something like in this picture:
[screenshot]

Is it possible?

@amgonzalezd

Perhaps a few more words on the motivation/idea behind it.
For a production process, I usually have some historical data that I would like to include in my BO experiments, to take advantage of this already existing information.
It would be great to have some kind of "HistoricalModel" (or something like that) with its respective ModelBridge. You could instantiate it by passing a filename and some metadata (features and objective). The gen() method could sample n points from the data, and the objective values would somehow also have to be passed to the experiment in order to complete the trial.

Perhaps there is another way to handle this, by overriding or "faking" the trials? Or by setting the runner differently? I'm new to Ax and don't have the whole picture of where the best place to implement something like this would be.

@bernardbeckerman
Contributor

Hi there, apologies for taking so long on this. Please see the example below, which is based on the Service API tutorial. I've copied the first part from tutorial sections 1 through 3, then insert one manually specified arm between steps 3 and 4, and then perform the optimization loop from step 4. Please let me know if this addresses your questions, and anything else I can help you with.

import numpy as np
from ax.core.arm import Arm
from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.utils.measurement.synthetic_functions import hartmann6

ax_client = AxClient()
ax_client.create_experiment(
    name="hartmann_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x3",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x4",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x5",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x6",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
    ],
    objectives={"hartmann6": ObjectiveProperties(minimize=True)},
    parameter_constraints=["x1 + x2 <= 2.0"],  # Optional.
    outcome_constraints=["l2norm <= 1.25"],  # Optional.
)

def evaluate(parameters):
    x = np.array([parameters.get(f"x{i+1}") for i in range(6)])
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x ** 2).sum()), 0.0)}

#### BEGIN ADDED CODE ####

# Manually add a new trial to our experiment.
my_new_trial = ax_client.experiment.new_trial()

# Add a custom generation method (optional).
my_new_trial._properties["generation_model_key"] = "my custom method"

# Manually add an arm (containing our parameterization) to our new trial.
my_new_trial.add_arm(
    arm=Arm(
        parameters={"x1": 0.5, "x2": 0.5, "x3": 0.5, "x4": 0.5, "x5": 0.5, "x6": 0.5},
        name="mid",
    )
)

# Trial must be running before it can be completed.
my_new_trial.mark_running(no_runner_required=True)

# These values for hartmann6 and l2norm were obtained by manually running this example's `evaluate` function.
ax_client.complete_trial(
    trial_index=my_new_trial.index,
    raw_data={
        "hartmann6": (-0.5053149917022333, 0.0),
        "l2norm": (1.224744871391589, 0.0),
    },
)

#### END ADDED CODE ####

for i in range(25):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))

Once you run that, you can get the experiment dataframe using

from ax.service.utils.report_utils import exp_to_df
exp_to_df(ax_client.experiment)

which should look something like this:

[screenshot of the resulting experiment dataframe]

To do this for multiple arms, you can execute the added code above in a loop; a rough sketch follows. Please let me know if that helps!
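
A minimal sketch of such a loop, assuming hypothetical historical parameterizations and metric values (with the outcome constraint above, you would supply l2norm observations alongside hartmann6):

# Hypothetical historical observations: (parameterization, raw_data) pairs.
historical_observations = [
    (
        {"x1": 0.1, "x2": 0.2, "x3": 0.3, "x4": 0.4, "x5": 0.5, "x6": 0.6},
        {"hartmann6": (-0.21, 0.0), "l2norm": (0.95, 0.0)},
    ),
    (
        {"x1": 0.9, "x2": 0.8, "x3": 0.7, "x4": 0.6, "x5": 0.5, "x6": 0.4},
        {"hartmann6": (-0.05, 0.0), "l2norm": (1.20, 0.0)},
    ),
]

for i, (params, raw_data) in enumerate(historical_observations):
    trial = ax_client.experiment.new_trial()
    trial._properties["generation_model_key"] = "my custom method"
    # A unique arm name per trial avoids collisions between identical names.
    trial.add_arm(arm=Arm(parameters=params, name=f"historical_{i}"))
    trial.mark_running(no_runner_required=True)
    ax_client.complete_trial(trial_index=trial.index, raw_data=raw_data)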

@amgonzalezd

Hi, yes that worked!
I'm just having trouble running the further trials.
If I want to use, for example, a BoTorch model, how would I do it? I haven't figured out how to set up, in the Service API, the configuration that I would make in the Developer API (like a BoTorch ModelBridge with a custom GP as Surrogate, and an OptimizationConfig with a specific Metric).
I've also tried to begin a new experiment with the Developer API and use the function warm_start_from_old_experiment, but there's a problem with the metrics.
I believe it has to do with the naming of the arm done here:
# Manually add an arm (containing our parameterization) to our new trial.
my_new_trial.add_arm( arm=Arm( parameters={"x1": 0.5, "x2": 0.5, "x3": 0.5, "x4": 0.5, "x5": 0.5, "x6": 0.5}, name="mid", ) )

because while fetching the data for fitting the GP surrogate, it just takes one single point (I've loaded a lot of "historical" points). Is that possible?
If not, I'd be glad if you could show me how to solve this.

Thanks a lot!!!!

@bernardbeckerman
Contributor

Let me parse this into two separate questions. Please let me know if I missed anything you're wondering about.

  1. How do I do Bayesian Optimization using the Service API?

The function AxClient.create_experiment() in the example above creates the experiment and chooses a generation strategy based on the attributes of the experiment, via AxClient._set_generation_strategy. This automated generation strategy selection chooses BO (using BoTorch) whenever possible, so you shouldn't have to do anything special to start performing BO after you load your historical examples, although there are some cases (especially large search spaces, or search spaces with many unordered choice parameters) where a pseudo-random Sobol generation strategy is favored.

Note that BO experiments generally start with a few Sobol trials to seed the BO model, so you may need to execute a certain number of trials before the BO model takes effect. You can see this number by printing ax_client.generation_strategy._steps[0].num_trials.

If you'd like to customize your generation strategy, you can pass kwargs to the function choose_generation_strategy by supplying AxClient.create_experiment() with a dict, choose_generation_strategy_kwargs, containing the kwargs you'd like to specify. It is possible to configure your own GenerationStrategy and supply it to the AxClient constructor; however, users tend to obtain better results by customizing their generation strategies via choose_generation_strategy_kwargs and letting choose_generation_strategy do most of the heavy lifting.
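
For illustration, a rough sketch of the kwargs route (the experiment definition here is simplified and hypothetical; num_initialization_trials is one kwarg choose_generation_strategy accepts, though the available options may vary by Ax version):

from ax.service.ax_client import AxClient, ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="warm_start_example",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objectives={"objective": ObjectiveProperties(minimize=True)},
    # Forwarded to choose_generation_strategy, e.g. to shorten the initial Sobol phase.
    choose_generation_strategy_kwargs={"num_initialization_trials": 3},
)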

  2. I'm running into an issue while warm-starting after having also added custom arms: while fetching the data for fitting the GP surrogate, it just takes one single point, even though I've loaded a lot of "historical" points. Is that possible?

Can you share your example, and how you can tell that the manually added arms aren't making it into the GP fit? That would be super useful!

@amgonzalezd

@bernardbeckerman thank you!

  1. got it! thanks!
  2. Here is my example:

I have some historical data:
[screenshot of the historical data]

and would like to attach these trials to my experiment using the Service API.
The further trials should be generated with the BOTORCH_MODULAR model, skipping Sobol sampling, since by then I already have some trials. Then:

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

generation_strategy = GenerationStrategy(
    name="botorch",
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
        )
    ],
)

ax_client = AxClient(generation_strategy=generation_strategy)
ax_client.create_experiment(
    name="mixed",
    parameters=parameters,
    objectives={"AdaptedBranin": ObjectiveProperties(minimize=True)}
)

First I attach the historical trials as in the example you gave above:

from ax.core.arm import Arm

for index, row in X.iterrows():  # X is a DataFrame of features
    objective = Y.loc[index]  # Y holds the AdaptedBranin objective values
    trial = ax_client.experiment.new_trial()
    trial._properties["generation_model_key"] = "historical"
    trial.add_arm(arm=Arm(parameters=row.to_dict(), name="mid"))
    trial.mark_running(no_runner_required=True)
    ax_client.complete_trial(
        trial_index=trial.index,
        raw_data={"AdaptedBranin": objective},
    )

And they are being added correctly (with the exception that the "generation_model_key" is not being set properly, but that isn't critical):
[screenshot]

Afterwards I continue with the normal client loop and my AdaptedBranin evaluation test function:

import numpy as np
from ax.utils.measurement.synthetic_functions import from_botorch

test_function = AdaptedBranin()

def evaluate(parameters):
    x = np.array([parameters.get(f"x{i+1}") for i in range(test_function.dim)])
    tf_botorch = from_botorch(botorch_synthetic_function=test_function)
    return {
        "AdaptedBranin": (tf_botorch(x), None)
    }

for i in range(25):
    parameters, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))

The problem is that when the BoTorch model fetches the data, it takes only the first value of the historical data to train on, and then I get wrong results:
[screenshot]

I've found that the problem lies in the naming of the arms: all the arms are called "mid".
If I rename the 'historical' arms by setting
trial.add_arm(arm=Arm(parameters=row.to_dict(), name=f"{trial.index}_0"))

then everything works correctly
[screenshot]

Thanks!!!

@bernardbeckerman
Contributor

That's great! Yes, arms can be tricky that way; it may be worth printing a warning to the user if their arm definition collides with another's. Thanks for this detailed response and for raising these issues! Closing this out, but please let me know if you have any follow-up questions. In particular, I'm not sure why the generation_method isn't showing up correctly once you name your arms distinctly, since your code seems close to the example I posted above; this may have to do with your Ax version, since this was a relatively recent update (#1288). If you get a chance, let me know whether updating to a more recent build fixes the issue. If not, no worries, and thanks again!

@Abrikosoff

@bernardbeckerman Hi, sorry for reopening this old thread (I wanted to avoid excessive clutter), but I have a quick question regarding 'warm starting' an experiment: in my use case I do old_ax_client = AxClient.load_from_json_file(json_file), but now I want to replace the old GenerationStrategy with a new one; i.e., I have defined a new GS via

generation_strategy_new = GenerationStrategy(
                        steps=[
                            GenerationStep(
                                ...
                            ), 
                            GenerationStep(
                               ...
                            ),
                        ]
                    )

and I want to replace the GS in old_ax_client with generation_strategy_new, but keep everything else the same. How can I do this correctly? I saw that I can probably call old_ax_client._set_generation_strategy, but I am not sure whether it would be correct to just do old_ax_client._set_generation_strategy(generation_strategy_new).

What would be the correct way to go about this?
