How to save and load experiment/model from optimize
#87
Hi, @HanGuo97!

```python
import numpy as np
from ax import load
from ax.plot.trace import optimization_trace_single_method
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render

experiment = load("experiment.json")  # `load` docs: https://ax.dev/api/index.html#ax.load

# Plot the running best objective value observed over the experiment's trials.
best_objectives = np.array([[trial.objective_mean for trial in experiment.trials.values()]])
best_objective_plot = optimization_trace_single_method(
    y=np.minimum.accumulate(best_objectives, axis=1),
    optimum=hartmann6.fmin,
    title="Model performance vs. # of iterations",
    ylabel="Hartmann6",
)
render(best_objective_plot)
```

For the response surface contour plots, we will provide an ability to reload those plots soon.
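For completeness, a minimal sketch of how such an experiment JSON file can be produced in the first place with Ax's `save` counterpart; the file name is just illustrative:

```python
from ax import save

# Persist the experiment locally so it can later be re-loaded with `load`.
save(experiment, "experiment.json")
```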
Thanks for the response!

@lena-kashtelyan any updates on this front? Specifically, on point 2? How can one obtain the best parameters and values from just an experiment object? Thanks!

@lena-kashtelyan Have you got any updates on these features? I'm trying to do the same thing.
Hi, @ksanjeevan, @theinlinaung2010! Sorry to have missed your tags; the issue was closed, so we weren't notified in time. Feel free to reopen the issue if there is a follow-up in the future, to make sure we get back to you as soon as we can!

For 2), you can use

To use storage functionality for the experiment and models used in optimization, I would recommend using our Service API, which is well integrated with our storage layer (you can store locally to a .json file or to an SQL backend). You can check out our API comparison code snippets and the Service API tutorial to see how to get started and how to leverage storage. Let us know if you have further questions; I'll keep the issue open for now.
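A minimal sketch of that Service API storage flow, assuming the `AxClient` JSON helpers (`save_to_json_file` / `load_from_json_file`) are available in your Ax version; the experiment setup and file names are illustrative:

```python
from ax.service.ax_client import AxClient

ax_client = AxClient()
ax_client.create_experiment(
    name="illustrative_experiment",  # hypothetical setup, for illustration only
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objective_name="objective",
    minimize=True,
)

# ... run trials via get_next_trial / complete_trial ...

# Persist the client state (experiment, data, generation strategy) to JSON.
ax_client.save_to_json_file("ax_client_snapshot.json")

# Later, possibly on another machine, restore the client and continue.
restored_client = AxClient.load_from_json_file("ax_client_snapshot.json")
```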
Closing since it appears that my last answer resolved the questions. @ksanjeevan, @theinlinaung2010, feel free to reopen if you have follow-ups!
Hi @lena-kashtelyan, I was searching through the web to see if we can explicitly save the models, and this looked like the most similar issue. I was wondering if we can locally save models, such as

If the model cannot be saved, is there a way where we can preserve the

Let me know if anything is unclear, or if you want me to open a separate issue!
Hi @nwrim! The easiest way to accomplish what you want is probably to save the data that you're using to fit the model (rather than the model itself), and then you can refit it whenever you want. If you want to take advantage of Ax's storage to do so, you would use
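A minimal sketch of that save-the-data-and-refit approach, assuming the JSON `save`/`load` helpers and the `get_GPEI` factory (a later comment in this thread uses the same pattern; the file path is illustrative):

```python
from ax import load, save
from ax.modelbridge.factory import get_GPEI

# Persist the experiment, which carries the data needed to refit the model.
save(experiment, "experiment.json")

# Later: reload the experiment and refit a fresh GP/EI model from its data.
experiment = load("experiment.json")
model = get_GPEI(experiment, experiment.fetch_data())
```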
Just wanted to quickly chime in that you don't need to have an evaluation function to use the Service API –– it's actually a perfect fit for the case of "do some task, get results, feed them back to Ax model". The

So if you wanted to leverage the Service API and its convenient storage setup, you totally could! An "evaluation function" is just a stub for the purpose of the tutorial, to show that one can do whatever one needs with the parameter configuration in the given trial to get the data to log back to Ax : ) This has been a source of confusion for multiple folks, it seems, so we'll think about how to make it clearer in the tutorial.

Even with the Service API, though, what would be happening under the hood is not storage of the model, but what @ldworkin said above –– under the hood we will re-fit the model to new data when data is available and just store information about the model settings (and a bit of its state in some cases). So it would just be a convenience wrapper around that same functionality.
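A minimal sketch of that "do some task, get results, feed them back" loop with the Service API, reusing the `ax_client` from the earlier sketch; `run_my_task` is a hypothetical stand-in for whatever actually produces your measurements:

```python
# Ask Ax for the next configuration to evaluate.
parameters, trial_index = ax_client.get_next_trial()

# Run the task however you like, entirely outside of Ax (hypothetical helper).
result = run_my_task(parameters)

# Log the observed outcome back; Ax refits the model from the accumulated data.
ax_client.complete_trial(trial_index=trial_index, raw_data=result)

# Arms that the model did not suggest can also be logged via attach_trial.
parameters, trial_index = ax_client.attach_trial(parameters={"x1": 0.1, "x2": 0.9})
ax_client.complete_trial(trial_index=trial_index, raw_data=0.42)
```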
Hi @ldworkin and @lena-kashtelyan! Thanks for the response! I really appreciate it.

The reason why I wanted to store the model, rather than the experiment + data itself, was that the model seemed to generate different suggestions when

For using the Service API, the greatest reason why I did not use it was that I thought adding arms that the model did not suggest (guessing through

Thanks again for the response!
Are you concerned about the model predictions being reproducible, or the candidates that the model generates? In either case, it should generally be possible to make that deterministic by fixing the random seed in the proper places (so long as you pass in the same data, of course). If that would be useful, we could give you some pointers on how to achieve this.
Hi @Balandat! Yes, we are essentially concerned about reproducibility, since we will likely have to release data from all steps (predictions, candidates, etc.). It would be great if you could point us to seeding the models. I used the
You can also pass
So the brute force approach would be to just try to set

For the dev API, it's possible to pass the args down to the ModelBridge's

(lots of dicts, I know). I don't think we have this exposed in the Service API at this point, though; it may make sense to add an optional
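A minimal sketch of the brute-force option only (the dict-based dev-API route mentioned above is not reproduced here); fixing the global RNG seeds before fitting and candidate generation is an assumption about where the randomness enters, not an official Ax recipe:

```python
import random

import numpy as np
import torch

# Fix all global RNG state before model fitting and candidate generation.
seed = 12345
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
```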
@nwrim, the approach @Balandat suggests above is actually also possible in the Service API, but one would have to manually construct the generation strategy and pass it to AxClient. Here is how:
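The code block from the original comment is not preserved in this copy of the thread; below is a minimal sketch of the usual `GenerationStrategy` construction, assuming the registry-based API (step argument names have changed across Ax versions, and the Sobol `seed` is just one illustrative place to pin down randomness):

```python
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient

gs = GenerationStrategy(
    steps=[
        # Quasi-random initialization; Sobol accepts a `seed` for reproducibility.
        GenerationStep(model=Models.SOBOL, num_trials=5, model_kwargs={"seed": 12345}),
        # Bayesian optimization for all remaining trials; model_kwargs and
        # model_gen_kwargs are the hooks through which further arguments are
        # forwarded to the underlying model and its candidate generation.
        GenerationStep(model=Models.GPEI, num_trials=-1),
    ]
)

# Hand the custom strategy to the Service API client.
ax_client = AxClient(generation_strategy=gs)
```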
This is so, so helpful! I will be trying out the suggested approaches, both in the dev API and the Service API, and let you know if anything does not pan out. Please feel free to close the issue!

Okay! Don't hesitate to reopen if you run into any issues or have follow-up questions : )
This is how I recreated a model from experiment data. Did not need to call `experiment.attach_data()`; only used the `save` and `load` functions from Ax:

```python
from ax import save, load
from ax.modelbridge.factory import get_GPEI

experiment = load("experiment.json")
m = get_GPEI(experiment, experiment.fetch_data())
```

Visualize:

```python
from ax.plot.contour import plot_contour
from ax.utils.notebook.plotting import render

render(plot_contour(model=m, param_x='lr', param_y='momentum', metric_name='accuracy'))
```
@lena-kashtelyan I have a large model that takes a while to fit, since it has ~2000 datapoints across 3 objectives, with ~22 parameters (a few are categorical) and two parameter constraints. Do you have any suggestions for saving and reloading the model without refitting (Service API)? Something hacky or non-portable to other machines would be fine. For example, do you know if using pickle would work? If not, I'll probably give it a try soon.
Kindly asking if there is a solution for this.
Hi, from the documentation, the `optimize` function returns the `(best_parameters, values, experiment, model)` tuple. I'm wondering what the best practices are for saving these values (e.g. for visualization on a different machine)? Also, is it possible to interrupt a model and later resume from its state in the `optimize` API? Thanks!
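For context, a minimal sketch of the Loop-API call the question refers to, with the returned experiment persisted via `save`; the parameter setup, toy evaluation function, and file name are illustrative only:

```python
from ax import optimize, save

best_parameters, values, experiment, model = optimize(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    evaluation_function=lambda p: (p["x1"] - p["x2"]) ** 2,  # toy objective
    minimize=True,
    total_trials=20,
)

# The experiment (search space + observed data) can be saved and reloaded
# elsewhere; the fitted model is then re-created from that data, rather
# than being serialized directly.
save(experiment, "experiment.json")
```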