
Getting information about each trial as soon as it completes? #302

Closed
hbredin opened this issue Jan 15, 2019 · 5 comments


@hbredin
Contributor

hbredin commented Jan 15, 2019

It would be nice to have a programmatic way to get information about each trial as soon as it completes (at least in the sequential optimization case...).

Something like:

for trial in study.optimize_iter(objective, n_trials=10):
    # trial.params contains the set of parameters that were last tested
    # trial.value contains the corresponding objective value
    # trial.trial_id contains the trial ID
    pass

My main use case for this is to be able to run some kind of "callback function" every time a new best set of hyper-parameters is found.

As far as I can tell, there is no easy way to do this -- except perhaps by handling it directly inside the objective function, which is not very convenient.

Does that make sense?

@g-votte
Member

g-votte commented Jan 16, 2019

Sounds cool. The Optuna dev team will discuss possible designs and implementations for the proposed feature.

@hbredin
Contributor Author

hbredin commented Jan 16, 2019

Great. Thanks. Two possible API designs that I can think of are:

  1. making .optimize iterable thanks to a boolean flag:
for trial in study.optimize(objective, n_trials=10, iter=True):
    # trial.params contains the set of parameters that were last tested
    # trial.value contains the corresponding objective value
    # trial.trial_id contains the trial ID
    pass
  2. making .optimize support an optional callback parameter:
def callback(trial):
    # trial.params contains the set of parameters that were last tested
    # trial.value contains the corresponding objective value
    # trial.trial_id contains the trial ID
    pass

study.optimize(objective, n_trials=10, callback=callback)

I believe the former is more powerful and flexible than the latter.
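For illustration, the two proposed designs can be sketched in plain Python, independent of Optuna. Everything here (`run_trial`, `optimize_iter`, `optimize`) is a hypothetical stand-in for the real optimizer internals, not Optuna code:

```python
import random

def run_trial(objective, trial_id):
    # Stand-in for one optimization trial: sample parameters, evaluate.
    params = {"x": random.uniform(-10, 10)}
    value = objective(params)
    return {"trial_id": trial_id, "params": params, "value": value}

def optimize_iter(objective, n_trials):
    # Design 1: a generator that yields each trial as soon as it completes,
    # so the caller drives the loop.
    for trial_id in range(n_trials):
        yield run_trial(objective, trial_id)

def optimize(objective, n_trials, callback=None):
    # Design 2: the optimizer drives the loop and invokes an optional
    # callback after each completed trial.
    for trial_id in range(n_trials):
        trial = run_trial(objective, trial_id)
        if callback is not None:
            callback(trial)

objective = lambda params: (params["x"] - 2) ** 2

# Iterator style: per-trial logic lives in the caller's loop body.
for trial in optimize_iter(objective, n_trials=3):
    print(trial["trial_id"], trial["value"])

# Callback style: per-trial logic is passed in as a function.
optimize(objective, n_trials=3,
         callback=lambda t: print(t["trial_id"], t["value"]))
```

Note that design 1 can trivially emulate design 2 (call the function inside the loop body), which is one way to see the "more powerful and flexible" claim above.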

@g-votte
Member

g-votte commented Jan 17, 2019

Thanks for your suggestion. I think option 2 is preferable from the perspective of implementation cost and compatibility with the current Optuna API.

Out of curiosity, in your use case, what are you trying to do in the callback function?

@hbredin
Contributor Author

hbredin commented Jan 17, 2019

Here are a few things the callback function might do:

  • every time a better set of hyper-parameters is found, run a validation experiment (possibly asynchronously)
  • log trial details to tensorboard (e.g. using tensorboardX)
  • display a personalized progress report to the user (instead of Optuna's built-in logging)

The last point might be tricky to do with option 2 (callback) but very easy with option 1 (iterator).
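The first use case (acting only when a new best is found) fits the callback design naturally if the callback tracks the best value seen so far. A minimal sketch, again with hypothetical names rather than Optuna's actual API, assuming minimization and trials represented as dicts with a "value" key:

```python
import math

class BestValueCallback:
    """Invoke `on_improvement` only when a trial beats the best value
    seen so far (assuming minimization)."""

    def __init__(self, on_improvement):
        self.best_value = math.inf
        self.on_improvement = on_improvement

    def __call__(self, trial):
        if trial["value"] < self.best_value:
            self.best_value = trial["value"]
            self.on_improvement(trial)

# Example: record each improving trial (e.g. to launch a validation run).
improvements = []
cb = BestValueCallback(improvements.append)
for trial in [{"value": 3.0}, {"value": 5.0}, {"value": 1.5}]:
    cb(trial)
print([t["value"] for t in improvements])  # -> [3.0, 1.5]
```

The same pattern works asynchronously if `on_improvement` submits work to a thread pool or job queue instead of running inline.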

@sile
Member

sile commented Oct 1, 2019

I'm closing this issue because this feature was implemented in #480.
Feel free to re-open it if you have further concerns.
