
Add hyperparameter importance chart #54

Merged

merged 9 commits on Mar 14, 2021

Conversation

chenghuzi
Collaborator

Contributor License Agreement

This repository (optuna-dashboard) and Goptuna share common code.
This pull request may therefore be ported to Goptuna.
Make sure that you understand the consequences concerning licenses, and check the box below if you accept the terms before creating this pull request.

  • I agree this patch may be ported to Goptuna by other Goptuna contributors.

Reference Issues/PRs

Fixes #25

Add hyperparameter importance chart

There are two ways to add the hyperparameter importance chart. One is to implement, in TypeScript, an evaluator that completely mirrors the one used by Optuna; the other is to call the optuna.importance.get_param_importances API directly.

The former keeps the logic cleaner but involves extra computation in the browser, and a complete TypeScript implementation of parameter importance would need an update whenever the Optuna API changes.

The latter is much more flexible and only requires introducing another API call, but the downside is that it takes longer for the server to return the importance result.

Here we take the second approach and consider only the single-objective case. For the future multi-objective case, we can extend the API with query parameters representing the params and target selected by the user.

@c-bata
Member

c-bata commented Mar 10, 2021

Wow, great! I'll review tomorrow.

> There are two ways to add the hyperparameter importance chart. One is to implement, in TypeScript, an evaluator that completely mirrors the one used by Optuna; the other is to call the optuna.importance.get_param_importances API directly.
>
> The former keeps the logic cleaner but involves extra computation in the browser, and a complete TypeScript implementation of parameter importance would need an update whenever the Optuna API changes.
>
> The latter is much more flexible and only requires introducing another API call, but the downside is that it takes longer for the server to return the importance result.
>
> Here we take the second approach and consider only the single-objective case. For the future multi-objective case, we can extend the API with query parameters representing the params and target selected by the user.

I completely agree with your decision. It is quite difficult to port an importance evaluator to TypeScript because the fANOVA evaluator requires a random forest regression model.

Two review threads on optuna_dashboard/app.py (outdated, resolved)
chenghuzi and others added 5 commits March 11, 2021 18:42
Co-authored-by: Masashi Shibata <c-bata@users.noreply.github.com>
Co-authored-by: Masashi Shibata <c-bata@users.noreply.github.com>
Co-authored-by: Masashi Shibata <c-bata@users.noreply.github.com>
Member

@c-bata c-bata left a comment

Overall looks good and easy to read! I left some minor suggestions.

plotParamImportances(paramsImportanceData)
}
fetchAndPlotParamImportances(studyId)
})
Member

(attached video: param-importance.mp4)

Please set deps to avoid sending API requests at every render.

Suggested change:
-   })
+   }, [studyId])

Collaborator Author

Oh thanks! I missed that part; re-fetching on every render would be unnecessary.

Collaborator Author

I was just thinking that sometimes studyId may not be the right dependency: trials can get updated while studyId stays the same, and the parameter importances would then need to be re-evaluated.

Review thread on optuna_dashboard/app.py (outdated, resolved)
chenghuzi and others added 4 commits March 13, 2021 13:15
Co-authored-by: Masashi Shibata <c-bata@users.noreply.github.com>
Co-authored-by: Masashi Shibata <c-bata@users.noreply.github.com>
Member

@c-bata c-bata left a comment

LGTM!! For further improvements, we can work on the following tasks in separate pull requests:

  • In-memory cache: get_param_importances() is computationally expensive because it trains a RandomForest model, so it's better to cache the result on the server side, the same way trials are cached.
  • Multi-objective support: Optuna's importance module does not seem to support multi-objective studies yet, so we would need to send a pull request to Optuna for this. (Update: it seems to be supported.)
  • Calculate hyperparameter importances in WebAssembly: after I implement the RandomForest and fANOVA algorithms in Goptuna, we can calculate hyperparameter importances in WebAssembly. It's technically a bit difficult, but interesting.

@c-bata c-bata merged commit 03e176b into optuna:main Mar 14, 2021
@c-bata c-bata linked an issue Mar 14, 2021 that may be closed by this pull request
2 tasks
@c-bata c-bata added this to the v0.4.0 milestone Apr 7, 2021
Successfully merging this pull request may close these issues:

[Feature] Hyperparameter importance support