Support for state-of-the-art hyperparameter optimization packages #2
It would be nice to have the option to select a state-of-the-art technique for hyperparameter optimization, such as:

https://scikit-optimize.github.io/stable/
https://github.com/optuna/optuna

or, maybe best of all (it should be a drop-in replacement for scikit-learn Grid/Random search, while supporting the advanced techniques from the packages above):

https://github.com/ray-project/tune-sklearn

Comments
Hi, thanks for your feedback. I don't plan to add these as default add-ons in the library. If you are looking for parameter tuning only, using just the cited hyperparameter optimization techniques/libraries may be optimal. With parameter tuning + feature selection, the feature selection process is applied iteratively for each combination of parameters: for each parameter configuration, we search for the best features and store the results, and at the end the best configuration is retrieved. The best option I see is to use tune-sklearn with BoostRFE or BoostBoruta as the estimator. To make this possible, I will certainly have to add set_params/get_params methods to BoostRFE and BoostBoruta. I think I will do that, and if the solution is stable I'll release a new version of the library. If you support the project, don't forget to leave a star ;-)
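To make the interaction concrete, here is a conceptual sketch of that loop (illustrative only, not shap-hypetune's actual code; `tune_with_feature_selection`, `select_features`, and `evaluate` are hypothetical names):

```python
# Conceptual sketch: feature selection runs inside every parameter config.
from itertools import product

def tune_with_feature_selection(param_grid, select_features, evaluate):
    """For every parameter config, run feature selection, then score the pair."""
    results = []
    for values in product(*param_grid.values()):
        config = dict(zip(param_grid, values))
        features = select_features(config)    # e.g. RFE/Boruta under this config
        score = evaluate(config, features)    # validation score for this pair
        results.append((score, config, features))
    return max(results, key=lambda r: r[0])   # best (score, config, features)
```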
@oldrichsmejkal you can use tune-sklearn (and other sklearn objects) by passing shap-hypetune estimators as wrappers. Below is a full example:
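A minimal sketch of that setup, assuming BoostRFE exposes sklearn-style get_params/set_params (so the wrapped booster's parameters are addressable as `estimator__<param>`); the dataset, search ranges, and trial count here are illustrative:

```python
from lightgbm import LGBMClassifier
from shaphypetune import BoostRFE
from tune_sklearn import TuneSearchCV
from sklearn.datasets import make_classification

# Toy data for illustration only.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# BoostRFE wraps the booster and reruns recursive feature elimination
# for each new parameter configuration it is fitted with.
model = BoostRFE(LGBMClassifier(n_estimators=150, random_state=0),
                 min_features_to_select=5, step=1)

# tune-sklearn drives the search; "optuna" and "bayesian" (scikit-optimize)
# are among the supported back-ends.
search = TuneSearchCV(
    model,
    param_distributions={
        "estimator__learning_rate": (0.01, 0.3),  # continuous range
        "estimator__num_leaves": (15, 63),        # integer range
    },
    search_optimization="optuna",
    n_trials=10,
    cv=3,
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_)
```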