I like the integrated approach of your autoML package.
Could optimization be improved (lower error with less training budget) compared to random tuning and F-race optimization by including mlrHyperopt / mlrMBO as an option?
mlrHyperopt ships with access to standard tuning ranges for several common models, drawn from its user-fed hyperparameter "database": http://mlrhyperopt.jakob-r.de/parconfigs
I have used mlrHyperopt from time to time and found the results from that hyperparameter tuning API quite useful.
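To make the suggestion concrete, here is a minimal sketch of how mlrHyperopt's defaults database can be used: `hyperopt()` looks up a recommended parameter configuration for the learner and tunes it in one call. This assumes the mlrHyperopt and mlr packages are installed; the iris task and SVM learner are illustrative only.

```r
# Minimal sketch: one-call tuning with mlrHyperopt's crowd-sourced
# parameter configurations. Assumes mlrHyperopt and mlr are installed;
# the task and learner below are only illustrative.
library(mlr)
library(mlrHyperopt)

task <- makeClassifTask(data = iris, target = "Species")

# hyperopt() fetches a recommended tuning range for the learner from the
# parameter-configuration database and runs the tuning.
res <- hyperopt(task, learner = "classif.svm")
res$x  # best hyperparameter values found
res$y  # resampled performance of that configuration
```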
While I have used mlrHyperOpt to generate most of the hyperparameter grids internally in the package, I do not use it for all models.
Bayesian optimisation can be included; I excluded it because of installation issues when testing the package, but it would be quite quick to implement again. At the moment, iterated F-racing is also included for cases where random tuning does not suffice.
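For reference, all three strategies mentioned here are exposed as interchangeable tune controls in mlr, so swapping in mlrMBO-backed Bayesian optimisation is mostly a matter of passing a different control object. The sketch below is illustrative, assuming mlrMBO (with a surrogate backend such as DiceKriging), irace and kernlab are installed; the budget values are arbitrary.

```r
# Sketch: random search, iterated F-racing and mlrMBO-backed Bayesian
# optimisation as interchangeable tune controls in mlr. Assumes mlrMBO,
# irace and kernlab are installed; budget values are arbitrary.
library(mlr)

task <- makeClassifTask(data = iris, target = "Species")
lrn  <- makeLearner("classif.ksvm")
ps   <- makeParamSet(
  makeNumericParam("C",     lower = -5, upper = 5, trafo = function(x) 2^x),
  makeNumericParam("sigma", lower = -5, upper = 5, trafo = function(x) 2^x)
)

ctrl.random <- makeTuneControlRandom(maxit = 50)           # random tuning
ctrl.irace  <- makeTuneControlIrace(maxExperiments = 200)  # iterated F-racing
ctrl.mbo    <- makeTuneControlMBO(budget = 50)             # Bayesian optimisation (mlrMBO)

# Only the `control` argument changes between strategies.
res <- tuneParams(lrn, task, resampling = cv3, par.set = ps,
                  control = ctrl.mbo, measures = mmce)
```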