
investigate hyperparameter optimization suggestions by Jurriaan and Berend and Carlos #34

Closed
vincentvanhees opened this issue Jul 13, 2016 · 7 comments
@vincentvanhees
No description provided.

vincentvanhees added this to the Minimal viable product milestone Jul 13, 2016
vincentvanhees self-assigned this Jul 13, 2016
dafnevk commented Jul 13, 2016

I also found this blogpost very useful:
http://blog.turi.com/how-to-evaluate-machine-learning-models-part-4-hyperparameter-tuning

dafnevk commented Jul 13, 2016

And we can look into Optunity; it also supports, for example, TPE and other optimizers.

dafnevk commented Jul 13, 2016

Ah sorry, I see that Optunity uses hyperopt under the hood for TPE, so we might run into the same problems as with hyperas (#35).

vincentvanhees commented Jul 18, 2016

Optunity allows for the CMA-ES optimizer. According to 'Algorithms for Hyper-Parameter Optimization' by James Bergstra et al.: "CMA-ES is a state-of-the-art gradient-free evolutionary algorithm for optimization on continuous domains, which has been shown to outperform the Gaussian search EDA. Notice that such a gradient-free approach allows non-differentiable kernels for the GP regression."

I struggle to digest this. Does it mean that CMA-ES can handle non-real-valued hyperparameters, like we want, or is a non-differentiable kernel something different?
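As a side note on what "gradient-free" means here, a minimal sketch of a (1+1) evolution strategy, the simplest member of the same family as CMA-ES (the function names and step-size constants below are illustrative, not taken from Optunity or any CMA-ES implementation):

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=200, seed=0):
    """Minimal (1+1) evolution strategy: a gradient-free optimizer
    for continuous domains, in the same family as CMA-ES."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        # Propose a Gaussian perturbation of the current point.
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(cand)
        if fc < fx:          # accept only improvements
            x, fx = cand, fc
            sigma *= 1.1     # widen the search after a success
        else:
            sigma *= 0.98    # narrow it after a failure
    return x, fx

# Example: minimize the sphere function without any gradients.
best_x, best_f = one_plus_one_es(lambda v: sum(t * t for t in v), [3.0, -2.0])
```

Note that it never differentiates `f`; it only compares function values, which is why such methods tolerate non-smooth objectives.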

vincentvanhees commented

Rescale is a commercial tool to train deep networks in the cloud, including Keras, Torch, ... Part of the service is Keras hyperparameter optimization: https://blog.rescale.com/deep-neural-network-hyper-parameter-optimization/. It may be good to know that these services exist.

dafnevk commented Jul 19, 2016

In that blogpost they use SMAC, which trains random forests on the results and, according to Alice Zheng's blog, handles categorical variables better.
SMAC is available in Python via the pysmac package.
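For intuition, a dependency-free sketch of the sequential model-based optimization loop behind SMAC (real SMAC fits a random forest surrogate; here a crude 1-nearest-neighbour predictor stands in, and all names are made up for illustration, not pysmac's API):

```python
import random

def smbo_minimize(f, sample, n_init=5, n_iter=20, n_cand=50, seed=0):
    """Sketch of sequential model-based optimization: evaluate some
    configurations, fit a surrogate to the observed scores, then
    repeatedly evaluate the candidate the surrogate likes best."""
    rng = random.Random(seed)
    history = []  # (config, score) pairs observed so far

    def surrogate(x):
        # Predict the score of x as the score of the closest config seen.
        nearest = min(history,
                      key=lambda h: sum((a - b) ** 2 for a, b in zip(h[0], x)))
        return nearest[1]

    for _ in range(n_init):                 # initial random design
        x = sample(rng)
        history.append((x, f(x)))
    for _ in range(n_iter):
        cands = [sample(rng) for _ in range(n_cand)]
        x = min(cands, key=surrogate)       # best candidate per the surrogate
        history.append((x, f(x)))           # evaluate it for real
    return min(history, key=lambda h: h[1])

# Toy continuous objective standing in for a validation loss.
best_cfg, best_score = smbo_minimize(
    lambda v: (v[0] - 0.3) ** 2 + (v[1] + 0.1) ** 2,
    lambda rng: [rng.uniform(-1, 1), rng.uniform(-1, 1)],
)
```

The random-forest surrogate is what lets SMAC score categorical configurations, since trees split on categories natively.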

dafnevk commented Jul 20, 2016

Another interesting blogpost: http://www.argmin.net/2016/06/20/hypertuning/ (also the comments below it).
The conclusion is that Bayesian methods such as TPE and SMAC find an optimum only somewhat faster than random search; the speedup is no more than about 2x, and random search is easily parallelizable.

It seems that TPE and SMAC are the only algorithms really suitable for the type of problem we have, with mixed categorical, discrete and continuous hyperparameters.
This paper compares the methods. SMAC seems to be better than TPE in the majority of medium/high-dimensional cases.
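For reference, the random-search baseline over such a mixed space is only a few lines; the search space below is a made-up example (not our real model's hyperparameters), and independent trials make it trivially parallelizable:

```python
import random

def random_search(f, n_trials=50, seed=0):
    """Random search over a mixed hyperparameter space: categorical,
    discrete and continuous parameters are all independent draws."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cfg = {
            "optimizer": rng.choice(["sgd", "adam", "rmsprop"]),  # categorical
            "n_layers": rng.randint(1, 4),                        # discrete
            "learning_rate": 10 ** rng.uniform(-4, -1),           # continuous, log scale
        }
        score = f(cfg)
        if best is None or score < best[1]:
            best = (cfg, score)
    return best

# Toy objective standing in for a validation loss.
def toy_loss(cfg):
    return abs(cfg["learning_rate"] - 0.01) + 0.1 * abs(cfg["n_layers"] - 2)

best_cfg, best_loss = random_search(toy_loss)
```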

@dafnevk dafnevk closed this as completed Aug 9, 2016