[ENH] optuna optimizer interface
#155
Conversation
fkiraly left a comment
Brilliant!
Just some minor remarks:
- examples style (main of py) feels a bit clunky. I think users expect jupyter. Also, do you want to order the files, e.g., is there a specific order that is good for users to look at?
- question: why do we need `torch`?
- `optuna` distinguishes samplers from optimizers - so, to avoid confusion, I would not use the word "sampler" in the class name. For instance, `CmaEs`, or `BayesianOptGP`. `GridSampler` is simply `GridSearchOptuna`, no?
- should the adapter / base class go in `opt._adapters`?
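The layout and renaming suggested in the bullets above could be sketched roughly as follows. Note that `_BaseOptunaOptimizer`, its suggested module location, and the `GridSearchOptuna` stub are all hypothetical illustrations of the idea, not the actual package API:

```python
# Hypothetical sketch of the suggested structure: a shared adapter base class
# (which would live in opt._adapters) plus concrete optimizers named after the
# algorithm rather than the optuna sampler. All names are illustrative.

class _BaseOptunaOptimizer:
    """Shared adapter for optuna-backed optimizers (would sit in opt._adapters)."""

    def __init__(self, n_trials=10):
        self.n_trials = n_trials

    def _get_sampler(self):
        # Each concrete optimizer returns the optuna sampler it wraps.
        raise NotImplementedError


class GridSearchOptuna(_BaseOptunaOptimizer):
    """Named after the algorithm (grid search), not the sampler (GridSampler)."""

    def __init__(self, param_grid, n_trials=10):
        self.param_grid = param_grid
        super().__init__(n_trials=n_trials)

    def _get_sampler(self):
        # A real implementation would return optuna.samplers.GridSampler here;
        # a plain tuple stands in for it in this sketch.
        return ("GridSampler", self.param_grid)
```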
For me it is the opposite: if a project has no examples in the form of *.py files, it is an instant turn-off. My idea of those two concepts is the following:
Examples should always be present from the start, because they are a very basic way to show how the package works. Notebooks are an additional feature for a more "guided", tutorial-like experience.
Some tests failed without it.
I'll look into this
right!
As far as I can tell, this seems incorrect: I did not find an optimizer class in the optuna docs. The samplers are the optimizers in the optuna context.
We can of course rename those classes to optimizers for our package, because we do not label them as samplers. That would be okay for me. Should we proceed this way?
I think this is not correct.
The optimization algorithm is not just the sampler, but the entire algorithm that uses a sampler and a pruner. So, yes, you can construct optimization algorithms from the samplers, but these would be distinct from the full algorithms. What I would have naively expected is a single optimizer where you can pass a sampler and a pruner. Of course, using the samplers also makes sense, but these give rise to different algorithms.
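The composition described above (one optimizer parameterized by a sampler and a pruner, mirroring how `optuna.create_study(sampler=..., pruner=...)` combines the two) can be sketched in plain Python. All class names below are illustrative stand-ins, not optuna's or the package's actual classes:

```python
# Illustrative sketch: a single optimizer class that composes a sampler
# (proposes candidates) and a pruner (discards unpromising trials), mirroring
# optuna.create_study(sampler=..., pruner=...). Names are hypothetical.

import random

random.seed(0)  # for reproducibility of the sketch


class RandomSampler:
    """Proposes candidate points uniformly in [-1, 1]."""

    def suggest(self):
        return random.uniform(-1, 1)


class ThresholdPruner:
    """Prunes any trial whose objective value exceeds a fixed threshold."""

    def __init__(self, threshold):
        self.threshold = threshold

    def should_prune(self, value):
        return value > self.threshold


class OptunaStyleOptimizer:
    """The full algorithm is (sampler, pruner), not the sampler alone."""

    def __init__(self, sampler, pruner, n_trials=50):
        self.sampler = sampler
        self.pruner = pruner
        self.n_trials = n_trials

    def optimize(self, objective):
        best = None
        for _ in range(self.n_trials):
            x = self.sampler.suggest()
            value = objective(x)
            if self.pruner.should_prune(value):
                continue  # pruned trial: never considered for the best result
            if best is None or value < best[1]:
                best = (x, value)
        return best  # (best parameter, best value)


opt = OptunaStyleOptimizer(RandomSampler(), ThresholdPruner(0.9), n_trials=100)
best = opt.optimize(lambda x: x * x)
```

Swapping in a different sampler or pruner yields a different optimization algorithm with the same optimizer class, which is the distinction drawn above between the samplers and the full algorithms.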
Yes, conceptually they are distinct from samplers, so I would agree - they are the "simple" optimization algorithm based on a sampler, respectively.
hmm, okay. I only concentrated on the samplers in this PR, but if the pruners are important to "create" the entire optimization algorithm, then I should include them. This would be important to avoid possible breaking changes if we want to add them later, right? I'll dive into this topic and get back to you later.
I think it is fine as-is - I think they are not that important for now.
I also do not think we will have backwards compatibility problems if we add the more complex algorithms, so I would suggest: 1. we merge this, 2. release, 3. investigate the more complex algorithms later on.
Adds most of the optuna samplers as optimizers. Testing is done via `test_all_objects.py` and the `get_test_params` method. I also added examples with extensive comments.
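The `get_test_params` testing pattern mentioned above, as used in sktime-family packages, can be sketched as follows. The `TPEOptimizer` class and its parameters are hypothetical placeholders; the real optimizers in this PR will differ:

```python
# Illustrative sketch of the get_test_params pattern: the generic test suite
# (test_all_objects.py) calls get_test_params on each registered class,
# constructs instances from the returned parameter sets, and runs the shared
# interface tests on them. The class below is a hypothetical placeholder.

class TPEOptimizer:
    """Hypothetical optuna-backed optimizer used to illustrate the pattern."""

    def __init__(self, n_trials=10, seed=None):
        self.n_trials = n_trials
        self.seed = seed

    @classmethod
    def get_test_params(cls, parameter_set="default"):
        """Return parameter settings the generic test suite instantiates."""
        return [
            {"n_trials": 5},
            {"n_trials": 10, "seed": 42},
        ]


# The generic test suite would do roughly this for every registered class:
instances = [TPEOptimizer(**p) for p in TPEOptimizer.get_test_params()]
```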