
Conversation

@SimonBlanke
Collaborator

Adds most of the optuna samplers as optimizers. Testing is done via "test_all_objects.py" and the "get_test_params" method. I also added examples with extensive comments.

@SimonBlanke SimonBlanke added this to the v5.0 milestone Aug 16, 2025
@fkiraly fkiraly changed the title Optuna [ENH] optuna optimizer interface Aug 16, 2025
Collaborator

@fkiraly fkiraly left a comment


Brilliant!

Just some minor remarks:

  • examples style (main of py) feels a bit clunky. I think users expect jupyter. Also, do you want to order the files, e.g., is there a specific order that is good for users to look at?
  • question: why do we need torch?
  • optuna distinguishes samplers from optimizers - so, to avoid confusion, I would not use the word "sampler" in the class name. For instance, CmaEs, or BayesianOptGP. GridSampler is simply GridSearchOptuna, no?
  • should the adapter / base class go in opt._adapters?

@SimonBlanke
Collaborator Author

examples style (main of py) feels a bit clunky. I think users expect jupyter. Also, do you want to order the files, e.g., is there a specific order that is good for users to look at?

For me it is the opposite. If a project has no examples in the form of *.py files, it is an instant turn-off. My idea of those two concepts is the following:

  • examples: (for me) the first thing to use. Just run it and see what happens.
  • notebooks: more like a tutorial. With explanations and plots.

I have often taken the route that examples should be present from the start, because they are a very basic way to show how the package works. Notebooks are an additional, more "guided", tutorial-like feature.
Or in short: Let's do this later.

question: why do we need torch?

Some tests failed without it.

optuna distinguishes samplers from optimizers - so, to avoid confusion, I would not use the word "sampler" in the class name. For instance, CmaEs, or BayesianOptGP. GridSampler is simply GridSearchOptuna, no?

I'll look into this.

should the adapter / base class go in opt._adapters?

right!

@SimonBlanke
Collaborator Author

optuna distinguishes samplers from optimizers

As far as I can tell, this seems incorrect: I did not find an optimizer class in the optuna docs. The samplers are the optimizers in the optuna context.

so, to avoid confusion, I would not use the word "sampler" in the class name.

We can of course rename those classes to optimizers for our package, because we do not label them as samplers. That would be okay for me. Should we proceed this way?

@fkiraly
Collaborator

fkiraly commented Aug 16, 2025

optuna distinguishes samplers from optimizers

As far as I can tell, this seems incorrect: I did not find an optimizer class in the optuna docs. The samplers are the optimizers in the optuna context.

I think this is not correct. optuna is basically a single family of optimizers, and you can configure two parts:

  • the sampler, which suggests new points to explore
  • the pruner, which stops or prunes trials

https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/003_efficient_optimization_algorithms.html

The optimization algorithm is not just the sampler, but the entire algorithm that uses a sampler and a pruner.

So, yes, you can construct optimization algorithms from the samplers, but these would be distinct from the full optuna optimization pipeline.

What I would have naively expected is a single optimizer where you can pass a sampler and a pruner.

Of course, using the samplers also makes sense, but these give rise to different algorithms.

Because we do not label them as samplers. That would be okay for me. Should we proceed this way?

Yes, conceptually they are distinct from samplers, so I would agree - each of them is the "simple" optimization algorithm based on the respective sampler.

@SimonBlanke
Collaborator Author

hmm, okay. I only concentrated on the samplers in this PR, but if the pruners are important to "create" the entire optimization algorithm, then I should include them. This would be important to avoid possible breaking changes if we want to add them later, right? I'll dive into this topic and get back to you later.

@fkiraly
Collaborator

fkiraly commented Aug 16, 2025

I think it is fine as-is - I think they are not that important for now.

optuna is convoluted and does not have a clear separation between experiment and optimizer - much of it is baked together in Study.optimize:
https://github.com/optuna/optuna/blob/master/optuna/study/study.py

I also do not think we will have backward compatibility problems if we add the more complex algorithms, so I would suggest: 1. we merge this, 2. release, 3. investigate the more complex algorithms later on.

@SimonBlanke SimonBlanke requested a review from fkiraly August 16, 2025 16:11
@SimonBlanke SimonBlanke merged commit b050e9f into master Aug 16, 2025
40 checks passed