Releases: optuna/optuna
v3.6.1
This is the release note of v3.6.1.
Bug Fixes
- [Backport] Fix Wilcoxon pruner bug when `best_trial` has no intermediate value (#5370)
- [Backport] Address issue #5358 (#5371)
- [Backport] Fix `average_is_best` implementation in `WilcoxonPruner` (#5373)
Other
- Bump up version number to v3.6.1 (#5372)
Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
v3.6.0
This is the release note of v3.6.0.
Highlights
Optuna 3.6 introduces the following new features. See our release blog for more detailed information.
- Wilcoxon Pruner: New Pruner Based on Wilcoxon Signed-Rank Test
- Lightweight Gaussian Process (GP)-Based Sampler
- Speeding up Importance Evaluation with PED-ANOVA
- Stricter Verification Logic for FrozenTrial
- Refactoring the Optuna Dashboard
- Migration to Optuna Integration
Breaking Changes
- Implement `optuna.terminator` using `optuna._gp` (#5241)
These migration-related PRs do not break backward compatibility as long as optuna-integration v3.6.0 or later is installed in your environment.
- Move TensorBoard Integration (optuna/optuna-integration#56, thanks @dheemantha-bhat!)
- Delete TensorBoard integration for migration to `optuna-integration` (#5161, thanks @dheemantha-bhat!)
- Remove CatBoost integration for isolation (#5198)
- Remove PyTorch integration (#5213)
- Remove Dask integration (#5222)
- Migrate the `sklearn` integration (#5225)
- Remove BoTorch integration (#5230)
- Remove `SkoptSampler` (#5234)
- Remove the `cma` integration (#5236)
- Remove the `wandb` integration (#5237)
- Remove XGBoost Integration (#5239)
- Remove MLflow integration (#5246)
- Migrate LightGBM integration (#5249)
- Add CatBoost integration (optuna/optuna-integration#61)
- Add PyTorch integration (optuna/optuna-integration#62)
- Add XGBoost integration (optuna/optuna-integration#65, thanks @buruzaemon!)
- Add `sklearn` integration (optuna/optuna-integration#66)
- Move Dask integration (optuna/optuna-integration#67)
- Migrate BoTorch integration (optuna/optuna-integration#72)
- Move `SkoptSampler` (optuna/optuna-integration#74)
- Migrate `pycma` integration (optuna/optuna-integration#77)
- Migrate the Weights & Biases integration (optuna/optuna-integration#79)
- Add LightGBM integration (optuna/optuna-integration#81, thanks @DanielAvdar!)
- Migrate `MLflow` integration (optuna/optuna-integration#84)
New Features
- Backport the change of the timeline plot in Optuna Dashboard (#5168)
- Wilcoxon pruner (#5181)
- Add `GPSampler` (#5185)
- Add a super quick f-ANOVA algorithm named PED-ANOVA (#5212)
Enhancements
- Add `formats.sh` based on `optuna/master` (optuna/optuna-integration#75)
- Use vectorization for categorical distance (#5147)
- Unify implementation of fast non-dominated sort (#5160)
- Raise `TypeError` if `params` is not a `dict` in `enqueue_trial` (#5164, thanks @adjeiv!)
- Upgrade `FrozenTrial._validate()` (#5211)
- Import SQLAlchemy lazily (#5215)
- Add UCB for `optuna._gp` (#5224)
- Enhance performance of `GPSampler` (#5274)
- Fix inconsistencies between terminator and its visualization (#5276, thanks @SimonPop!)
- Enhance `GPSampler` performance other than introducing local search (#5279)
Bug Fixes
- Fix import path (optuna/optuna-integration#83)
- Fix `README.md` (optuna/optuna-integration#88)
- Fix `LightGBMTuner` test (optuna/optuna-integration#89)
- Fix `JSONDecodeError` in `JournalStorage` (#5195)
- Fix trial validation (#5229)
- Make `gp.fit_kernel_params` more robust (#5247)
- Fix checking value in `study.tell` (#5269, thanks @ryota717!)
- Fix `_split_trials` of `TPESampler` for constrained optimization with constant liar (#5298)
- Make each importance evaluator compatible with doc (#5311)
Documentation
- Remove `study optimize` from CLI tutorial page (#5152)
- Clarify the `GridSampler` with ask-and-tell interface (#5153)
- Clean up `faq.rst` (#5170)
- Make Methods section hidden from Artifact Docs (#5188)
- Enhance README (#5189)
- Add a new section explaining how to customize figures (#5194)
- Replace legacy `plotly.graph_objs` with `plotly.graph_objects` (#5223)
- Add a note section to explain that reseed affects reproducibility (#5233)
- Update links to papers (#5235)
- Add a link to the module's example in the documentation for the `optuna.terminator` module (#5243, thanks @HarshitNagpal29!)
- Replace the old example directory (#5244)
- Add Optuna Dashboard section to docs (#5250, thanks @porink0424!)
- Add a safety guard to Wilcoxon pruner, and modify the docstring (#5256)
- Replace LightGBM with PyTorch-based example to remove `lightgbm` dependency in visualization tutorial (#5257)
- Remove unnecessary comment in `Specify Hyperparameters Manually` tutorial page (#5258)
- Add a tutorial of Wilcoxon pruner (#5266)
- Clarify that pruners module does not support multi-objective optimization (#5270)
- Minor fixes (#5275)
- Add a guide to PED-ANOVA for `n_trials>10000` (#5310)
- Minor fixes of docs and code comments for `PedAnovaImportanceEvaluator` (#5312)
- Fix doc for `WilcoxonPruner` (#5313)
- Fix doc example in `WilcoxonPruner` (#5315)
Examples
- Remove Python 3.7 and 3.8 from tensorboard CI (optuna/optuna-examples#231)
- Specify black version in the CI (optuna/optuna-examples#232)
- Apply Black 2024 to codebase (optuna/optuna-examples#236)
- Remove MXNet examples (optuna/optuna-examples#237)
- Add an example of Wilcoxon pruner (optuna/optuna-examples#238)
- Make Keras examples Keras 3 friendly (optuna/optuna-examples#239)
- Remove a comment for keras that is not used anymore in this file (optuna/optuna-examples#240)
- Use Keras 3 friendly syntax in MLflow example (optuna/optuna-examples#242)
- Remove `-pre` option in the `rl` integration (optuna/optuna-examples#243)
- Hotfix CI by adding version constraints to `dask` and `tensorflow` (optuna/optuna-examples#245)
Tests
- Unify the implementation of `_create_frozen_trial()` under `testing` module (#5157)
- Remove the Python version constraint for PyTorch (#5278)
Code Fixes
- Fix unused (and unintended) import (optuna/optuna-integration#68)
- Add Dask to `__init__.py` and fix its documentation generation (optuna/optuna-integration#71)
- Replace `optuna.integration` with `optuna_integration` in the doc and the issue template (optuna/optuna-integration#73)
- Fix the doc for TensorFlow (optuna/optuna-integration#76)
- Add skopt dependency (optuna/optuna-integration#78)
- Fastai readme fix (optuna/optuna-integration#82, thanks @DanielAvdar!)
- Fix `__init__.py` (optuna/optuna-integration#86)
- Apply Black 2024 to codebase (optuna/optuna-integration#87)
- Change the order of dependencies by name (optuna/optuna-integration#92)
- Remove the deprecated decorator of `KerasPruningCallback` (optuna/optuna-integration#93)
- Remove `UserWarning` by `tests/test_keras.py` (optuna/optuna-integration#94)
- Refactor `TPESampler` for more clarity before c-TPE integration (#5117)
- Fix `Checks (Integration)` failure (#5167)
- Fix type annotation of logging (#5176)
- Update NamedTuple in `_ParzenEstimatorParameters` to more modern style (#5193)
- Apply Black 2024 to codebase (#5252)
- Simplify annotations in `optuna/study/_optimize.py` (#5261, thanks @shahpratham!)
- Unify and refactor `plot_timeline` test (#5281)
Continuous Integration
- Remove non oldest and latest Python versions from tests (optuna/optuna-integration#44)
- Fix flake8 failure in CI (optuna/optuna-integration#55)
- Delete workflow dispatch input (optuna/optuna-integration#57)
- Fix default branch (optuna/optuna-integration#58)
- Fix coverage source path (optuna/optuna-integration#60)
- Do not use `black 24.*` (optuna/optuna-integration#64)
- Simplify integration test (optuna/optuna-integration#95)
- Hotfix the version of `botorch<0.10` for CI failures (optuna/optuna-integration#96)
- Hotfix the CI error by adding version constraint to dask (optuna/optuna-integration#99)
- Fix tests with MPI (#5166)
- Fix Checks (Integration) CI for NumPy 1.23.5 (#5177)
- Add version constraint for black (#5210)
- Skip the reproducibility tests for lightgbm (#5214)
- Fix the errors in mypy for the `Checks (Integration)` CI (#5217)
- Add a version constraint for Torch (#5221)
- Hotfix mypy error in integration (#5232)
- Skip `test_reproducible_in_other_process` for `GPSampler` with Python 3.12 (#5251)
- Add CI settings to test Matplotlib without Plotly (#5263, thanks @DanielAvdar!)
- Unify indent size to two in the toml file (#5271)
- Follow up for split integrations (#5277)
- Add a version constraint to `fakeredis` (#5307)
Other
- Bump up version number to 3.6.0.dev (optuna/optuna-integration#53)
- Bump up version number to 3.6.0 (optuna/optuna-integration#100)
- Bump the version up to v3.6.0.dev (#5143)
- Ignore auto generated files by Sphinx (#5192)
- Delete `labeler.yml` to disable the `triage` action (#5240)
- Bump up version number to 3.6.0 (#5318)
Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
v3.5.0
This is the release note of v3.5.0.
Highlights
This is a maintenance release with various bug fixes and improvements to the documentation and more.
Breaking Changes
- Isolate the fast.ai module from optuna (optuna/optuna-integration#49, thanks @sousu4!)
- Change `n_objectives` condition to be greater than 4 in candidates functions (#5121, thanks @adjeiv!)
New Features
- Support constraints in plot contour (#4975, thanks @y-kamiya!)
- Support infeasible coloring for plot_timeline (#5014)
- Support `constant_liar` in multi-objective `TPESampler` (#5021)
- Add `optuna study-names` CLI (#5029)
- Use `ExpectedHypervolumeImprovement` candidates function for `BotorchSampler` (#5065, thanks @adjeiv!)
- Fix `logei_candidates_func` in `botorch.py` (#5094, thanks @sousu4!)
- Report CV scores from within `OptunaSearchCV` (#5098, thanks @adjeiv!)
Enhancements
- Support `constant_liar` in multi-objective `TPESampler` (#5021)
- Make positional args to kwargs in `suggest_int` (#5044)
- Ensure `n_below` is never negative in `TPESampler` (#5074, thanks @p1kit!)
- Improve visibility of infeasible trials in `plot_contour` (#5107)
Bug Fixes
- Fix random number generator of `NSGAIIChildGenerationStrategy` (#5003)
- Return `trials` for above in MO split when `n_below=0` (#5079)
- Enable loading of read-only files (#5103, thanks @Guillaume227!)
- Fix `logpdf` for scaled `truncnorm` (#5110)
- Fix the bug of matplotlib's `plot_rank` function (#5133)
Documentation
- Add the table of dependencies in each integration module (#5005)
- Enhance the documentation of `LightGBM` tuner and separate `train()` from `__init__.py` (#5010)
- Update link to reference (#5064)
- Update the FAQ on reproducible optimization results to remove note on `HyperbandPruner` (#5075, thanks @felix-cw!)
- Remove `MOTPESampler` from `index.rst` file (#5084, thanks @Ashhar-24!)
- Add a note about the deprecation of `MOTPESampler` to the doc (#5086)
- Add the TPE tutorial paper to the docstring (#5096)
- Update `README.md` to fix the installation and integration (#5126)
- Clarify that `Recommended budgets` include `n_startup_trials` (#5137)
Examples
- Update version syntax for PyTorch and PyTorch Lightning examples (optuna/optuna-examples#205, thanks @JustinGoheen!)
- Update import path (optuna/optuna-examples#213)
- Bump up python versions (optuna/optuna-examples#214)
- Add the simplest example directly to README (optuna/optuna-examples#215)
- Add simple examples for multi-objective and constrained optimizations (optuna/optuna-examples#216)
- Revise the comment to describe the problem (optuna/optuna-examples#217)
- Modify simple examples based on the Optuna code conventions (optuna/optuna-examples#218)
- Remove version specification of `jax` and `jaxlib` (optuna/optuna-examples#223)
- Import examples from `optuna/optuna-dashboard` (optuna/optuna-examples#224)
- Add `OptunaSearchCV` with terminator (optuna/optuna-examples#225)
- Drop Python 3.8 from haiku test (optuna/optuna-examples#227)
- Run MXNet in Python 3.11 (optuna/optuna-examples#228)
Tests
- Remove tests for allennlp and chainer (optuna/optuna-integration#47)
- Reduce the warning in `tests/study_tests/test_study.py` (#5070, thanks @sousu4!)
Code Fixes
- Implement NSGA-III elite population selection strategy (#5027)
- Fix import path of `PyTorchLightning` (#5028)
- Fix `Any` with `float` in `_TreeNode.children` (#5040, thanks @aanghelidi!)
- Fix future annotation in `typing.py` (#5054, thanks @jot-s-bindra!)
- Add future annotations to callback and terminator files inside terminator folder (#5055, thanks @jot-s-bindra!)
- Fix future annotations in edf Python file (#5056, thanks @Vaibhav101203!)
- Fix future annotations in `_hypervolume_history.py` (#5057, thanks @Vaibhav101203!)
- Reduce the warning in `tests/storages_tests/test_heartbeat.py` (#5066, thanks @sousu4!)
- Fix future annotation in `frozen.py` (#5080, thanks @Vaibhav101203!)
- Fix annotation for `dataframe.py` (#5081, thanks @Vaibhav101203!)
- Fix future annotation (#5083, thanks @Vaibhav101203!)
- Fix type annotation (#5105)
- Fix mypy error in CI (#5106)
- Isolate the fast.ai module (#5120, thanks @sousu4!)
- Clean up workflow file (#5122)
Continuous Integration
- Run `test_tensorflow` in Python 3.11 (optuna/optuna-integration#46)
- Exclude mypy checks for chainer (optuna/optuna-integration#48)
- Support Python 3.12 on tests for core modules (#5018)
- Fix the issue where formats.sh does not handle tutorial/ (#5023, thanks @sousu4!)
- Skip slow integration tests (#5033)
- Install PyTorch for CPU on CIs (#5042)
- Remove unused `type: ignore` (#5047)
- Reduce `tests-mpi` to the oldest and latest Python versions (#5067)
- Add workflow matrices for the tests to reduce GitHub check runtime (#5093)
- Remove the skip of Python 3.11 in `tests-mpi` (#5100)
- Downgrade kaleido to 0.1.0post1 for fixing Windows CI (#5101)
- Rename `should-skip` to `test-trigger-type` for more clarity (#5134)
- Pin the version of PyQt6-Qt6 (#5135)
- Revert `Pin the version of PyQt6-Qt6` (#5140)
Other
- Bump up version to v3.5.0.dev (optuna/optuna-integration#43)
- Bump up version number to 3.5.0 (optuna/optuna-integration#52)
- Bump the version up to v3.5.0.dev (#5032)
- Remove email of authors (#5078)
- Update the integration sections in `README.md` (#5108)
- Pin mypy version to 1.6.* (#5123)
- Remove `!examples` from `.dockerignore` (#5129)
Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
@Alnusjaponica, @Ashhar-24, @Guillaume227, @HideakiImamura, @JustinGoheen, @Vaibhav101203, @aanghelidi, @adjeiv, @c-bata, @contramundum53, @eukaryo, @felix-cw, @gen740, @jot-s-bindra, @keisuke-umezawa, @knshnb, @nabenabe0928, @not522, @nzw0301, @p1kit, @sousu4, @toshihikoyanase, @y-kamiya
v3.4.0
This is the release note of v3.4.0.
Highlights
Optuna 3.4 introduces the following new features. See our release blog for more detailed information.
- Preferential Optimization (Optuna Dashboard)
- Optuna Artifact
- Jupyter Lab Extension
- VS Code Extension
- User-defined Distance for Categorical Parameters in TPE
- Constrained Optimization Support for Visualization Functions
- User-Defined Plotly’s Figure Support (Optuna Dashboard)
- 3D Model Viewer Support (Optuna Dashboard)
Breaking Changes
New Features
- Support constraints for intermediate values plot (#4851, thanks @adjeiv!)
- Display all objectives on hyperparameter importances plot (#4871)
- Implement `get_all_study_names()` (#4898)
- Support constraints in `plot_rank` (#4899, thanks @ryota717!)
- Support Study Artifacts (#4905)
- Support specifying distance between categorical choices in `TPESampler` (#4926)
- Add `metric_names` getter to study (#4930)
- Add artifact middleware for exponential backoff retries (#4956)
- Add `GCSArtifactStore` (#4967, thanks @semiexp!)
- Add `BestValueStagnationEvaluator` (#4974, thanks @smygw72!)
- Allow user-defined objective names in hyperparameter importance plots (#4986)
Enhancements
- CHG constrained param displayed in #cccccc (#4877, thanks @louis-she!)
- Faster implementation of fANOVA (#4897)
- Support constraint in plot slice (#4906, thanks @hrntsm!)
- Add mimetype input (#4910, thanks @hrntsm!)
- Show all ticks in `_parallel_coordinate.py` when log scale (#4911)
- Speed up multi-objective TPE (#5017)
Bug Fixes
- Fix numpy indexing bugs and named tuple comparing (#4874, thanks @ryota717!)
- Fix `fail_stale_trials` with race condition (#4886)
- Fix alias handler (#4887)
- Add lazy random state and use it in `RandomSampler` (#4970, thanks @shu65!)
- Fix TensorBoard error on categorical choices of mixed types (#4973, thanks @ciffelia!)
- Use lazy random state in samplers (#4976, thanks @shu65!)
- Fix an error that does not consider `min_child_samples` (#5007)
- Fix `BruteForceSampler` in parallel optimization (#5022)
Documentation
- Fix typo in `_filesystem.py` (#4909)
- Mention that a pruner instance is not stored in a storage in the resuming tutorial (#4927)
- Add introduction of `optuna-fast-fanova` to documents (#4943)
- Add artifact tutorial (#4954)
- Fix an example code in `Boto3ArtifactStore`'s docstring (#4957)
- Add tutorial for `JournalStorage` (#4980, thanks @semiexp!)
- Fix document regarding `ArtifactNotFound` (#4982, thanks @smygw72!)
- Add the workaround for duplicated samples to FAQ (#5006)
Examples
- Add huggingface's link to external projects (optuna/optuna-examples#201)
- Fix samplers CI (optuna/optuna-examples#202)
- Set version constraint on aim (optuna/optuna-examples#206)
- Add an example of Optuna Terminator for LightGBM (optuna/optuna-examples#210, thanks @hamster-86!)
Tests
- Reduce `n_trials` in `test_combination_of_different_distributions_objective` (#4950)
- Replace California housing dataset with iris dataset (#4953)
- Fix numpy duplication warning (#4978, thanks @torotoki!)
- Make test order deterministic for `pytest-xdist` (#4999)
Code Fixes
- Move shap (optuna/optuna-integration#32)
- Remove shap (#4791)
- Use `isinstance` instead of `if type() is ...` (#4896)
- Make `cmaes` dependency optional (#4901)
- Call internal sampler's `before_trial` (#4914)
- Refactor `_grid.py` (#4918)
- Fix the `checks-integration` errors on LightGBMTuner (#4923)
- Replace deprecated `botorch` method to remove warning (#4940)
- Fix type annotation (#4941)
- Add `_split_trials` instead of `_get_observation_pairs` and `_split_observation_pairs` (#4947)
- Use `__future__.annotations` in `optuna/visualization/_optimization_history.py` (#4964, thanks @YuigaWada!)
- Fix #4508 for `optuna/visualization/_hypervolume_history.py` (#4965, thanks @RuTiO2le!)
- Use future annotation in `optuna/_convert_positional_args.py` (#4966, thanks @hamster-86!)
- Fix type annotation of `SQLAlchemy` (#4968)
- Use `collections.abc` in `optuna/visualization/_edf.py` (#4969, thanks @g-tamaki!)
- Use `collections.abc` in plot pareto front (#4971)
- Remove `experimental_func` from `metric_names` property (#4983, thanks @semiexp!)
- Add `__future__.annotations` to `progress_bar.py` (#4992)
- Fix annotations in `optuna/visualization/matplotlib/_optimization_history.py` (#5015, thanks @sousu4!)
Continuous Integration
- Fix checks integration (#4869)
- Remove fakeredis version constraint (#4873)
- Support `asv` 0.6.0 (#4882)
- Fix speed-benchmarks CI (#4903)
- Fix Tests (MPI) CI (#4904)
- Fix xgboost pruning callback (#4921)
- Enhance speed benchmark (#4981, thanks @g-tamaki!)
- Drop Python 3.7 on `tests-mpi` (#4998)
- Remove Python 3.7 from the development docker image build (#5009)
- Use CPU version of PyTorch in Docker image (#5019)
Other
- Bump up version number to v3.4.0.dev (optuna/optuna-integration#37)
- Update python shield in `README.md` (optuna/optuna-integration#39)
- Replace deprecated mypy option (optuna/optuna-integration#40)
- Bump up version to v3.4.0 (optuna/optuna-integration#42)
- Bump the version up to v3.4.0.dev (#4861)
- Use OIDC (#4867)
- Add `FUNDING.yml` (#4912)
- Update `optional-dependencies` and document deselecting integration tests in `CONTRIBUTING.md` (#4962)
- Bump the version up to v3.4.0 (#5031)
Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
@Alnusjaponica, @HideakiImamura, @RuTiO2le, @YuigaWada, @adjeiv, @c-bata, @ciffelia, @contramundum53, @cross32768, @eukaryo, @g-tamaki, @g-votte, @gen740, @hamster-86, @hrntsm, @hvy, @keisuke-umezawa, @knshnb, @lucasmrdt, @louis-she, @moririn2528, @nabenabe0928, @not522, @nzw0301, @ryota717, @semiexp, @shu65, @smygw72, @sousu4, @torotoki, @toshihikoyanase, @xadrianzetx
v3.3.0
This is the release note of v3.3.0.
Highlights
CMA-ES with Learning Rate Adaptation
A new variant of CMA-ES has been added. You can use it by setting the `lr_adapt` argument to `True` in `CmaEsSampler`. For multimodal and/or noisy problems, adapting the learning rate can help avoid getting trapped in local optima. For more details, please refer to #4817. We want to thank @nomuramasahir0, one of the authors of LRA-CMA-ES, for his great work and the development of the cmaes library.
Hypervolume History Plot for Multiobjective Optimization
In multiobjective optimization, the history of hypervolume is commonly used as an indicator of performance. Optuna now supports this feature in the visualization module. Thanks to @y0z for your great work!
Constrained Optimization Support for Visualization Functions
Some samplers support constrained optimization; however, many other features cannot handle it. We are continuously enhancing support for constraints. In this release, `plot_optimization_history` starts to consider constraint violations, in both the Plotly and matplotlib backends. Thanks to @hrntsm for your great work!
```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -15, 30)
    y = trial.suggest_float("y", -15, 30)
    v0 = 4 * x**2 + 4 * y**2
    trial.set_user_attr("constraint", [1000 - v0])
    return v0


def constraints_func(trial):
    return trial.user_attrs["constraint"]


sampler = optuna.samplers.TPESampler(constraints_func=constraints_func)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
fig = optuna.visualization.plot_optimization_history(study)
fig.show()
```
Streamlit Integration for Human-in-the-loop Optimization
Optuna Dashboard v0.11.0 provides tight integration with the Streamlit framework. By using this feature, you can create your own application for human-in-the-loop optimization. Please check out the documentation and the example for details.
Breaking Changes
- Move mxnet (optuna/optuna-integration#31)
- Remove mxnet (#4790)
- Remove `ordered_dict` argument from `IntersectionSearchSpace` (#4846)
New Features
- Add `logei_candidate_func` and make it default when available (#4667)
- Support `JournalFileStorage` and `JournalRedisStorage` on CLI (#4696)
- Implement hypervolume history plot for matplotlib backend (#4748, thanks @y0z!)
- Add `cv_results_` to `OptunaSearchCV` (#4751, thanks @jckkvs!)
- Add `optuna.integration.botorch.qnei_candidates_func` (#4753, thanks @kstoneriv3!)
- Add hypervolume history plot for `plotly` backend (#4757, thanks @y0z!)
- Add `FileSystemArtifactStore` (#4763)
- Sort params on fetch (#4775)
- Add constraints support to `_optimization_history_plot` (#4793, thanks @hrntsm!)
- Bump up `LightGBM` version to v4.0.0 (#4810)
- Add constraints support to `matplotlib._optimization_history_plot` (#4816, thanks @hrntsm!)
- Introduce CMA-ES with Learning Rate Adaptation (#4817)
- Add `upload_artifact` API (#4823)
- Add `before_trial` (#4825)
- Add `Boto3ArtifactStore` (#4840)
- Display best objective value in contour plot for a given param pair, not the value from the most recent trial (#4848)
Enhancements
- Speed up `logpdf` in `_truncnorm.py` (#4712)
- Speed up `erf` (#4713)
- Speed up `get_all_trials` in `InMemoryStorage` (#4716)
- Add a warning for a progress bar not being displayed #4679 (#4728, thanks @rishabsinghh!)
- Make `BruteForceSampler` consider failed trials (#4747)
- Use shallow copy in `_get_latest_trial` (#4774)
- Speed up `plot_hypervolume_history` (#4776)
Bug Fixes
- Solve issue #4557 - error_score (#4642, thanks @jckkvs!)
- Fix `BruteForceSampler` for pruned trials (#4720)
- Fix `plot_slice` bug when some of the choices are numeric (#4724)
- Make `LightGBMTuner` reproducible (#4795)
Installation
- Bump up python version (optuna/optuna-integration#34)
Documentation
- Remove `jquery-extension` (#4691)
- Add FAQ on combinatorial search space (#4723)
- Fix docs (#4732)
- Add `plot_rank` and `plot_timeline` plots to visualization tutorial (#4735)
- Fix typos found in `integration/sklearn.py` (#4745)
- Remove `study.n_objectives` from document (#4796)
- Add lower version constraint for `sphinx_rtd_theme` (#4853)
- Artifact docs (#4855)
Examples
- Run DaskML example with Python 3.11 (optuna/optuna-examples#188)
- Show more information in terminator examples (optuna/optuna-examples#192)
- Drop support for Python 3.7 on Haiku (optuna/optuna-examples#198)
- Add `LICENSE` file (optuna/optuna-examples#200)
Tests
- Remove unnecessary `pytestmark` (optuna/optuna-integration#29)
- Add `GridSampler` test for failed trials (#4721)
- Follow up PR #4642 by adding a unit test to confirm `OptunaSearchCV` behavior (#4758)
- Fix `test_log_gass_mass` with SciPy 1.11.0 (#4766)
- Fix PyTorch Lightning unit test (#4780)
- Remove skopt (#4792)
- Rename test directory (#4839)
Code Fixes
- Simplify the type annotations in `benchmarks` (#4703, thanks @caprest!)
- Unify sampling implementation in `TPESampler` (#4717)
- Get values after `_get_observation_pairs` (#4742)
- Remove unnecessary period (#4746)
- Handle deprecated argument `early_stopping_rounds` (#4752)
- Separate dominate function from `_fast_non_dominated_sort()` (#4759)
- Separate `after_trial` strategy (#4760)
- Remove unused attributes in `TPESampler` (#4769)
- Remove `pkg_resources` (#4770)
- Use trials as argument of `_calculate_weights_below_for_multi_objective` (#4773)
- Fix type annotation (#4797, thanks @taniokay!)
- Follow up separation of after-trial strategy (#4803)
- Loosen coupling of NSGA-II child generation (#4806)
- Remove `_study_id` parameter from `Trial` class (#4811, thanks @adjeiv!)
- Loosen coupling of NSGA-II elite population selection (#4821)
- Fix checks integration (#4826)
- Remove `OrderedDict` (#4838, thanks @taniokay!)
- Fix typo (#4842, thanks @wouterzwerink!)
- Follow up child generation strategy (#4856)
- Remove `samplers._search_space.IntersectionSearchSpace` (#4857)
- Add experimental decorators to artifacts functionalities (#4858)
Continuous Integration
- Output dependency tree (optuna/optuna-integration#9)
- Use OIDC (optuna/optuna-integration#33)
- Drop Python 3.7 support (optuna/optuna-integration#35)
- Enhance speed benchmark for storages (#4778)
- Drop Python 3.7 on `tests-integration` (#4784)
- Remove unused `type: ignore`s (#4787)
- Restrict numpy version < 1.24 (#4788)
- Upgrade redis version (#4805)
- Add version constraints on LightGBM (#4807)
- Follow-up #4807 : Fix windows-tests and mac-tests (#4809)
- Support 3.11 integration (#4820)
- Support flake8 6.1.0 (#4847)
Other
- Bump up version number to 3.3.0dev (optuna/optuna-integration#27)
- Bump up version number to 3.3.0 (optuna/optuna-integration#36)
- Bump up version number to 3.3.0dev (#4710)
- Bump the version up to v3.3.0 (#4860)
Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
@Alnusjaponica, @HideakiImamura, @adjeiv, @c-bata, @caprest, @contramundum53, @cross32768, @eukaryo, @gen740, @hrntsm, @jckkvs, @knshnb, @kstoneriv3, @nomuramasahir0, @not522, @nzw0301, @rishabsinghh, @taniokay, @toshihikoyanase, @wouterzwerink, @xadrianzetx, @y0z
v3.2.0
This is the release note of v3.2.0.
Highlights
Human-in-the-loop optimization
With the latest release, we have incorporated support for human-in-the-loop optimization. It enables an interactive optimization process between users and the optimization algorithm. As a result, it opens up new opportunities for the application of Optuna in tuning Generative AI. For further details, please check out our human-in-the-loop optimization tutorial.
Overview of human-in-the-loop optimization. Generated images and sounds are displayed on Optuna Dashboard, and users can directly evaluate them there.
Automatic optimization terminator (Optuna Terminator)
Optuna Terminator is a new feature that quantitatively estimates the room for optimization and automatically stops the optimization process. It is designed to alleviate the burden of figuring out an appropriate number of trials (`n_trials`), or of unnecessarily consuming computational resources by indefinitely running the optimization loop. See #4398 and optuna-examples#190.
Transition of estimated room for improvement. It steadily decreases towards the level of cross-validation errors.
New sampling algorithms
NSGA-III for many-objective optimization
We've introduced the NSGAIIISampler as a new multi-objective optimization sampler. It implements NSGA-III, which is an extended variant of NSGA-II, designed to efficiently optimize even when the dimensionality of the objective values is large (especially when it's four or more). NSGA-II had an issue where the search would become biased towards specific regions when the dimensionality of the objective values exceeded four. In NSGA-III, the algorithm is designed to distribute the points more uniformly. This feature was introduced by #4436.
Objective value space for multi-objective optimization (minimization problem). Red points represent Pareto solutions found by NSGA-II. Blue points represent those found by NSGA-III. NSGA-II shows a tendency for points to concentrate towards each axis (corresponding to the ends of the Pareto Front). On the other hand, NSGA-III displays a wider distribution across the Pareto Front.
BI-population CMA-ES
Continuing from v3.1, significant improvements have been made to the CMA-ES Sampler. As a new feature, we've added the BI-population CMA-ES algorithm, a kind of restart strategy that mitigates the problem of falling into local optima. Whether the IPOP CMA-ES, which we've been providing so far, or the new BI-population CMA-ES is better depends on the problems. If you're struggling with local optima, please try BI-population CMA-ES as well. For more details, please see #4464.
New visualization functions
Timeline plot for trial life cycle
The timeline plot visualizes the progress (status, start and end times) of each trial. In this plot, the horizontal axis represents time, and trials are plotted in the vertical direction. Each trial is represented as a horizontal bar, drawn from the start to the end of the trial. With this plot, you can quickly get an understanding of the overall progress of the optimization experiment, such as whether parallel optimization is progressing properly or if there are any trials taking an unusually long time.
Similar to other plot functions, all you need to do is pass the study object to `plot_timeline`. For more details, please refer to #4470 and #4538.
Rank plot to understand input-output relationship
A new visualization feature, `plot_rank`, has been introduced. This plot provides valuable insights into the landscapes of objective functions, i.e., the relationship between parameters and objective values. In this plot, the vertical and horizontal axes represent parameter values, and each point represents a single trial. The points are colored according to their ranks.
Similar to other plot functions, all you need to do is pass the study object to `plot_rank`. For more details, please refer to #4427 and #4541.
Isolating integration modules
We have separated Optuna's integration module into a different package called optuna-integration. Maintaining many integrations within the Optuna package was becoming costly. By separating the integration module, we aim to improve the development speed of both Optuna itself and its integration modules. As of the release of v3.2, we have migrated six integration modules: allennlp, catalyst, chainer, keras, skorch, and tensorflow (except for the TensorBoard integration). To use an integration module, `pip install optuna-integration` will be necessary. See #4484.
- Move `chainermn` integration (optuna/optuna-integration#1)
- Move `integration/keras.py` (optuna/optuna-integration#5)
- Move `integration/allennlp` (optuna/optuna-integration#8)
- Move Catalyst (optuna/optuna-integration#19)
- Move `tf.keras` integration (optuna/optuna-integration#21)
- Move `skorch` (optuna/optuna-integration#22)
- Move `tensorflow` integration (optuna/optuna-integration#23)
- Partially follow `sklearn.model_selection.GridSearchCV`'s arguments (#4336)
- Delete `optuna.integration.ChainerPruningExtension` for migrating to optuna-integration package (#4370)
- Delete `optuna.integration.ChainerMNStudy` for migrating to optuna-integration package (#4497)
- Delete `optuna.integration.KerasPruningCallback` for migration to optuna-integration (#4558)
- Delete `AllenNLP` integration for migration to optuna-integration (#4579)
- Delete Catalyst integration for migration to optuna-integration (#4644)
- Remove `tf.keras` integration (#4662)
- Delete `skorch` integration for migration to optuna-integration (#4663)
- Remove `tensorflow` integration (#4666)
Starting support for Mac & Windows
We have started supporting Optuna on Mac and Windows. While many features already worked in previous versions, we have fixed issues that arose in certain modules, such as Storage. See #4457 and #4458.
Breaking Changes
- Update deletion timing of `system_attrs` and `set_system_attr` (optuna/optuna-integration#4)
- Change deletion timing of `system_attrs` and `set_system_attr` (#4550)
New Features
- Show custom objective names for multi-objective optimization (#4383)
- Support DDP in `PyTorch-Lightning` (#4384)
- Implement the evaluator of regret bounds and its GP backend for Optuna Terminator 🤖 (#4401)
- Implement the termination logic and APIs of Optuna Terminator 🤖 (#4405)
- Add rank plot (#4427)
- Implement NSGA-III (#4436)
- Add BIPOP-CMA-ES support in `CmaEsSampler` (#4464)
- Add timeline plot with plotly as backend (#4470)
- Move `optuna.samplers._search_space.intersection.py` to `optuna.search_space.intersection.py` (#4505)
- Add timeline plot with matplotlib as backend (#4538)
- Add rank plot matplotlib version (#4541)
- Support batched sampling with BoTorch (#4591, thanks @kstoneriv3!)
- Add `plot_terminator_improvement` as visualization of `optuna.terminator` (#4609)
- Add import for public API of `optuna.terminator` to `optuna/terminator/__init__.py` (#4669)
- Add matplotlib version of `plot_terminator_improvement` (#4701)
Enhancements
- Import `cmaes` package lazily (#4394)
- Make `BruteForceSampler` stateless (#4408)
- Sort studies by study_id (#4414)
- Add index study_id column on trials table (#4449, thanks @Ilevk!)
- Cache all trials in Study with delayed relative sampling (#4468)
- Avoid error at import time for `optuna.terminator.improvement.gp.botorch` (#4483)
- Avoid standardizing `Yvar` in `_BoTorchGaussianProcess` (#4488)
- Change the noise value in `_BoTorchGaussianProcess` to suppress warning messages (#4510)
- Change the argument of `intersection_search_space` from `study` to `trials` (#4514)
- Improve deprecated messages in the old suggest functions (#4562)
- Add support for `distributed>=2023.3.2` (#4589, thanks @jrbourbeau!)
- Fix `plot_rank` marker lines (#4602)
- Sync owned trials when calling `study.ask` and `study.get_trials` (#4631)
- Ensure that the plotly version of timeline plot draws a legend even if all TrialStates are the same (#4635)
Bug Fixes
- Fix `botorch` dependency (#4368)
- Mitigate a blocking issue while running migrations with SQLAlchemy 2.0 (#4386)
- Fix `colorlog` compatibility problem (#4406)
- Validate length of values in `add_trial` (#4416)
- Fix `RDBStorage.get_best_trial` when there are `inf`s (#4422)
- Fix bug of CMA-ES with margin on `RDBStorage` or `JournalStorage` (#4434)
- Fix CMA-ES Sampler (#4443)
- Fix `param_mask` for multivariate TPE with `constant_liar` (#4462)
- Make `QMCSample...
v3.1.1
This is the release note of v3.1.1.
Enhancements
- [Backport] Import `cmaes` package lazily (#4573)
Bug Fixes
- [Backport] Fix botorch dependency (#4569)
- [Backport] Fix param_mask for multivariate TPE with constant_liar (#4570)
- [Backport] Mitigate a blocking issue while running migrations with SQLAlchemy 2.0 (#4571)
- [Backport] Fix bug of CMA-ES with margin on `RDBStorage` or `JournalStorage` (#4572)
- [Backport] Fix RDBStorage.get_best_trial when there are `inf`s (#4574)
- [Backport] Fix CMA-ES Sampler (#4581)
Code Fixes
- [Backport] Add `types-tqdm` for lint (#4566)
Other
- Update version number to v3.1.1 (#4567)
Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
v3.0.6
v3.1.0
This is the release note of v3.1.0.
You don't have to read this note from top to bottom to get a summary of Optuna v3.1; the recommended way is to read the release blog.
Highlights
New Features
CMA-ES with Margin
[Animation: CMA-ES and CMA-ES with Margin. The animation is from https://github.com/EvoConJP/CMA-ES_with_Margin, which is distributed under the MIT license.]
CMA-ES achieves strong performance for continuous optimization, but there is still room for improvement in mixed-integer search spaces. To address this, we have added support for the "CMA-ES with Margin" algorithm to our `CmaEsSampler`, which makes it more efficient in these cases. You can see the benchmark results here. For more detailed information about CMA-ES with Margin, please refer to the paper "CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization" (arXiv), which has been accepted for presentation at GECCO 2022.
```python
import optuna
from optuna.samplers import CmaEsSampler


def objective(trial):
    x = trial.suggest_float("x", -10, 10, step=0.1)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y


study = optuna.create_study(sampler=CmaEsSampler(with_margin=True))
study.optimize(objective, n_trials=20)
```
Distributed Optimization via NFS
`JournalFileStorage`, a file storage backend based on `JournalStorage`, supports NFS (Network File System) environments. It is the easiest option for users who wish to execute distributed optimization in environments where it is difficult to set up database servers such as MySQL, PostgreSQL or Redis (e.g. #815, #1330, #1457 and #2216).
```python
import optuna
from optuna.storages import JournalStorage, JournalFileStorage


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_float("y", -100, 100)
    return x**2 + y


storage = JournalStorage(JournalFileStorage("./journal.log"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
```
For more information on `JournalFileStorage`, see the blog post "Distributed Optimization via NFS Using Optuna's New Operation-Based Logging Storage" written by @wattlebirdaz.
A Brand-New Redis Storage
We have replaced the Redis storage backend with a `JournalStorage`-based one. The experimental `RedisStorage` class has been removed in v3.1. The following example shows how to use the new `JournalRedisStorage` class.
```python
import optuna
from optuna.storages import JournalStorage, JournalRedisStorage


def objective(trial):
    ...


storage = JournalStorage(JournalRedisStorage("redis://localhost:6379"))
study = optuna.create_study(storage=storage)
study.optimize(objective)
```
Dask.distributed Integration
`DaskStorage`, a new storage backend based on Dask.distributed, is supported. It allows you to leverage distributed capabilities with APIs similar to `concurrent.futures`. `DaskStorage` can be used with `InMemoryStorage`, so you don't need to set up a database server. Here's a code example showing how to use `DaskStorage`:
```python
import optuna
from optuna.storages import InMemoryStorage
from optuna.integration import DaskStorage
from distributed import Client, wait


def objective(trial):
    ...


with Client("192.168.1.8:8686") as client:
    study = optuna.create_study(storage=DaskStorage(InMemoryStorage()))
    futures = [
        client.submit(study.optimize, objective, n_trials=10, pure=False)
        for i in range(10)
    ]
    wait(futures)
    print(f"Best params: {study.best_params}")
```
Setting up a Dask cluster is easy: install `dask` and `distributed`, then run the `dask scheduler` and `dask worker` commands, as detailed in the Quick Start Guide in the Dask.distributed documentation.
```console
$ pip install optuna dask distributed

$ dask scheduler
INFO - Scheduler at: tcp://192.168.1.8:8686
INFO - Dashboard at:                  :8687
...

$ dask worker tcp://192.168.1.8:8686
$ dask worker tcp://192.168.1.8:8686
$ dask worker tcp://192.168.1.8:8686
```
See the documentation for more information.
Brute-force Sampler
`BruteForceSampler`, a new sampler for brute-force search, tries all combinations of parameters. In contrast to `GridSampler`, it does not require passing the search space as an argument and works even with branches. This sampler constructs the search space in the define-by-run style, so it works by just adding `sampler=optuna.samplers.BruteForceSampler()`.
```python
import optuna


def objective(trial):
    c = trial.suggest_categorical("c", ["float", "int"])
    if c == "float":
        return trial.suggest_float("x", 1, 3, step=0.5)
    elif c == "int":
        a = trial.suggest_int("a", 1, 3)
        b = trial.suggest_int("b", a, 3)
        return a + b


study = optuna.create_study(sampler=optuna.samplers.BruteForceSampler())
study.optimize(objective)
```
Other Improvements
Bug Fix for TPE's `constant_liar` Option
The `constant_liar` option of `TPESampler` is intended for distributed or batch optimization. It was introduced in v2.8.0, but suffered from performance degradation in specific situations. In this release, we identified the cause of the problem and resolved it, with thorough performance verification. See #4073 for more details.
Make SciPy Dependency Optional
About 50% of the time taken by `import optuna` was spent in SciPy-related modules. SciPy also consumes 110MB of storage space, which is problematic in environments with limited resources such as serverless computing.
We decided to implement the scientific functions ourselves to make the SciPy dependency optional. Thanks to contributors' efforts on performance optimization, our implementation is as fast as the SciPy-based code even though it is written in pure Python. See #4105 for more information.
Note that `QMCSampler` still depends on SciPy. If you use `QMCSampler`, please explicitly specify SciPy as your dependency.
The New UI for Optuna Dashboard
We are developing a new UI for Optuna Dashboard that is available as an opt-in feature from the beta release - simply launch the dashboard as usual and click the link to the new UI. Please try it out and share your thoughts with us.
$ pip install "optuna-dashboard>=0.9.0b2"
Feedback Survey: The New UI for Optuna Dashboard
Change Supported Python Versions
We have changed the supported Python versions. Specifically, Python 3.6 has been removed from the supported versions and Python 3.11 has been added. See #3021 and #3964 for more details.
Breaking Changes
- Allow users to call `study.optimize()` in multiple threads (#4068)
- Use all trials in `TPESampler` even when `multivariate=True` (#4079)
- Drop Python 3.6 (#4150)
- Remove `RedisStorage` (#4156)
- Deprecate `set_system_attr` in `Study` and `Trial` (#4188)
- Add a `directions` arg to `storage.create_new_study` (#4189)
- Deprecate `system_attrs` in `Study` class (#4250)
- Deprecate `Trial.system_attrs` property method (#4264)
- Remove `device` argument of `TorchDistributedTrial` (#4266)
New Features
- Add Dask integration (#2023, thanks @jrbourbeau!)
- Add journal-style log storage (#3854)
- Support CMA-ES with margin in `CmaEsSampler` (#4016)
- Add journal redis storage (#4086)
- Add device argument to `BoTorchSampler` (#4101)
- Add the feature to `JournalStorage` of Redis backend to resume from a snapshot (#4102)
- `TorchDistributedTrial` uses `group` as parameter instead of `device` (#4106, thanks @reyoung!)
- Added `user_attrs` to print by Optuna studies in `cli.py` (#4129, thanks @gonzaload!)
- Add `BruteForceSampler` (#4132, thanks @semiexp!)
- Add `__getstate__` and `__setstate__` to `RedisStorage` (#4135, thanks @shu65!)
- Make journal redis storage picklable (#4139, thanks @shu65!)
- Support for `qNoisyExpectedHypervolumeImprovement` acquisition function from BoTorch (Issue#4014) (#4186)
- Show best trial number and value in progress bar (#4205)
Enhancements
- Change the log message format for failed trials (#3857, thanks @erentknn!)
- Move default logic of `get_trial_id_from_study_id_trial_number()` method to `BaseStorage` (#3910)
- Fix the data migration script for v3 release (#4020)
- Convert `search_space` values of `GridSampler` explicitly (#4062)
- Add single exception catch to study `optimize` (#4098)
- Remove scipy dependencies from `TPESampler` (#4105)
- Add validation in `enqueue_trial` (#4126)
- Speed up `tests/samplers_tests/test_nsgaii.py::test_fast_non_dominated_sort_with...
v3.1.0-b0
This is the release note of v3.1.0-b0.
Highlights
CMA-ES with Margin support
[Animation: CMA-ES and CMA-ES with Margin. The animation is from https://github.com/EvoConJP/CMA-ES_with_Margin, which is distributed under the MIT license.]
CMA-ES achieves strong performance for continuous optimization, but there is still room for improvement in mixed-integer search spaces. To address this, we have added support for the "CMA-ES with Margin" algorithm to our `CmaEsSampler`, which makes it more efficient in these cases. You can see the benchmark results here. For more detailed information about CMA-ES with Margin, please refer to the paper "CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization" (arXiv), which has been accepted for presentation at GECCO 2022.
```python
import optuna
from optuna.samplers import CmaEsSampler


def objective(trial):
    x = trial.suggest_float("x", -10, 10, step=0.1)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y


study = optuna.create_study(sampler=CmaEsSampler(with_margin=True))
study.optimize(objective, n_trials=20)
```
Distributed Optimization via NFS
`JournalFileStorage`, a file storage backend based on `JournalStorage`, supports NFS (Network File System) environments. It is the easiest option for users who wish to execute distributed optimization in environments where it is difficult to set up database servers such as MySQL, PostgreSQL or Redis (e.g. #815, #1330, #1457 and #2216).
```python
import optuna
from optuna.storages import JournalStorage, JournalFileStorage


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_float("y", -100, 100)
    return x**2 + y


storage = JournalStorage(JournalFileStorage("./journal.log"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
```
For more information on `JournalFileStorage`, see the blog post "Distributed Optimization via NFS Using Optuna's New Operation-Based Logging Storage" written by @wattlebirdaz.
Dask Integration
`DaskStorage`, a new storage backend based on Dask.distributed, is supported. It enables distributed computing with APIs similar to `concurrent.futures`. An example is shown below (the full example code is available in the optuna-examples repository).
```python
import optuna
from optuna.storages import InMemoryStorage
from optuna.integration import DaskStorage
from distributed import Client, wait


def objective(trial):
    ...


with Client("192.168.1.8:8686") as client:
    study = optuna.create_study(storage=DaskStorage(InMemoryStorage()))
    futures = [
        client.submit(study.optimize, objective, n_trials=10, pure=False)
        for i in range(10)
    ]
    wait(futures)
    print(f"Best params: {study.best_params}")
```
One of the interesting aspects is the availability of `InMemoryStorage`: you don't need to set up database servers for distributed optimization. You still need to set up the Dask.distributed cluster, but that's quite easy, as shown below. See the Quickstart of the Dask.distributed documentation for more details.
```console
$ pip install optuna dask distributed

$ dask-scheduler
INFO - Scheduler at: tcp://192.168.1.8:8686
INFO - Dashboard at:                  :8687
...

$ dask-worker tcp://192.168.1.8:8686
$ dask-worker tcp://192.168.1.8:8686
$ dask-worker tcp://192.168.1.8:8686

$ python dask_simple.py
```
A brand-new Redis storage
We have replaced the Redis storage backend with a `JournalStorage`-based one. The experimental `RedisStorage` class has been removed in v3.1. The following example shows how to use the new `JournalRedisStorage` class.
```python
import optuna
from optuna.storages import JournalStorage, JournalRedisStorage


def objective(trial):
    ...


storage = JournalStorage(JournalRedisStorage("redis://localhost:6379"))
study = optuna.create_study(storage=storage)
study.optimize(objective)
```
Sampler for brute-force search
`BruteForceSampler`, a new sampler for brute-force search, tries all combinations of parameters. In contrast to `GridSampler`, it does not require passing the search space as an argument and works even with branches. This sampler constructs the search space in the define-by-run style, so it works by just adding `sampler=optuna.samplers.BruteForceSampler()`.
```python
import optuna


def objective(trial):
    c = trial.suggest_categorical("c", ["float", "int"])
    if c == "float":
        return trial.suggest_float("x", 1, 3, step=0.5)
    elif c == "int":
        a = trial.suggest_int("a", 1, 3)
        b = trial.suggest_int("b", a, 3)
        return a + b


study = optuna.create_study(sampler=optuna.samplers.BruteForceSampler())
study.optimize(objective)
```
Breaking Changes
- Allow users to call `study.optimize()` in multiple threads (#4068)
- Use all trials in `TPESampler` even when `multivariate=True` (#4079)
- Drop Python 3.6 (#4150)
- Remove `RedisStorage` (#4156)
- Deprecate `set_system_attr` in `Study` and `Trial` (#4188)
- Deprecate `system_attrs` in `Study` class (#4250)
New Features
- Add Dask integration (#2023, thanks @jrbourbeau!)
- Add journal-style log storage (#3854)
- Support CMA-ES with margin in `CmaEsSampler` (#4016)
- Add journal redis storage (#4086)
- Add device argument to `BoTorchSampler` (#4101)
- Add the feature to `JournalStorage` of Redis backend to resume from a snapshot (#4102)
- Added `user_attrs` to print by optuna studies in `cli.py` (#4129, thanks @gonzaload!)
- Add `BruteForceSampler` (#4132, thanks @semiexp!)
- Add `__getstate__` and `__setstate__` to `RedisStorage` (#4135, thanks @shu65!)
- Support pickle in `JournalRedisStorage` (#4139, thanks @shu65!)
- Support for `qNoisyExpectedHypervolumeImprovement` acquisition function from `BoTorch` (Issue#4014) (#4186)
Enhancements
- Change the log message format for failed trials (#3857, thanks @erentknn!)
- Move default logic of `get_trial_id_from_study_id_trial_number()` method to `BaseStorage` (#3910)
- Fix the data migration script for v3 release (#4020)
- Convert `search_space` values of `GridSampler` explicitly (#4062)
- Add single exception catch to study optimize (#4098)
- Add validation in `enqueue_trial` (#4126)
- Speed up `tests/samplers_tests/test_nsgaii.py::test_fast_non_dominated_sort_with_constraints` (#4128, thanks @mist714!)
- Add getstate and setstate to journal storage (#4130, thanks @shu65!)
- Support `None` in slice plot (#4133, thanks @belldandyxtq!)
- Add marker to matplotlib `plot_intermediate_value` (#4134, thanks @belldandyxtq!)
- Cache `study.directions` to reduce the number of `get_study_directions()` calls (#4146)
- Add an in-memory cache in `Trial` class (#4240)
Bug Fixes
- Fix infinite loop bug in `TPESampler` (#3953, thanks @gasin!)
- Fix `GridSampler` (#3957)
- Fix an import error of `sqlalchemy.orm.declarative_base` (#3967)
- Skip to add `intermediate_value_type` and `value_type` columns if exists (#4015)
- Fix duplicated sampling of `SkoptSampler` (#4023)
- Avoid parse errors of `datetime.isoformat` strings (#4025)
- Fix a concurrency bug of JournalStorage `set_trial_state_values` (#4033)
- Specify object type to numpy array init to avoid unintended str cast (#4035)
- Make `TPESampler` reproducible (#4056)
- Fix bugs in `constant_liar` option (#4073)
- Add a flush to `JournalFileStorage.append_logs` (#4076)
- Add a lock to `MLflowCallback` (#4097)
- Reject deprecated distributions in `OptunaSearchCV` (#4120)
- Stop using hash function in `_get_bracket_id` in `HyperbandPruner` (#4131, thanks @zaburo-ch!)
- Validation for the parameter enqueued in `to_internal_repr` of `FloatDistribution` and `IntDistribution` (#4137)
- Fix `PartialFixedSampler` to handle `None` correctly (#4147, thanks @halucinor!)
- Fix the bug of JournalFileStorage on Windows (#4151)
- Fix CmaEs system attribution key (#4184)
Installation
- Replace `thop` with `fvcore` (#3906)
- Use the latest stable scipy (#3959, thanks @gasin!)
- Remove GPyTorch version constraint (#3986)
- Make typing_extensions optional (#3990)
- Add version constraint on `importlib-metadata` (#4036)
- Add a version constraint of `matplotlib` (#4044)
Documentation
- Update cli tutorial (#3902)
- Replace `thop` with `fvcore` (#3906)
- Slightly improve docs of `FrozenTrial` (#3943)
- Refine docs in `BaseStorage` (#3948)
- Remove "Edit on GitHub" button from readthedocs (#3952)
- Mention restoring sampler in saving/resuming tutorial (#3992)
- Use `log_loss` instead of deprecated `log` since `sklearn` 1.1 (#3993)
- Fix script path in benchmarks/README.md (#4021)
- Ignore `ConvergenceWarning` in the ask-and-tell tutorial (#4032)
- Update docs to let users know the concurrency problem on SQLite3 (#4034)
- Fix the time complexity of `NSGAIISampler` (#4045)
- Fix sampler comparison table (#4082)
- Add `BruteForceSampler` in the samplers' list (#4152)
- Remove markup from NaN in FAQ (#4155)
- Remove the document of the `multi_objective` module (#4167)
- Fix a typo in `QMCSampler` (#4179)
- Introduce Optuna Dashboard in tutorial docs (#4226)
- Remove...