From 90a3f8878e64ab6342ea516a163c80277738fde4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sebastian=20Miclu=C8=9Ba-C=C3=A2mpeanu?= Date: Tue, 25 Nov 2025 04:18:39 +0200 Subject: [PATCH 1/8] Fix typo in OptimizationLBFGSB docs Co-authored-by: Claude --- docs/src/getting_started.md | 2 +- docs/src/tutorials/certification.md | 2 +- docs/src/tutorials/remakecomposition.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/src/getting_started.md b/docs/src/getting_started.md index 266c61a60..0c4164824 100644 --- a/docs/src/getting_started.md +++ b/docs/src/getting_started.md @@ -22,7 +22,7 @@ p = [1.0, 100.0] optf = OptimizationFunction(rosenbrock, AutoZygote()) prob = OptimizationProblem(optf, u0, p) -sol = solve(prob, OptimizationLBFGSB.LBFGS()) +sol = solve(prob, OptimizationLBFGSB.LBFGSB()) ``` ```@example intro diff --git a/docs/src/tutorials/certification.md b/docs/src/tutorials/certification.md index 133728c21..356e10c02 100644 --- a/docs/src/tutorials/certification.md +++ b/docs/src/tutorials/certification.md @@ -16,7 +16,7 @@ end optf = OptimizationFunction(f, Optimization.AutoForwardDiff()) prob = OptimizationProblem(optf, [0.4], structural_analysis = true) -sol = solve(prob, LBFGS(), maxiters = 1000) +sol = solve(prob, OptimizationLBFGSB.LBFGSB(), maxiters = 1000) ``` The result can be accessed as the `analysis_results` field of the solution. diff --git a/docs/src/tutorials/remakecomposition.md b/docs/src/tutorials/remakecomposition.md index edeb79977..b46743d4c 100644 --- a/docs/src/tutorials/remakecomposition.md +++ b/docs/src/tutorials/remakecomposition.md @@ -47,7 +47,7 @@ This is a good start can we converge to the global optimum? ```@example polyalg prob = remake(prob, u0 = res1.minimizer) -res2 = solve(prob, LBFGS(), maxiters = 100) +res2 = solve(prob, OptimizationLBFGSB.LBFGSB(), maxiters = 100) @show res2.objective ``` From 3272807b86ea6e9ed4a510e6f593d7812e0af6f0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sebastian=20Miclu=C8=9Ba-C=C3=A2mpeanu?= Date: Tue, 25 Nov 2025 04:20:11 +0200 Subject: [PATCH 2/8] Split docs for LBFGSB and Sophia Co-authored-by: Claude --- docs/src/optimization_packages/lbfgsb.md | 52 ++++++++++++ .../src/optimization_packages/optimization.md | 80 ++----------------- docs/src/optimization_packages/sophia.md | 52 ++++++++++++ 3 files changed, 112 insertions(+), 72 deletions(-) create mode 100644 docs/src/optimization_packages/lbfgsb.md create mode 100644 docs/src/optimization_packages/sophia.md diff --git a/docs/src/optimization_packages/lbfgsb.md b/docs/src/optimization_packages/lbfgsb.md new file mode 100644 index 000000000..19c627b18 --- /dev/null +++ b/docs/src/optimization_packages/lbfgsb.md @@ -0,0 +1,52 @@ +# OptimizationLBFGSB.jl + +[`OptimizationLBFGSB.jl`](https://github.com/SciML/Optimization.jl/tree/master/lib/OptimizationLBFGSB) is a package that wraps the [L-BFGS-B](https://users.iems.northwestern.edu/%7Enocedal/lbfgsb.html) fortran routine via the [LBFGSB.jl](https://github.com/Gnimuc/LBFGSB.jl/) package. + +## Installation + +To use this package, install the `OptimizationLBFGSB` package: + +```julia +using Pkg +Pkg.add("OptimizationLBFGSB") +``` + +## Methods + + - `LBFGSB`: The popular quasi-Newton method that leverages limited memory BFGS approximation of the inverse of the Hessian. It directly supports box-constraints. + + This can also handle arbitrary non-linear constraints through an Augmented Lagrangian method with bounds constraints described in 17.4 of Numerical Optimization by Nocedal and Wright. 
Thus serving as a general-purpose nonlinear optimization solver. + +```@docs +OptimizationLBFGSB.LBFGSB +``` + +## Examples + +### Unconstrained rosenbrock problem + +```@example LBFGSB +using OptimizationBase, OptimizationLBFGSB, ADTypes, Zygote + +rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 +x0 = zeros(2) +p = [1.0, 100.0] + +optf = OptimizationFunction(rosenbrock, ADTypes.AutoZygote()) +prob = OptimizationProblem(optf, x0, p) +sol = solve(prob, LBFGSB()) +``` + +### With nonlinear and bounds constraints + +```@example LBFGSB +function con2_c(res, x, p) + res .= [x[1]^2 + x[2]^2, (x[2] * sin(x[1]) + x[1]) - 5] +end + +optf = OptimizationFunction(rosenbrock, ADTypes.AutoZygote(), cons = con2_c) +prob = OptimizationProblem(optf, x0, p, lcons = [1.0, -Inf], + ucons = [1.0, 0.0], lb = [-1.0, -1.0], + ub = [1.0, 1.0]) +res = solve(prob, LBFGSB(), maxiters = 100) +``` diff --git a/docs/src/optimization_packages/optimization.md b/docs/src/optimization_packages/optimization.md index 03c3381b2..34b55aad6 100644 --- a/docs/src/optimization_packages/optimization.md +++ b/docs/src/optimization_packages/optimization.md @@ -1,78 +1,14 @@ # Optimization.jl -There are some solvers that are available in the Optimization.jl package directly without the need to install any of the solver wrappers. +The Optimization.jl package provides the common interface for defining and solving optimization problems. All optimization solvers are provided through separate wrapper packages that need to be installed independently. -## Methods +For a list of available solver packages, see the other pages in this section of the documentation. - - `LBFGS`: The popular quasi-Newton method that leverages limited memory BFGS approximation of the inverse of the Hessian. Through a wrapper over the [L-BFGS-B](https://users.iems.northwestern.edu/%7Enocedal/lbfgsb.html) fortran routine accessed from the [LBFGSB.jl](https://github.com/Gnimuc/LBFGSB.jl/) package. It directly supports box-constraints. +Some commonly used solver packages include: - This can also handle arbitrary non-linear constraints through a Augmented Lagrangian method with bounds constraints described in 17.4 of Numerical Optimization by Nocedal and Wright. Thus serving as a general-purpose nonlinear optimization solver available directly in Optimization.jl. 
+- [OptimizationLBFGSB.jl](@ref lbfgsb) - L-BFGS-B quasi-Newton method with box constraints +- [OptimizationOptimJL.jl](@ref optim) - Wrappers for Optim.jl solvers +- [OptimizationMOI.jl](@ref mathoptinterface) - MathOptInterface solvers +- [OptimizationSophia.jl](@ref sophia) - Sophia optimizer for neural network training -```@docs -Optimization.Sophia -``` - -## Examples - -### Unconstrained rosenbrock problem - -```@example L-BFGS - -using Optimization, OptimizationLBFGSB, Zygote - -rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 -x0 = zeros(2) -p = [1.0, 100.0] - -optf = OptimizationFunction(rosenbrock, AutoZygote()) -prob = Optimization.OptimizationProblem(optf, x0, p) -sol = solve(prob, LBFGS()) -``` - -### With nonlinear and bounds constraints - -```@example L-BFGS - -function con2_c(res, x, p) - res .= [x[1]^2 + x[2]^2, (x[2] * sin(x[1]) + x[1]) - 5] -end - -optf = OptimizationFunction(rosenbrock, AutoZygote(), cons = con2_c) -prob = OptimizationProblem(optf, x0, p, lcons = [1.0, -Inf], - ucons = [1.0, 0.0], lb = [-1.0, -1.0], - ub = [1.0, 1.0]) -res = solve(prob, LBFGS(), maxiters = 100) -``` - -### Train NN with Sophia - -```@example Sophia - -using Optimization, Lux, Zygote, MLUtils, Statistics, Plots, Random, ComponentArrays - -x = rand(10000) -y = sin.(x) -data = MLUtils.DataLoader((x, y), batchsize = 100) - -# Define the neural network -model = Chain(Dense(1, 32, tanh), Dense(32, 1)) -ps, st = Lux.setup(Random.default_rng(), model) -ps_ca = ComponentArray(ps) -smodel = StatefulLuxLayer{true}(model, nothing, st) - -function callback(state, l) - state.iter % 25 == 1 && @show "Iteration: $(state.iter), Loss: $l" - return l < 1e-1 ## Terminate if loss is small -end - -function loss(ps, data) - x_batch, y_batch = data - ypred = [smodel([x_batch[i]], ps)[1] for i in eachindex(x_batch)] - return sum(abs2, ypred .- y_batch) -end - -optf = OptimizationFunction(loss, AutoZygote()) -prob = OptimizationProblem(optf, ps_ca, data) - -res = Optimization.solve(prob, Optimization.Sophia(), callback = callback, epochs = 100) -``` +For examples of using these solvers, please refer to their respective documentation pages. diff --git a/docs/src/optimization_packages/sophia.md b/docs/src/optimization_packages/sophia.md new file mode 100644 index 000000000..37e4a9b62 --- /dev/null +++ b/docs/src/optimization_packages/sophia.md @@ -0,0 +1,52 @@ +# OptimizationSophia.jl + +[`OptimizationSophia.jl`](https://github.com/SciML/Optimization.jl/tree/master/lib/OptimizationSophia) is a package that provides the Sophia optimizer for neural network training. 
+ +## Installation + +To use this package, install the `OptimizationSophia` package: + +```julia +using Pkg +Pkg.add("OptimizationSophia") +``` + +## Methods + +```@docs +OptimizationSophia.Sophia +``` + +## Examples + +### Train NN with Sophia + +```@example Sophia +using OptimizationBase, OptimizationSophia, Lux, ADTypes, Zygote, MLUtils, Statistics, Random, ComponentArrays + +x = rand(10000) +y = sin.(x) +data = MLUtils.DataLoader((x, y), batchsize = 100) + +# Define the neural network +model = Chain(Dense(1, 32, tanh), Dense(32, 1)) +ps, st = Lux.setup(Random.default_rng(), model) +ps_ca = ComponentArray(ps) +smodel = StatefulLuxLayer{true}(model, nothing, st) + +function callback(state, l) + state.iter % 25 == 1 && @show "Iteration: $(state.iter), Loss: $l" + return l < 1e-1 ## Terminate if loss is small +end + +function loss(ps, data) + x_batch, y_batch = data + ypred = [smodel([x_batch[i]], ps)[1] for i in eachindex(x_batch)] + return sum(abs2, ypred .- y_batch) +end + +optf = OptimizationFunction(loss, ADTypes.AutoZygote()) +prob = OptimizationProblem(optf, ps_ca, data) + +res = solve(prob, OptimizationSophia.Sophia(), callback = callback, epochs = 100) +``` From 5e27dd564c7afd0b693596ad617f85d23f94c324 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sebastian=20Miclu=C8=9Ba-C=C3=A2mpeanu?= Date: Tue, 25 Nov 2025 04:20:55 +0200 Subject: [PATCH 3/8] Add missing packages to the docs env Co-authored-by: Claude --- docs/Project.toml | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/docs/Project.toml b/docs/Project.toml index 83d980220..bca877e13 100644 --- a/docs/Project.toml +++ b/docs/Project.toml @@ -2,6 +2,7 @@ ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b" AmplNLWriter = "7c4d4715-977e-5154-bfe0-e096adeac482" ComponentArrays = "b0b7db55-cfe3-40fc-9ded-d10e2dbeff66" +DifferentiationInterface = "a0c0ee7d-e4b9-4e03-894e-1c5f64a51d63" Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4" FiniteDiff = "6a86dc24-6348-571c-b903-95158fe2bd41" ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210" @@ -19,6 +20,7 @@ NLPModels = "a4795742-8479-5a88-8948-cc11e1c8c1a6" NLPModelsTest = "7998695d-6960-4d3a-85c4-e1bceb8cd856" NLopt = "76087f3c-5699-56af-9a33-bf431cd00edd" Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba" +OptimizationAuglag = "2ea93f80-9333-43a1-a68d-1f53b957a421" OptimizationBBO = "3e6eede4-6085-4f62-9a71-46d9bc1eb92b" OptimizationBase = "bca83a33-5cc9-4baa-983d-23429ab6bcbb" OptimizationCMAEvolutionStrategy = "bd407f91-200f-4536-9381-e4ba712f53f8" @@ -27,18 +29,26 @@ OptimizationGCMAES = "6f0a0517-dbc2-4a7a-8a20-99ae7f27e911" OptimizationIpopt = "43fad042-7963-4b32-ab19-e2a4f9a67124" OptimizationLBFGSB = "22f7324a-a79d-40f2-bebe-3af60c77bd15" OptimizationMOI = "fd9f6733-72f4-499f-8506-86b2bdd0dea1" +OptimizationMadNLP = "5d9c809f-c847-4062-9fba-1793bbfef577" OptimizationManopt = "e57b7fff-7ee7-4550-b4f0-90e9476e9fb6" OptimizationMetaheuristics = "3aafef2f-86ae-4776-b337-85a36adf0b55" +OptimizationMultistartOptimization = "e4316d97-8bbb-4fd3-a7d8-3851d2a72823" OptimizationNLPModels = "064b21be-54cf-11ef-1646-cdfee32b588f" OptimizationNLopt = "4e6fcdb7-1186-4e1f-a706-475e75c168bb" OptimizationNOMAD = "2cab0595-8222-4775-b714-9828e6a9e01b" +OptimizationODE = "dfa73e59-e644-4d8a-bf84-188d7ecb34e4" OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e" OptimizationOptimisers = "42dfb2eb-d2b4-4451-abcd-913932933ac1" OptimizationPRIMA = "72f8369c-a2ea-4298-9126-56167ce9cbc2" OptimizationPolyalgorithms = "500b13db-7e66-49ce-bda4-eed966be6282" 
+OptimizationPyCMA = "fb0822aa-1fe5-41d8-99a6-e7bf6c238d3b" +OptimizationQuadDIRECT = "842ac81e-713d-465f-80f7-84eddaced298" +OptimizationSciPy = "cce07bd8-c79b-4b00-aee8-8db9cce22837" +OptimizationSophia = "892fee11-dca1-40d6-b698-84ba0d87399a" OptimizationSpeedMapping = "3d669222-0d7d-4eb9-8a9f-d8528b0d9b91" OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed" Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80" +QuadDIRECT = "dae52e8d-d666-5120-a592-9e15c33b8d7a" Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c" ReverseDiff = "37e2e3b7-166d-5795-8a7a-e32c996b4267" SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462" @@ -50,6 +60,7 @@ Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f" [sources] Optimization = {path = ".."} +OptimizationAuglag = {path = "../lib/OptimizationAuglag"} OptimizationBBO = {path = "../lib/OptimizationBBO"} OptimizationBase = {path = "../lib/OptimizationBase"} OptimizationCMAEvolutionStrategy = {path = "../lib/OptimizationCMAEvolutionStrategy"} @@ -58,15 +69,22 @@ OptimizationGCMAES = {path = "../lib/OptimizationGCMAES"} OptimizationIpopt = {path = "../lib/OptimizationIpopt"} OptimizationLBFGSB = {path = "../lib/OptimizationLBFGSB"} OptimizationMOI = {path = "../lib/OptimizationMOI"} +OptimizationMadNLP = {path = "../lib/OptimizationMadNLP"} OptimizationManopt = {path = "../lib/OptimizationManopt"} OptimizationMetaheuristics = {path = "../lib/OptimizationMetaheuristics"} +OptimizationMultistartOptimization = {path = "../lib/OptimizationMultistartOptimization"} OptimizationNLPModels = {path = "../lib/OptimizationNLPModels"} OptimizationNLopt = {path = "../lib/OptimizationNLopt"} OptimizationNOMAD = {path = "../lib/OptimizationNOMAD"} +OptimizationODE = {path = "../lib/OptimizationODE"} OptimizationOptimJL = {path = "../lib/OptimizationOptimJL"} OptimizationOptimisers = {path = "../lib/OptimizationOptimisers"} OptimizationPRIMA = {path = "../lib/OptimizationPRIMA"} OptimizationPolyalgorithms = {path = "../lib/OptimizationPolyalgorithms"} +OptimizationPyCMA = {path = "../lib/OptimizationPyCMA"} +OptimizationQuadDIRECT = {path = "../lib/OptimizationQuadDIRECT"} +OptimizationSciPy = {path = "../lib/OptimizationSciPy"} +OptimizationSophia = {path = "../lib/OptimizationSophia"} OptimizationSpeedMapping = {path = "../lib/OptimizationSpeedMapping"} [compat] @@ -89,6 +107,7 @@ NLPModels = "0.21" NLPModelsTest = "0.10" NLopt = "0.6, 1" Optimization = "5" +OptimizationAuglag = "1" OptimizationBBO = "0.4" OptimizationBase = "4" OptimizationCMAEvolutionStrategy = "0.3" @@ -96,15 +115,21 @@ OptimizationEvolutionary = "0.4" OptimizationGCMAES = "0.3" OptimizationIpopt = "0.2" OptimizationMOI = "0.5" +OptimizationMadNLP = "0.3" OptimizationManopt = "1" OptimizationMetaheuristics = "0.3" +OptimizationMultistartOptimization = "0.3" OptimizationNLPModels = "0.0.2, 1" OptimizationNLopt = "0.3" OptimizationNOMAD = "0.3" +OptimizationODE = "0.1" OptimizationOptimJL = "0.4" OptimizationOptimisers = "0.3" OptimizationPRIMA = "0.3" OptimizationPolyalgorithms = "0.3" +OptimizationQuadDIRECT = "0.3" +OptimizationSciPy = "0.4" +OptimizationSophia = "1" OptimizationSpeedMapping = "0.2" OrdinaryDiffEq = "6" Plots = "1" From 1b540a0444ff21681420afc45140fa4be4738199 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sebastian=20Miclu=C8=9Ba-C=C3=A2mpeanu?= Date: Tue, 25 Nov 2025 04:27:47 +0200 Subject: [PATCH 4/8] Fix namespacing issues Co-authored-by: Claude --- docs/make.jl | 11 +++-- docs/src/API/ad.md | 16 +++---- docs/src/API/optimization_state.md | 2 +- docs/src/examples/rosenbrock.md | 45 ++++++++++--------- 
docs/src/getting_started.md | 10 ++--- .../optimization_packages/blackboxoptim.md | 2 +- .../cmaevolutionstrategy.md | 2 +- .../src/optimization_packages/evolutionary.md | 2 +- docs/src/optimization_packages/gcmaes.md | 7 +-- docs/src/optimization_packages/ipopt.md | 2 +- docs/src/optimization_packages/manopt.md | 8 ++-- .../optimization_packages/mathoptinterface.md | 8 ++-- .../optimization_packages/metaheuristics.md | 2 +- .../multistartoptimization.md | 10 ++--- docs/src/optimization_packages/nlopt.md | 16 +++---- docs/src/optimization_packages/optim.md | 38 ++++++++-------- docs/src/optimization_packages/polyopt.md | 4 +- docs/src/optimization_packages/prima.md | 12 ++--- docs/src/optimization_packages/quaddirect.md | 2 +- docs/src/optimization_packages/scipy.md | 4 +- .../src/optimization_packages/speedmapping.md | 4 +- docs/src/tutorials/certification.md | 10 ++--- docs/src/tutorials/constraints.md | 7 +-- docs/src/tutorials/ensemble.md | 11 ++--- docs/src/tutorials/linearandinteger.md | 10 ++--- docs/src/tutorials/minibatch.md | 8 ++-- docs/src/tutorials/remakecomposition.md | 6 +-- docs/src/tutorials/reusage_interface.md | 4 +- docs/src/tutorials/symbolic.md | 2 +- .../src/OptimizationSophia.jl | 2 +- 30 files changed, 135 insertions(+), 132 deletions(-) diff --git a/docs/make.jl b/docs/make.jl index b5e3b232c..801862da1 100644 --- a/docs/make.jl +++ b/docs/make.jl @@ -1,16 +1,15 @@ using Documenter, Optimization -using FiniteDiff, ForwardDiff, ModelingToolkit, ReverseDiff, Tracker, Zygote -using ADTypes +using OptimizationLBFGSB, OptimizationSophia -cp("./docs/Manifest.toml", "./docs/src/assets/Manifest.toml", force = true) -cp("./docs/Project.toml", "./docs/src/assets/Project.toml", force = true) +cp(joinpath(@__DIR__, "Manifest.toml"), joinpath(@__DIR__, "src/assets/Manifest.toml"), force = true) +cp(joinpath(@__DIR__, "Project.toml"), joinpath(@__DIR__, "src/assets/Project.toml"), force = true) include("pages.jl") makedocs(sitename = "Optimization.jl", authors = "Chris Rackauckas, Vaibhav Kumar Dixit et al.", - modules = [Optimization, Optimization.SciMLBase, Optimization.OptimizationBase, - FiniteDiff, ForwardDiff, ModelingToolkit, ReverseDiff, Tracker, Zygote, ADTypes], + modules = [Optimization, Optimization.SciMLBase, Optimization.OptimizationBase, Optimization.ADTypes, + OptimizationLBFGSB, OptimizationSophia], clean = true, doctest = false, linkcheck = true, warnonly = [:missing_docs, :cross_references], format = Documenter.HTML(assets = ["assets/favicon.ico"], diff --git a/docs/src/API/ad.md b/docs/src/API/ad.md index 7fc32ebe5..f67090621 100644 --- a/docs/src/API/ad.md +++ b/docs/src/API/ad.md @@ -13,15 +13,15 @@ The choices for the auto-AD fill-ins with quick descriptions are: ## Automatic Differentiation Choice API -The following sections describe the Auto-AD choices in detail. +The following sections describe the Auto-AD choices in detail. These types are defined in the [ADTypes.jl](https://github.com/SciML/ADTypes.jl) package. 
```@docs -OptimizationBase.AutoForwardDiff -OptimizationBase.AutoFiniteDiff -OptimizationBase.AutoReverseDiff -OptimizationBase.AutoZygote -OptimizationBase.AutoTracker -OptimizationBase.AutoSymbolics -OptimizationBase.AutoEnzyme +ADTypes.AutoForwardDiff +ADTypes.AutoFiniteDiff +ADTypes.AutoReverseDiff +ADTypes.AutoZygote +ADTypes.AutoTracker +ADTypes.AutoSymbolics +ADTypes.AutoEnzyme ADTypes.AutoMooncake ``` diff --git a/docs/src/API/optimization_state.md b/docs/src/API/optimization_state.md index a758f79a2..3dcef061d 100644 --- a/docs/src/API/optimization_state.md +++ b/docs/src/API/optimization_state.md @@ -1,5 +1,5 @@ # [OptimizationState](@id optstate) ```@docs -Optimization.OptimizationState +OptimizationBase.OptimizationState ``` diff --git a/docs/src/examples/rosenbrock.md b/docs/src/examples/rosenbrock.md index 09d8d697a..380ba3d93 100644 --- a/docs/src/examples/rosenbrock.md +++ b/docs/src/examples/rosenbrock.md @@ -41,15 +41,16 @@ An optimization problem can now be defined and solved to estimate the values for ```@example rosenbrock # Define the problem to solve -using Optimization, ForwardDiff, Zygote +using SciMLBase, OptimizationBase +using ADTypes, ForwardDiff, Zygote rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) _p = [1.0, 100.0] -f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) +f = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) l1 = rosenbrock(x0, _p) -prob = OptimizationProblem(f, x0, _p) +prob = SciMLBase.OptimizationProblem(f, x0, _p) ``` ## Optim.jl Solvers @@ -59,19 +60,19 @@ prob = OptimizationProblem(f, x0, _p) ```@example rosenbrock using OptimizationOptimJL sol = solve(prob, SimulatedAnnealing()) -prob = OptimizationProblem(f, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8]) +prob = SciMLBase.OptimizationProblem(f, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8]) sol = solve(prob, SAMIN()) l1 = rosenbrock(x0, _p) -prob = OptimizationProblem(rosenbrock, x0, _p) +prob = SciMLBase.OptimizationProblem(rosenbrock, x0, _p) sol = solve(prob, NelderMead()) ``` ### Now a gradient-based optimizer with forward-mode automatic differentiation ```@example rosenbrock -optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) -prob = OptimizationProblem(optf, x0, _p) +optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) +prob = SciMLBase.OptimizationProblem(optf, x0, _p) sol = solve(prob, BFGS()) ``` @@ -91,19 +92,19 @@ sol = solve(prob, Optim.KrylovTrustRegion()) ```@example rosenbrock cons = (res, x, p) -> res .= [x[1]^2 + x[2]^2] -optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons) +optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff(); cons = cons) -prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf]) +prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf]) sol = solve(prob, IPNewton()) # Note that -Inf < x[1]^2 + x[2]^2 < Inf is always true -prob = OptimizationProblem(optf, x0, _p, lcons = [-5.0], ucons = [10.0]) +prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [-5.0], ucons = [10.0]) sol = solve(prob, IPNewton()) # Again, -5.0 < x[1]^2 + x[2]^2 < 10.0 -prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf], +prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf], lb = [-500.0, -500.0], ub = [50.0, 50.0]) sol = solve(prob, IPNewton()) -prob = OptimizationProblem(optf, x0, _p, lcons = [0.5], ucons = [0.5], +prob = 
SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [0.5], ucons = [0.5], lb = [-500.0, -500.0], ub = [50.0, 50.0]) sol = solve(prob, IPNewton()) @@ -118,8 +119,8 @@ function con_c(res, x, p) res .= [x[1]^2 + x[2]^2] end -optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = con_c) -prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [0.25^2]) +optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff(); cons = con_c) +prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [0.25^2]) sol = solve(prob, IPNewton()) # -Inf < cons_circ(sol.u, _p) = 0.25^2 ``` @@ -139,8 +140,8 @@ function con2_c(res, x, p) res .= [x[1]^2 + x[2]^2, x[2] * sin(x[1]) - x[1]] end -optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote(); cons = con2_c) -prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf, -Inf], ucons = [100.0, 100.0]) +optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoZygote(); cons = con2_c) +prob = SciMLBase.OptimizationProblem(optf, x0, _p, lcons = [-Inf, -Inf], ucons = [100.0, 100.0]) sol = solve(prob, Ipopt.Optimizer()) ``` @@ -148,8 +149,8 @@ sol = solve(prob, Ipopt.Optimizer()) ```@example rosenbrock import OptimizationOptimisers -optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote()) -prob = OptimizationProblem(optf, x0, _p) +optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoZygote()) +prob = SciMLBase.OptimizationProblem(optf, x0, _p) sol = solve(prob, OptimizationOptimisers.Adam(0.05), maxiters = 1000, progress = false) ``` @@ -164,8 +165,8 @@ sol = solve(prob, CMAEvolutionStrategyOpt()) ```@example rosenbrock using OptimizationNLopt, ModelingToolkit -optf = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics()) -prob = OptimizationProblem(optf, x0, _p) +optf = SciMLBase.OptimizationFunction(rosenbrock, ADTypes.AutoSymbolics()) +prob = SciMLBase.OptimizationProblem(optf, x0, _p) sol = solve(prob, Opt(:LN_BOBYQA, 2)) sol = solve(prob, Opt(:LD_LBFGS, 2)) @@ -174,7 +175,7 @@ sol = solve(prob, Opt(:LD_LBFGS, 2)) ### Add some box constraints and solve with a few NLopt.jl methods ```@example rosenbrock -prob = OptimizationProblem(optf, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8]) +prob = SciMLBase.OptimizationProblem(optf, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8]) sol = solve(prob, Opt(:LD_LBFGS, 2)) sol = solve(prob, Opt(:G_MLSL_LDS, 2), local_method = Opt(:LD_LBFGS, 2), maxiters = 10000) #a global optimizer with random starts of local optimization ``` @@ -183,7 +184,7 @@ sol = solve(prob, Opt(:G_MLSL_LDS, 2), local_method = Opt(:LD_LBFGS, 2), maxiter ```@example rosenbrock using OptimizationBBO -prob = Optimization.OptimizationProblem(rosenbrock, [0.0, 0.3], _p, lb = [-1.0, 0.2], +prob = SciMLBase.OptimizationProblem(rosenbrock, [0.0, 0.3], _p, lb = [-1.0, 0.2], ub = [0.8, 0.43]) sol = solve(prob, BBO_adaptive_de_rand_1_bin()) # -1.0 ≤ x[1] ≤ 0.8, 0.2 ≤ x[2] ≤ 0.43 ``` diff --git a/docs/src/getting_started.md b/docs/src/getting_started.md index 0c4164824..3b40e8723 100644 --- a/docs/src/getting_started.md +++ b/docs/src/getting_started.md @@ -14,12 +14,12 @@ The simplest copy-pasteable code using a quasi-Newton method (LBFGS) to solve th ```@example intro # Import the package and define the problem to optimize -using Optimization, OptimizationLBFGSB, Zygote +using OptimizationBase, OptimizationLBFGSB, ADTypes, Zygote rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2 u0 = zeros(2) p = [1.0, 100.0] -optf = OptimizationFunction(rosenbrock, AutoZygote()) 
+optf = OptimizationFunction(rosenbrock, ADTypes.AutoZygote()) prob = OptimizationProblem(optf, u0, p) sol = solve(prob, OptimizationLBFGSB.LBFGSB()) @@ -131,8 +131,8 @@ automatically construct the derivative functions using ForwardDiff.jl. This looks like: ```@example intro -using ForwardDiff -optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) +using ForwardDiff, ADTypes +optf = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) prob = OptimizationProblem(optf, u0, p) sol = solve(prob, OptimizationOptimJL.BFGS()) ``` @@ -155,7 +155,7 @@ We can demonstrate this via: ```@example intro using Zygote -optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote()) +optf = OptimizationFunction(rosenbrock, ADTypes.AutoZygote()) prob = OptimizationProblem(optf, u0, p) sol = solve(prob, OptimizationOptimJL.BFGS()) ``` diff --git a/docs/src/optimization_packages/blackboxoptim.md b/docs/src/optimization_packages/blackboxoptim.md index ca5b2385b..3b0356943 100644 --- a/docs/src/optimization_packages/blackboxoptim.md +++ b/docs/src/optimization_packages/blackboxoptim.md @@ -63,7 +63,7 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] f = OptimizationFunction(rosenbrock) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(), maxiters = 100000, maxtime = 1000.0) ``` diff --git a/docs/src/optimization_packages/cmaevolutionstrategy.md b/docs/src/optimization_packages/cmaevolutionstrategy.md index c043d960c..785140e1b 100644 --- a/docs/src/optimization_packages/cmaevolutionstrategy.md +++ b/docs/src/optimization_packages/cmaevolutionstrategy.md @@ -30,6 +30,6 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] f = OptimizationFunction(rosenbrock) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, CMAEvolutionStrategyOpt()) ``` diff --git a/docs/src/optimization_packages/evolutionary.md b/docs/src/optimization_packages/evolutionary.md index 9fa582c74..6be2e1621 100644 --- a/docs/src/optimization_packages/evolutionary.md +++ b/docs/src/optimization_packages/evolutionary.md @@ -38,6 +38,6 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] f = OptimizationFunction(rosenbrock) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, Evolutionary.CMAES(μ = 40, λ = 100)) ``` diff --git a/docs/src/optimization_packages/gcmaes.md b/docs/src/optimization_packages/gcmaes.md index e7a1922a1..54d1fcdeb 100644 --- a/docs/src/optimization_packages/gcmaes.md +++ b/docs/src/optimization_packages/gcmaes.md @@ -30,14 +30,15 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] f = OptimizationFunction(rosenbrock) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, GCMAESOpt()) ``` We can also utilize the gradient information of the optimization problem to aid the optimization as follows: ```@example GCMAES -f = OptimizationFunction(rosenbrock, 
Optimization.AutoForwardDiff()) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +using ADTypes, ForwardDiff +f = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, GCMAESOpt()) ``` diff --git a/docs/src/optimization_packages/ipopt.md b/docs/src/optimization_packages/ipopt.md index 0e15b5a33..d17a5bd9a 100644 --- a/docs/src/optimization_packages/ipopt.md +++ b/docs/src/optimization_packages/ipopt.md @@ -45,7 +45,7 @@ The algorithm supports: ### Basic Usage ```julia -using Optimization, OptimizationIpopt +using OptimizationBase, OptimizationIpopt # Create optimizer with default settings opt = IpoptOptimizer() diff --git a/docs/src/optimization_packages/manopt.md b/docs/src/optimization_packages/manopt.md index 422337e15..f80c3c0f7 100644 --- a/docs/src/optimization_packages/manopt.md +++ b/docs/src/optimization_packages/manopt.md @@ -39,7 +39,7 @@ function or `OptimizationProblem`. The Rosenbrock function on the Euclidean manifold can be optimized using the `GradientDescentOptimizer` as follows: ```@example Manopt -using Optimization, OptimizationManopt, Manifolds, LinearAlgebra +using Optimization, OptimizationManopt, Manifolds, LinearAlgebra, ADTypes, Zygote rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] @@ -49,7 +49,7 @@ R2 = Euclidean(2) stepsize = Manopt.ArmijoLinesearch(R2) opt = OptimizationManopt.GradientDescentOptimizer() -optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote()) +optf = OptimizationFunction(rosenbrock, ADTypes.AutoZygote()) prob = OptimizationProblem( optf, x0, p; manifold = R2, stepsize = stepsize) @@ -67,7 +67,7 @@ q = Matrix{Float64}(I, 5, 5) .+ 2.0 data2 = [exp(M, q, σ * rand(M; vector_at = q)) for i in 1:m] f(x, p = nothing) = sum(distance(M, x, data2[i])^2 for i in 1:m) -optf = OptimizationFunction(f, Optimization.AutoZygote()) +optf = OptimizationFunction(f, ADTypes.AutoZygote()) prob = OptimizationProblem(optf, data2[1]; manifold = M, maxiters = 1000) function closed_form_solution!(M::SymmetricPositiveDefinite, q, L, U, p, X) @@ -89,7 +89,7 @@ N = m U = mean(data2) L = inv(sum(1 / N * inv(matrix) for matrix in data2)) -optf = OptimizationFunction(f, Optimization.AutoZygote()) +optf = OptimizationFunction(f, ADTypes.AutoZygote()) prob = OptimizationProblem(optf, U; manifold = M, maxiters = 1000) sol = Optimization.solve( diff --git a/docs/src/optimization_packages/mathoptinterface.md b/docs/src/optimization_packages/mathoptinterface.md index 1c26636a6..d5803a262 100644 --- a/docs/src/optimization_packages/mathoptinterface.md +++ b/docs/src/optimization_packages/mathoptinterface.md @@ -69,13 +69,13 @@ sol = solve(prob, Ipopt.Optimizer(); option_name = option_value, ...) detail. 
```@example MOI -using Optimization, OptimizationMOI, Juniper, Ipopt +using Optimization, OptimizationMOI, Juniper, Ipopt, ADTypes, ForwardDiff rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) _p = [1.0, 100.0] -f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) -prob = Optimization.OptimizationProblem(f, x0, _p) +f = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) +prob = SciMLBase.OptimizationProblem(f, x0, _p) opt = OptimizationMOI.MOI.OptimizerWithAttributes(Juniper.Optimizer, "nl_solver" => OptimizationMOI.MOI.OptimizerWithAttributes(Ipopt.Optimizer, @@ -105,7 +105,7 @@ W = 4.0 u0 = [0.0, 0.0, 0.0, 1.0] optfun = OptimizationFunction((u, p) -> -v'u, cons = (res, u, p) -> res .= w'u, - Optimization.AutoForwardDiff()) + ADTypes.AutoForwardDiff()) optprob = OptimizationProblem(optfun, u0; lb = zero.(u0), ub = one.(u0), int = ones(Bool, length(u0)), diff --git a/docs/src/optimization_packages/metaheuristics.md b/docs/src/optimization_packages/metaheuristics.md index ae1694bcc..2dc52353d 100644 --- a/docs/src/optimization_packages/metaheuristics.md +++ b/docs/src/optimization_packages/metaheuristics.md @@ -57,7 +57,7 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] f = OptimizationFunction(rosenbrock) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, ECA(), maxiters = 100000, maxtime = 1000.0) ``` diff --git a/docs/src/optimization_packages/multistartoptimization.md b/docs/src/optimization_packages/multistartoptimization.md index 15e14625b..8c575ef15 100644 --- a/docs/src/optimization_packages/multistartoptimization.md +++ b/docs/src/optimization_packages/multistartoptimization.md @@ -32,12 +32,12 @@ constraint equations. However, lower and upper constraints set by `lb` and `ub` The Rosenbrock function can be optimized using `MultistartOptimization.TikTak()` with 100 initial points and the local method `NLopt.LD_LBFGS()` as follows: ```julia -using Optimization, OptimizationMultistartOptimization, OptimizationNLopt +using Optimization, OptimizationMultistartOptimization, OptimizationNLopt, ADTypes, ForwardDiff rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] -f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +f = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, MultistartOptimization.TikTak(100), NLopt.LD_LBFGS()) ``` @@ -45,7 +45,7 @@ You can use any `Optimization` optimizers you like. 
The global method of the `Mu ```julia using OptimizationOptimJL -f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +f = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, MultistartOptimization.TikTak(100), LBFGS(), maxiters = 5) ``` diff --git a/docs/src/optimization_packages/nlopt.md b/docs/src/optimization_packages/nlopt.md index 06234b6df..cb112af7f 100644 --- a/docs/src/optimization_packages/nlopt.md +++ b/docs/src/optimization_packages/nlopt.md @@ -99,7 +99,7 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] f = OptimizationFunction(rosenbrock) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, NLopt.LN_NELDERMEAD()) ``` @@ -126,12 +126,12 @@ Gradient-based optimizers are optimizers which utilize the gradient information The Rosenbrock function can be optimized using `NLopt.LD_LBFGS()` as follows: ```@example NLopt2 -using Optimization, OptimizationNLopt +using Optimization, OptimizationNLopt, ADTypes, ForwardDiff rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] -f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +f = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, NLopt.LD_LBFGS()) ``` @@ -169,7 +169,7 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] f = OptimizationFunction(rosenbrock) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, NLopt.GN_DIRECT(), maxtime = 10.0) ``` @@ -180,12 +180,12 @@ The Rosenbrock function can be optimized using `NLopt.G_MLSL_LDS()` with `NLopt. 
The local optimizer maximum iterations are set via `local_maxiters`: ```@example NLopt4 -using Optimization, OptimizationNLopt +using Optimization, OptimizationNLopt, ADTypes, ForwardDiff rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] -f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +f = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, NLopt.G_MLSL_LDS(), local_method = NLopt.LD_LBFGS(), maxtime = 10.0, local_maxiters = 10) ``` diff --git a/docs/src/optimization_packages/optim.md b/docs/src/optimization_packages/optim.md index 72ac17bd3..84b21a623 100644 --- a/docs/src/optimization_packages/optim.md +++ b/docs/src/optimization_packages/optim.md @@ -69,13 +69,13 @@ For a more extensive documentation of all the algorithms and options, please con The Rosenbrock function with constraints can be optimized using the `Optim.IPNewton()` as follows: ```@example Optim1 -using Optimization, OptimizationOptimJL +using Optimization, OptimizationOptimJL, ADTypes, ForwardDiff rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 cons = (res, x, p) -> res .= [x[1]^2 + x[2]^2] x0 = zeros(2) p = [1.0, 100.0] -prob = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons) -prob = Optimization.OptimizationProblem(prob, x0, p, lcons = [-5.0], ucons = [10.0]) +prob = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff(); cons = cons) +prob = SciMLBase.OptimizationProblem(prob, x0, p, lcons = [-5.0], ucons = [10.0]) sol = solve(prob, IPNewton()) ``` @@ -83,7 +83,7 @@ See also in the `Optim.jl` documentation the [Nonlinear constrained optimization ### Derivative-Free -Derivative-free optimizers are optimizers that can be used even in cases where no derivatives or automatic differentiation is specified. While they tend to be less efficient than derivative-based optimizers, they can be easily applied to cases where defining derivatives is difficult. Note that while these methods do not support general constraints, all support bounds constraints via `lb` and `ub` in the `Optimization.OptimizationProblem`. +Derivative-free optimizers are optimizers that can be used even in cases where no derivatives or automatic differentiation is specified. While they tend to be less efficient than derivative-based optimizers, they can be easily applied to cases where defining derivatives is difficult. Note that while these methods do not support general constraints, all support bounds constraints via `lb` and `ub` in the `SciMLBase.OptimizationProblem`. 
`Optim.jl` implements the following derivative-free algorithms: @@ -119,7 +119,7 @@ using Optimization, OptimizationOptimJL rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] -prob = Optimization.OptimizationProblem(rosenbrock, x0, p) +prob = SciMLBase.OptimizationProblem(rosenbrock, x0, p) sol = solve(prob, Optim.NelderMead()) ``` @@ -275,12 +275,12 @@ Gradient-based optimizers are optimizers which utilize the gradient information The Rosenbrock function can be optimized using the `Optim.LBFGS()` as follows: ```@example Optim3 -using Optimization, OptimizationOptimJL +using Optimization, OptimizationOptimJL, ADTypes, ForwardDiff rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] -optprob = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) -prob = Optimization.OptimizationProblem(optprob, x0, p, lb = [-1.0, -1.0], ub = [0.8, 0.8]) +optprob = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) +prob = SciMLBase.OptimizationProblem(optprob, x0, p, lb = [-1.0, -1.0], ub = [0.8, 0.8]) sol = solve(prob, Optim.LBFGS()) ``` @@ -336,12 +336,12 @@ the Hessian in order to be appropriate. The Rosenbrock function can be optimized using the `Optim.Newton()` as follows: ```@example Optim4 -using Optimization, OptimizationOptimJL, ModelingToolkit +using Optimization, OptimizationOptimJL, ADTypes, ModelingToolkit, Symbolics rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] -f = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics()) -prob = Optimization.OptimizationProblem(f, x0, p) +f = OptimizationFunction(rosenbrock, ADTypes.AutoSymbolics()) +prob = SciMLBase.OptimizationProblem(f, x0, p) sol = solve(prob, Optim.Newton()) ``` @@ -374,12 +374,12 @@ special case when considering conditioning of the Hessian. The Rosenbrock function can be optimized using the `Optim.KrylovTrustRegion()` as follows: ```@example Optim5 -using Optimization, OptimizationOptimJL +using Optimization, OptimizationOptimJL, ADTypes, ForwardDiff rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] -optprob = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) -prob = Optimization.OptimizationProblem(optprob, x0, p) +optprob = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) +prob = SciMLBase.OptimizationProblem(optprob, x0, p) sol = solve(prob, Optim.KrylovTrustRegion()) ``` @@ -388,7 +388,7 @@ sol = solve(prob, Optim.KrylovTrustRegion()) ### Without Constraint Equations The following method in [`Optim`](https://github.com/JuliaNLSolvers/Optim.jl) performs global optimization on problems with or without -box constraints. It works both with and without lower and upper bounds set by `lb` and `ub` in the `Optimization.OptimizationProblem`. +box constraints. It works both with and without lower and upper bounds set by `lb` and `ub` in the `SciMLBase.OptimizationProblem`. 
- [`Optim.ParticleSwarm()`](https://julianlsolvers.github.io/Optim.jl/stable/algo/particle_swarm/): **Particle Swarm Optimization** @@ -405,7 +405,7 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] f = OptimizationFunction(rosenbrock) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, Optim.ParticleSwarm(lower = prob.lb, upper = prob.ub, n_particles = 100)) ``` @@ -432,11 +432,11 @@ box constraints. The Rosenbrock function can be optimized using the `Optim.SAMIN()` as follows: ```@example Optim7 -using Optimization, OptimizationOptimJL +using Optimization, OptimizationOptimJL, ADTypes, ForwardDiff rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] -f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +f = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) sol = solve(prob, Optim.SAMIN()) ``` diff --git a/docs/src/optimization_packages/polyopt.md b/docs/src/optimization_packages/polyopt.md index ad089b0f0..1003ea415 100644 --- a/docs/src/optimization_packages/polyopt.md +++ b/docs/src/optimization_packages/polyopt.md @@ -18,12 +18,12 @@ Right now we support the following polyalgorithms. `PolyOpt`: Runs Adam followed by BFGS for an equal number of iterations. This is useful in scientific machine learning use cases, by exploring the loss surface with the stochastic optimizer and converging to the minima faster with BFGS. ```@example polyopt -using Optimization, OptimizationPolyalgorithms +using Optimization, OptimizationPolyalgorithms, ADTypes, ForwardDiff rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) _p = [1.0, 100.0] -optprob = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) +optprob = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) prob = OptimizationProblem(optprob, x0, _p) sol = Optimization.solve(prob, PolyOpt(), maxiters = 1000) ``` diff --git a/docs/src/optimization_packages/prima.md b/docs/src/optimization_packages/prima.md index e225aafe8..f631fa71c 100644 --- a/docs/src/optimization_packages/prima.md +++ b/docs/src/optimization_packages/prima.md @@ -26,7 +26,7 @@ The five Powell's algorithms of the prima library are provided by the PRIMA.jl p `COBYLA`: (Constrained Optimization BY Linear Approximations) is for general constrained problems with bound constraints, non-linear constraints, linear equality constraints, and linear inequality constraints. 
```@example PRIMA -using Optimization, OptimizationPRIMA +using OptimizationBase, OptimizationPRIMA rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) @@ -34,18 +34,18 @@ _p = [1.0, 100.0] prob = OptimizationProblem(rosenbrock, x0, _p) -sol = Optimization.solve(prob, UOBYQA(), maxiters = 1000) +sol = solve(prob, UOBYQA(), maxiters = 1000) -sol = Optimization.solve(prob, NEWUOA(), maxiters = 1000) +sol = solve(prob, NEWUOA(), maxiters = 1000) -sol = Optimization.solve(prob, BOBYQA(), maxiters = 1000) +sol = solve(prob, BOBYQA(), maxiters = 1000) -sol = Optimization.solve(prob, LINCOA(), maxiters = 1000) +sol = solve(prob, LINCOA(), maxiters = 1000) function con2_c(res, x, p) res .= [x[1] + x[2], x[2] * sin(x[1]) - x[1]] end optprob = OptimizationFunction(rosenbrock, AutoForwardDiff(), cons = con2_c) prob = OptimizationProblem(optprob, x0, _p, lcons = [1, -100], ucons = [1, 100]) -sol = Optimization.solve(prob, COBYLA(), maxiters = 1000) +sol = solve(prob, COBYLA(), maxiters = 1000) ``` diff --git a/docs/src/optimization_packages/quaddirect.md b/docs/src/optimization_packages/quaddirect.md index 66e5c972d..60892574a 100644 --- a/docs/src/optimization_packages/quaddirect.md +++ b/docs/src/optimization_packages/quaddirect.md @@ -40,6 +40,6 @@ rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] f = OptimizationFunction(rosenbrock) -prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) +prob = SciMLBase.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0]) solve(prob, QuadDirect(), splits = ([-0.9, 0, 0.9], [-0.8, 0, 0.8])) ``` diff --git a/docs/src/optimization_packages/scipy.md b/docs/src/optimization_packages/scipy.md index 52872926b..896bc418a 100644 --- a/docs/src/optimization_packages/scipy.md +++ b/docs/src/optimization_packages/scipy.md @@ -62,13 +62,13 @@ Support for `ScipyRoot`, `ScipyRootScalar` and `ScipyLeastSquares` is available ### Unconstrained minimisation ```@example SciPy1 -using Optimization, OptimizationSciPy +using Optimization, OptimizationSciPy, ADTypes, Zygote rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] -f = OptimizationFunction(rosenbrock, Optimization.AutoZygote()) +f = OptimizationFunction(rosenbrock, ADTypes.AutoZygote()) prob = OptimizationProblem(f, x0, p) sol = solve(prob, ScipyBFGS()) diff --git a/docs/src/optimization_packages/speedmapping.md b/docs/src/optimization_packages/speedmapping.md index 75e1e81bb..ba4138313 100644 --- a/docs/src/optimization_packages/speedmapping.md +++ b/docs/src/optimization_packages/speedmapping.md @@ -25,11 +25,11 @@ If no AD backend is defined via `OptimizationFunction` the gradient is calculate The Rosenbrock function can be optimized using the `SpeedMappingOpt()` with and without bound as follows: ```@example SpeedMapping -using Optimization, OptimizationSpeedMapping +using Optimization, OptimizationSpeedMapping, ADTypes, ForwardDiff rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) p = [1.0, 100.0] -f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) +f = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) prob = OptimizationProblem(f, x0, p) sol = solve(prob, SpeedMappingOpt()) diff --git a/docs/src/tutorials/certification.md b/docs/src/tutorials/certification.md index 356e10c02..81f635b8a 100644 --- a/docs/src/tutorials/certification.md +++ b/docs/src/tutorials/certification.md @@ -7,13 +7,13 @@ This works with the 
`structural_analysis` keyword argument to `OptimizationProbl We'll use a simple example to illustrate the convexity structure certification process. ```@example symanalysis -using SymbolicAnalysis, Zygote, LinearAlgebra, Optimization, OptimizationLBFGSB +using SymbolicAnalysis, LinearAlgebra, OptimizationBase, OptimizationLBFGSB, ADTypes function f(x, p = nothing) return exp(x[1]) + x[1]^2 end -optf = OptimizationFunction(f, Optimization.AutoForwardDiff()) +optf = OptimizationFunction(f, ADTypes.AutoForwardDiff()) prob = OptimizationProblem(optf, [0.4], structural_analysis = true) sol = solve(prob, OptimizationLBFGSB.LBFGSB(), maxiters = 1000) @@ -30,8 +30,8 @@ Relatedly you can enable structural analysis in Riemannian optimization problems We'll look at the Riemannian center of mass of SPD matrices which is known to be a Geodesically Convex problem on the SPD manifold. ```@example symanalysis -using Optimization, OptimizationManopt, Symbolics, Manifolds, Random, LinearAlgebra, - SymbolicAnalysis +using OptimizationBase, OptimizationManopt, Symbolics, Manifolds, Random, LinearAlgebra, + SymbolicAnalysis, ADTypes M = SymmetricPositiveDefinite(5) m = 100 @@ -41,7 +41,7 @@ q = Matrix{Float64}(LinearAlgebra.I(5)) .+ 2.0 data2 = [exp(M, q, σ * rand(M; vector_at = q)) for i in 1:m]; f(x, p = nothing) = sum(SymbolicAnalysis.distance(M, data2[i], x)^2 for i in 1:5) -optf = OptimizationFunction(f, Optimization.AutoZygote()) +optf = OptimizationFunction(f, ADTypes.AutoZygote()) prob = OptimizationProblem(optf, data2[1]; manifold = M, structural_analysis = true) opt = OptimizationManopt.GradientDescentOptimizer() diff --git a/docs/src/tutorials/constraints.md b/docs/src/tutorials/constraints.md index 5510954db..c6ef4816f 100644 --- a/docs/src/tutorials/constraints.md +++ b/docs/src/tutorials/constraints.md @@ -16,8 +16,9 @@ x_1^2 + x_2^2 \leq 0.8 \\ ``` ```@example constraints -using Optimization, OptimizationMOI, OptimizationOptimJL, Ipopt +using OptimizationBase, OptimizationMOI, OptimizationOptimJL, Ipopt using ForwardDiff, ModelingToolkit +using DifferentiationInterface, ADTypes rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) @@ -33,7 +34,7 @@ cons(res, x, p) = (res .= [x[1]^2 + x[2]^2, x[1] * x[2]]) We'll use the `IPNewton` solver from Optim to solve the problem. ```@example constraints -optprob = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(), cons = cons) +optprob = OptimizationFunction(rosenbrock, DifferentiationInterface.SecondOrder(ADTypes.AutoForwardDiff(), ADTypes.AutoForwardDiff()), cons = cons) prob = OptimizationProblem(optprob, x0, _p, lcons = [-Inf, -1.0], ucons = [0.8, 2.0]) sol = solve(prob, IPNewton()) ``` @@ -81,7 +82,7 @@ x_1 * x_2 = 0.5 ``` ```@example constraints -optprob = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics(), cons = cons) +optprob = OptimizationFunction(rosenbrock, ADTypes.AutoSymbolics(), cons = cons) prob = OptimizationProblem(optprob, x0, _p, lcons = [1.0, 0.5], ucons = [1.0, 0.5]) ``` diff --git a/docs/src/tutorials/ensemble.md b/docs/src/tutorials/ensemble.md index 42b4215c9..0b7459bf1 100644 --- a/docs/src/tutorials/ensemble.md +++ b/docs/src/tutorials/ensemble.md @@ -8,16 +8,17 @@ This can be useful for complex, low dimensional problems. 
We demonstrate this, a We first execute a single local optimization with `OptimizationOptimJL.BFGS` and `maxiters=5`: ```@example ensemble -using Optimization, OptimizationOptimJL, Random +using OptimizationBase, OptimizationOptimJL, Random +using SciMLBase, ADTypes, ForwardDiff Random.seed!(100) rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2 x0 = zeros(2) -optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) +optf = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) prob = OptimizationProblem(optf, x0, [1.0, 100.0]) -@time sol1 = Optimization.solve(prob, OptimizationOptimJL.BFGS(), maxiters = 5) +@time sol1 = solve(prob, OptimizationOptimJL.BFGS(), maxiters = 5) @show sol1.objective ``` @@ -30,8 +31,8 @@ function prob_func(prob, i, repeat) remake(prob, u0 = x0s[i]) end -ensembleprob = Optimization.EnsembleProblem(prob; prob_func) -@time sol = Optimization.solve(ensembleprob, OptimizationOptimJL.BFGS(), +ensembleprob = EnsembleProblem(prob; prob_func) +@time sol = solve(ensembleprob, OptimizationOptimJL.BFGS(), EnsembleThreads(), trajectories = 4, maxiters = 5) @show findmin(i -> sol[i].objective, 1:4)[1] ``` diff --git a/docs/src/tutorials/linearandinteger.md b/docs/src/tutorials/linearandinteger.md index f25dcb756..9faccf118 100644 --- a/docs/src/tutorials/linearandinteger.md +++ b/docs/src/tutorials/linearandinteger.md @@ -36,7 +36,7 @@ We need to consider the following constraints: The ultimate objective is to maximize the company's wealth in June, denoted by the variable `m`. ```@example linear -using Optimization, OptimizationMOI, ModelingToolkit, HiGHS, LinearAlgebra +using OptimizationBase, OptimizationMOI, ModelingToolkit, HiGHS, LinearAlgebra, SciMLBase @variables u[1:5] [bounds = (0.0, 100.0)] @variables v[1:3] [bounds = (0.0, Inf)] @@ -56,7 +56,7 @@ optprob = OptimizationProblem(optsys, vcat(fill(0.0, 13), 300.0); grad = true, hess = true, - sense = Optimization.MaxSense) + sense = SciMLBase.MaxSense) sol = solve(optprob, HiGHS.Optimizer()) ``` @@ -82,7 +82,7 @@ w &= [12,45,12,22,21] \\ which implies a maximization problem of binary variables $u_i$ with the objective as the dot product of `v` and `u` subject to a quadratic constraint on `u`. 
```@example linear -using Juniper, Ipopt +using Juniper, Ipopt, ADTypes, Symbolics v = [10, 20, 12, 23, 42] w = [12, 45, 12, 22, 21] @@ -91,11 +91,11 @@ objective = (u, p) -> (v = p[1:5]; dot(v, u)) cons = (res, u, p) -> (w = p[6:10]; res .= [sum(w[i] * u[i]^2 for i in 1:5)]) -optf = OptimizationFunction(objective, Optimization.AutoSymbolics(), cons = cons) +optf = OptimizationFunction(objective, ADTypes.AutoSymbolics(), cons = cons) optprob = OptimizationProblem(optf, zeros(5), vcat(v, w); - sense = Optimization.MaxSense, + sense = SciMLBase.MaxSense, lb = zeros(5), ub = ones(5), lcons = [-Inf], diff --git a/docs/src/tutorials/minibatch.md b/docs/src/tutorials/minibatch.md index 8748bd066..6026c7c7a 100644 --- a/docs/src/tutorials/minibatch.md +++ b/docs/src/tutorials/minibatch.md @@ -9,8 +9,8 @@ It is possible to solve an optimization problem with batches using a `MLUtils.Da ```@example minibatch -using Lux, Optimization, OptimizationOptimisers, OrdinaryDiffEq, SciMLSensitivity, MLUtils, - Random, ComponentArrays +using Lux, OptimizationBase, OptimizationOptimisers, OrdinaryDiffEq, SciMLSensitivity, MLUtils, + Random, ComponentArrays, ADTypes, Zygote function newtons_cooling(du, u, p, t) temp = u[1] @@ -66,9 +66,9 @@ l1 = loss_adjoint(ps_ca, train_loader.data)[1] optfun = OptimizationFunction( loss_adjoint, - Optimization.AutoZygote()) + ADTypes.AutoZygote()) optprob = OptimizationProblem(optfun, ps_ca, train_loader) using IterTools: ncycle -res1 = Optimization.solve( +res1 = solve( optprob, Optimisers.ADAM(0.05); callback = callback, epochs = 1000) ``` diff --git a/docs/src/tutorials/remakecomposition.md b/docs/src/tutorials/remakecomposition.md index b46743d4c..bc41b7321 100644 --- a/docs/src/tutorials/remakecomposition.md +++ b/docs/src/tutorials/remakecomposition.md @@ -11,8 +11,8 @@ The SciML interface provides a `remake` function which allows you to recreate th Let's look at a 10 dimensional schwefel function in the hypercube $x_i \in [-500, 500]$. 
```@example polyalg -using Optimization, OptimizationLBFGSB, Random -using OptimizationBBO, ReverseDiff +using OptimizationBase, OptimizationLBFGSB, Random +using OptimizationBBO, ADTypes, ReverseDiff Random.seed!(122333) @@ -24,7 +24,7 @@ function f_schwefel(x, p = [418.9829]) return result end -optf = OptimizationFunction(f_schwefel, AutoReverseDiff(compile = true)) +optf = OptimizationFunction(f_schwefel, ADTypes.AutoReverseDiff(compile = true)) x0 = ones(10) .* 200.0 prob = OptimizationProblem( diff --git a/docs/src/tutorials/reusage_interface.md b/docs/src/tutorials/reusage_interface.md index 8a20e87ec..92641b17e 100644 --- a/docs/src/tutorials/reusage_interface.md +++ b/docs/src/tutorials/reusage_interface.md @@ -8,12 +8,12 @@ The `reinit!` function allows you to efficiently reuse an existing optimization ```@example reinit # Create initial problem and cache -using Optimization, OptimizationOptimJL +using Optimization, OptimizationOptimJL, ADTypes, ForwardDiff rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2 u0 = zeros(2) p = [1.0, 100.0] -optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff()) +optf = OptimizationFunction(rosenbrock, ADTypes.AutoForwardDiff()) prob = OptimizationProblem(optf, u0, p) # Initialize cache and solve diff --git a/docs/src/tutorials/symbolic.md b/docs/src/tutorials/symbolic.md index 4da34d259..cf5a393b9 100644 --- a/docs/src/tutorials/symbolic.md +++ b/docs/src/tutorials/symbolic.md @@ -15,7 +15,7 @@ how to use the `OptimizationSystem` to construct optimized `OptimizationProblem` First we need to start by defining our symbolic variables, this is done as follows: ```@example modelingtoolkit -using ModelingToolkit, Optimization, OptimizationOptimJL +using ModelingToolkit, OptimizationBase, OptimizationOptimJL @variables x y @parameters a b diff --git a/lib/OptimizationSophia/src/OptimizationSophia.jl b/lib/OptimizationSophia/src/OptimizationSophia.jl index 93a2e8831..099755075 100644 --- a/lib/OptimizationSophia/src/OptimizationSophia.jl +++ b/lib/OptimizationSophia/src/OptimizationSophia.jl @@ -28,7 +28,7 @@ first-order methods like Adam and SGD while avoiding the computational cost of f ## Example ```julia -using OptimizationBase, OptimizationOptimisers +using OptimizationBase, OptimizationSophia # Define optimization problem rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2 From cf96ce9f901082178e636a7f235c8cdec9b2c9d8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sebastian=20Miclu=C8=9Ba-C=C3=A2mpeanu?= Date: Tue, 25 Nov 2025 04:57:05 +0200 Subject: [PATCH 5/8] Add the HolyLabRegistry for QuadDIRECT --- .github/workflows/Documentation.yml | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/.github/workflows/Documentation.yml b/.github/workflows/Documentation.yml index e812da1fd..2d2341881 100644 --- a/.github/workflows/Documentation.yml +++ b/.github/workflows/Documentation.yml @@ -4,7 +4,7 @@ on: push: branches: - master - tags: '*' + tags: "*" pull_request: jobs: @@ -14,9 +14,11 @@ jobs: - uses: actions/checkout@v6 - uses: julia-actions/setup-julia@latest with: - version: '1' + version: "1" + - name: Add the HolyLabRegistry + run: julia --project -e 'using Pkg; Pkg.Registry.add(); Pkg.Registry.add(RegistrySpec(url = "https://github.com/HolyLab/HolyLabRegistry.git"))' - name: Install dependencies - run: julia --project=docs/ -e 'using Pkg; Pkg.develop(vcat(PackageSpec(path = pwd()), [PackageSpec(path = joinpath("lib", dir)) for dir in readdir("lib") if (dir !== "OptimizationQuadDIRECT" && dir !== 
"OptimizationMultistartOptimization")])); Pkg.instantiate()' + run: julia --project=docs/ -e 'using Pkg; Pkg.develop(vcat(PackageSpec(path = pwd()), [PackageSpec(path = joinpath("lib", dir)) for dir in readdir("lib") if (dir !== "OptimizationMultistartOptimization")])); Pkg.instantiate()' - name: Build and deploy env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token From 05acc84db6043874ea25940f67f6e40abb90ddf4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sebastian=20Miclu=C8=9Ba-C=C3=A2mpeanu?= Date: Tue, 25 Nov 2025 22:02:55 +0200 Subject: [PATCH 6/8] Make sure that all optimizers re-export OptimizationBase MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- lib/OptimizationAuglag/Project.toml | 10 ++++++---- lib/OptimizationAuglag/src/OptimizationAuglag.jl | 3 ++- lib/OptimizationBBO/src/OptimizationBBO.jl | 2 +- lib/OptimizationIpopt/Project.toml | 2 ++ lib/OptimizationIpopt/src/OptimizationIpopt.jl | 3 ++- lib/OptimizationLBFGSB/Project.toml | 2 ++ lib/OptimizationLBFGSB/src/OptimizationLBFGSB.jl | 3 ++- lib/OptimizationMadNLP/Project.toml | 2 ++ lib/OptimizationMadNLP/src/OptimizationMadNLP.jl | 3 ++- lib/OptimizationSophia/Project.toml | 2 ++ lib/OptimizationSophia/src/OptimizationSophia.jl | 3 ++- 11 files changed, 25 insertions(+), 10 deletions(-) diff --git a/lib/OptimizationAuglag/Project.toml b/lib/OptimizationAuglag/Project.toml index f6bd1b369..2b6fb8b4d 100644 --- a/lib/OptimizationAuglag/Project.toml +++ b/lib/OptimizationAuglag/Project.toml @@ -4,8 +4,9 @@ authors = ["paramthakkar123 "] version = "1.2.1" [deps] -OptimizationBase = "bca83a33-5cc9-4baa-983d-23429ab6bcbb" LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e" +OptimizationBase = "bca83a33-5cc9-4baa-983d-23429ab6bcbb" +Reexport = "189a3867-3050-52da-a836-e630ba90ab69" SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462" [extras] @@ -20,12 +21,13 @@ OptimizationOptimisers = {path = "../OptimizationOptimisers"} [compat] ForwardDiff = "1.0.1" -OptimizationBase = "4.0.2" -MLUtils = "0.4.8" LinearAlgebra = "1.10" +MLUtils = "0.4.8" +OptimizationBase = "4.0.2" OptimizationOptimisers = "0.3.8" -Test = "1.10.0" +Reexport = "1.2" SciMLBase = "2.122.1" +Test = "1.10.0" julia = "1.10" [targets] diff --git a/lib/OptimizationAuglag/src/OptimizationAuglag.jl b/lib/OptimizationAuglag/src/OptimizationAuglag.jl index 37c6a1a80..b2bb88ebe 100644 --- a/lib/OptimizationAuglag/src/OptimizationAuglag.jl +++ b/lib/OptimizationAuglag/src/OptimizationAuglag.jl @@ -1,7 +1,8 @@ module OptimizationAuglag +using Reexport using SciMLBase -using OptimizationBase +@reexport using OptimizationBase using SciMLBase: OptimizationProblem, OptimizationFunction, OptimizationStats using LinearAlgebra: norm diff --git a/lib/OptimizationBBO/src/OptimizationBBO.jl b/lib/OptimizationBBO/src/OptimizationBBO.jl index ed244ab9d..ddec14b53 100644 --- a/lib/OptimizationBBO/src/OptimizationBBO.jl +++ b/lib/OptimizationBBO/src/OptimizationBBO.jl @@ -1,7 +1,7 @@ module OptimizationBBO using Reexport -using OptimizationBase +@reexport using OptimizationBase using SciMLBase using BlackBoxOptim: BlackBoxOptim diff --git a/lib/OptimizationIpopt/Project.toml b/lib/OptimizationIpopt/Project.toml index d5794a33e..1257890dd 100644 --- a/lib/OptimizationIpopt/Project.toml +++ b/lib/OptimizationIpopt/Project.toml @@ -6,6 +6,7 @@ version = "0.2.6" Ipopt = "b6b21f68-93f8-5de0-b562-5493be1d77c9" LinearAlgebra = 
"37e2e46d-f89d-539d-b4ee-838fcccc9c8e" OptimizationBase = "bca83a33-5cc9-4baa-983d-23429ab6bcbb" +Reexport = "189a3867-3050-52da-a836-e630ba90ab69" SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462" SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf" SymbolicIndexingInterface = "2efcf032-c050-4f8e-a9bb-153293bab1f5" @@ -15,6 +16,7 @@ Ipopt = "1.10.3" LinearAlgebra = "1.10.0" ModelingToolkit = "10.23" OptimizationBase = "3, 4" +Reexport = "1.2" SciMLBase = "2.122.1" SparseArrays = "1.10.0" SymbolicIndexingInterface = "0.3.40" diff --git a/lib/OptimizationIpopt/src/OptimizationIpopt.jl b/lib/OptimizationIpopt/src/OptimizationIpopt.jl index 6d4dbb648..f14feec38 100644 --- a/lib/OptimizationIpopt/src/OptimizationIpopt.jl +++ b/lib/OptimizationIpopt/src/OptimizationIpopt.jl @@ -1,6 +1,7 @@ module OptimizationIpopt -using OptimizationBase +using Reexport +@reexport using OptimizationBase using Ipopt using LinearAlgebra using SparseArrays diff --git a/lib/OptimizationLBFGSB/Project.toml b/lib/OptimizationLBFGSB/Project.toml index c2db76edb..0f4d66673 100644 --- a/lib/OptimizationLBFGSB/Project.toml +++ b/lib/OptimizationLBFGSB/Project.toml @@ -6,6 +6,7 @@ version = "1.2.1" DocStringExtensions = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae" LBFGSB = "5be7bae1-8223-5378-bac3-9e7378a2f6e6" OptimizationBase = "bca83a33-5cc9-4baa-983d-23429ab6bcbb" +Reexport = "189a3867-3050-52da-a836-e630ba90ab69" SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462" [extras] @@ -23,6 +24,7 @@ ForwardDiff = "1.0.1" LBFGSB = "0.4.1" MLUtils = "0.4.8" OptimizationBase = "4.0.2" +Reexport = "1.2" SciMLBase = "2.122.1" Zygote = "0.7.10" julia = "1.10" diff --git a/lib/OptimizationLBFGSB/src/OptimizationLBFGSB.jl b/lib/OptimizationLBFGSB/src/OptimizationLBFGSB.jl index f342c0345..8ea012a3f 100644 --- a/lib/OptimizationLBFGSB/src/OptimizationLBFGSB.jl +++ b/lib/OptimizationLBFGSB/src/OptimizationLBFGSB.jl @@ -1,6 +1,7 @@ module OptimizationLBFGSB -using OptimizationBase +using Reexport +@reexport using OptimizationBase using DocStringExtensions import LBFGSB as LBFGSBJL using SciMLBase: OptimizationStats, OptimizationFunction diff --git a/lib/OptimizationMadNLP/Project.toml b/lib/OptimizationMadNLP/Project.toml index f657a2a44..9f5db8288 100644 --- a/lib/OptimizationMadNLP/Project.toml +++ b/lib/OptimizationMadNLP/Project.toml @@ -8,6 +8,7 @@ LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e" MadNLP = "2621e9c9-9eb4-46b1-8089-e8c72242dfb6" NLPModels = "a4795742-8479-5a88-8948-cc11e1c8c1a6" OptimizationBase = "bca83a33-5cc9-4baa-983d-23429ab6bcbb" +Reexport = "189a3867-3050-52da-a836-e630ba90ab69" SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462" SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf" SymbolicIndexingInterface = "2efcf032-c050-4f8e-a9bb-153293bab1f5" @@ -20,6 +21,7 @@ MadNLP = "0.8.12" ModelingToolkit = "10.23" NLPModels = "0.21.5" OptimizationBase = "4.0.2" +Reexport = "1.2" SciMLBase = "2.122.1" SparseArrays = "1.10.0" SymbolicIndexingInterface = "0.3.40" diff --git a/lib/OptimizationMadNLP/src/OptimizationMadNLP.jl b/lib/OptimizationMadNLP/src/OptimizationMadNLP.jl index 70a53a94e..c80ef1c0b 100644 --- a/lib/OptimizationMadNLP/src/OptimizationMadNLP.jl +++ b/lib/OptimizationMadNLP/src/OptimizationMadNLP.jl @@ -1,6 +1,7 @@ module OptimizationMadNLP -using OptimizationBase +using Reexport +@reexport using OptimizationBase using OptimizationBase: MinSense, MaxSense, DEFAULT_CALLBACK using MadNLP using NLPModels diff --git a/lib/OptimizationSophia/Project.toml b/lib/OptimizationSophia/Project.toml index 
f0ed08a70..dc2cbd3c8 100644 --- a/lib/OptimizationSophia/Project.toml +++ b/lib/OptimizationSophia/Project.toml @@ -5,6 +5,7 @@ version = "1.2.1" [deps] OptimizationBase = "bca83a33-5cc9-4baa-983d-23429ab6bcbb" Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c" +Reexport = "189a3867-3050-52da-a836-e630ba90ab69" SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462" [extras] @@ -23,6 +24,7 @@ MLUtils = "0.4.8" OptimizationBase = "4.0.2" OrdinaryDiffEqTsit5 = "1.2.0" Random = "1.10.0" +Reexport = "1.2" SciMLBase = "2.122.1" SciMLSensitivity = "7.88.0" Test = "1.10.0" diff --git a/lib/OptimizationSophia/src/OptimizationSophia.jl b/lib/OptimizationSophia/src/OptimizationSophia.jl index 099755075..34f2e8d4f 100644 --- a/lib/OptimizationSophia/src/OptimizationSophia.jl +++ b/lib/OptimizationSophia/src/OptimizationSophia.jl @@ -1,8 +1,9 @@ module OptimizationSophia +using Reexport using SciMLBase using OptimizationBase: OptimizationCache -using OptimizationBase +@reexport using OptimizationBase using Random """ From 43c9cf228be45f5e15f61ad44a627896b446ca82 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sebastian=20Miclu=C8=9Ba-C=C3=A2mpeanu?= Date: Wed, 26 Nov 2025 19:32:36 +0200 Subject: [PATCH 7/8] fix redirects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- docs/src/optimization_packages/mathoptinterface.md | 2 +- docs/src/optimization_packages/nlopt.md | 6 +++--- docs/src/optimization_packages/pycma.md | 2 +- docs/src/optimization_packages/scipy.md | 2 +- docs/src/tutorials/certification.md | 2 +- 5 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/src/optimization_packages/mathoptinterface.md b/docs/src/optimization_packages/mathoptinterface.md index d5803a262..b633038dd 100644 --- a/docs/src/optimization_packages/mathoptinterface.md +++ b/docs/src/optimization_packages/mathoptinterface.md @@ -58,7 +58,7 @@ sol = solve(prob, Ipopt.Optimizer(); option_name = option_value, ...) #### KNITRO.jl (MathOptInterface) - [`KNITRO.Optimizer`](https://github.com/jump-dev/KNITRO.jl) - - The full list of optimizer options can be found in the [KNITRO Documentation](https://www.artelys.com/docs/knitro//3_referenceManual/callableLibraryAPI.html) + - The full list of optimizer options can be found in the [KNITRO Documentation](https://www.artelys.com/app/docs/knitro/3_referenceManual/callableLibraryAPI.html) #### Juniper.jl (MathOptInterface) diff --git a/docs/src/optimization_packages/nlopt.md b/docs/src/optimization_packages/nlopt.md index cb112af7f..b2d22886c 100644 --- a/docs/src/optimization_packages/nlopt.md +++ b/docs/src/optimization_packages/nlopt.md @@ -1,6 +1,6 @@ # NLopt.jl -[`NLopt`](https://github.com/JuliaOpt/NLopt.jl) is Julia package interfacing to the free/open-source [`NLopt library`](http://ab-initio.mit.edu/nlopt) which implements many optimization methods both global and local [`NLopt Documentation`](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/). +[`NLopt`](https://github.com/jump-dev/NLopt.jl) is Julia package interfacing to the free/open-source [`NLopt library`](http://ab-initio.mit.edu/nlopt/) which implements many optimization methods both global and local [`NLopt Documentation`](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/). 
## Installation: OptimizationNLopt.jl @@ -139,7 +139,7 @@ sol = solve(prob, NLopt.LD_LBFGS()) ### Without Constraint Equations -The following algorithms in [`NLopt`](https://github.com/JuliaOpt/NLopt.jl) are performing global optimization on problems without +The following algorithms in [`NLopt`](https://github.com/jump-dev/NLopt.jl) are performing global optimization on problems without constraint equations. However, lower and upper constraints set by `lb` and `ub` in the `OptimizationProblem` are required. `NLopt` global optimizers which fall into this category are: @@ -192,7 +192,7 @@ sol = solve(prob, NLopt.G_MLSL_LDS(), local_method = NLopt.LD_LBFGS(), maxtime = ### With Constraint Equations -The following algorithms in [`NLopt`](https://github.com/JuliaOpt/NLopt.jl) are performing global optimization on problems with +The following algorithms in [`NLopt`](https://github.com/jump-dev/NLopt.jl) are performing global optimization on problems with constraint equations. However, lower and upper constraints set by `lb` and `ub` in the `OptimizationProblem` are required. !!! note "Constraints with NLopt" diff --git a/docs/src/optimization_packages/pycma.md b/docs/src/optimization_packages/pycma.md index 5e5bc3cbf..9c5472bff 100644 --- a/docs/src/optimization_packages/pycma.md +++ b/docs/src/optimization_packages/pycma.md @@ -2,7 +2,7 @@ [`PyCMA`](https://github.com/CMA-ES/pycma) is a Python implementation of CMA-ES and a few related numerical optimization tools. `OptimizationPyCMA.jl` gives access to the CMA-ES optimizer through the unified `Optimization.jl` interface just like any native Julia optimizer. -`OptimizationPyCMA.jl` relies on [`PythonCall`](https://github.com/cjdoris/PythonCall.jl). A minimal Python distribution containing PyCMA will be installed automatically on first use, so no manual Python set-up is required. +`OptimizationPyCMA.jl` relies on [`PythonCall`](https://github.com/JuliaPy/PythonCall.jl). A minimal Python distribution containing PyCMA will be installed automatically on first use, so no manual Python set-up is required. ## Installation: OptimizationPyCMA.jl diff --git a/docs/src/optimization_packages/scipy.md b/docs/src/optimization_packages/scipy.md index 896bc418a..f5ff51c04 100644 --- a/docs/src/optimization_packages/scipy.md +++ b/docs/src/optimization_packages/scipy.md @@ -4,7 +4,7 @@ !!! note - `OptimizationSciPy.jl` relies on [`PythonCall`](https://github.com/cjdoris/PythonCall.jl). A minimal Python distribution containing SciPy will be installed automatically on first use, so no manual Python set-up is required. + `OptimizationSciPy.jl` relies on [`PythonCall`](https://github.com/JuliaPy/PythonCall.jl). A minimal Python distribution containing SciPy will be installed automatically on first use, so no manual Python set-up is required. ## Installation: OptimizationSciPy.jl diff --git a/docs/src/tutorials/certification.md b/docs/src/tutorials/certification.md index 81f635b8a..56a90ceee 100644 --- a/docs/src/tutorials/certification.md +++ b/docs/src/tutorials/certification.md @@ -1,6 +1,6 @@ # Using SymbolicAnalysis.jl for convexity certificates -In this tutorial, we will show how to use automatic convexity certification of the optimization problem using [SymbolicAnalysis.jl](https://github.com/Vaibhavdixit02/SymbolicAnalysis.jl). +In this tutorial, we will show how to use automatic convexity certification of the optimization problem using [SymbolicAnalysis.jl](https://github.com/SciML/SymbolicAnalysis.jl). 
This works with the `structural_analysis` keyword argument to `OptimizationProblem`. This tells the package to try to trace through the objective and constraints with symbolic variables (for more details on this look at the [Symbolics documentation](https://symbolics.juliasymbolics.org/stable/manual/functions/#function_registration)). This relies on the Disciplined Programming approach hence neccessitates the use of "atoms" from the SymbolicAnalysis.jl package. From a65fd828cf60c514f47cfdf342e2e45b32ffd7d2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sebastian=20Miclu=C8=9Ba-C=C3=A2mpeanu?= Date: Thu, 27 Nov 2025 02:06:41 +0200 Subject: [PATCH 8/8] bump Manopt for downgrade CI --- lib/OptimizationManopt/Project.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/OptimizationManopt/Project.toml b/lib/OptimizationManopt/Project.toml index d81490e72..a770093f9 100644 --- a/lib/OptimizationManopt/Project.toml +++ b/lib/OptimizationManopt/Project.toml @@ -30,7 +30,7 @@ OptimizationBase = {path = "../OptimizationBase"} [compat] julia = "1.10" DifferentiationInterface = "0.7" -Manopt = "0.5" +Manopt = "0.5.25" OptimizationBase = "4" LinearAlgebra = "1.10" ManifoldsBase = "1"
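
---

Editorial note on the combined effect of this series: once a solver subpackage `@reexport`s OptimizationBase (PATCH 6/8), the documentation examples no longer need a separate `using OptimizationBase` line. Below is a minimal sketch of the resulting user-facing pattern, not taken verbatim from the patches: it assumes OptimizationBase (re)exports `OptimizationFunction`, `OptimizationProblem`, and `solve` (as the doc examples in this series suggest) and that OptimizationLBFGSB exports its algorithm type as `LBFGSB`.

```julia
# Sketch only: assumes OptimizationLBFGSB now reexports OptimizationBase (PATCH 6/8),
# so OptimizationFunction, OptimizationProblem, and solve come along without an
# explicit `using OptimizationBase`.
using OptimizationLBFGSB, ADTypes, Zygote

# Rosenbrock objective as used elsewhere in these docs patches.
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

optf = OptimizationFunction(rosenbrock, ADTypes.AutoZygote())  # AD backend via ADTypes
prob = OptimizationProblem(optf, x0, p)
sol = solve(prob, LBFGSB())  # LBFGSB assumed exported by the subpackage
```

If this holds, the same shortened import pattern should apply to the other subpackages touched by PATCH 6/8 (OptimizationAuglag, OptimizationBBO, OptimizationIpopt, OptimizationMadNLP, OptimizationSophia).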