Commit

Merge pull request #612 from SciML/prima

[WIP] Add PRIMA wrapper

Vaibhavdixit02 committed Oct 21, 2023
2 parents 6d625af + c2c61af commit a4c049e
Showing 20 changed files with 621 additions and 163 deletions.
1 change: 1 addition & 0 deletions .github/workflows/CI.yml
@@ -32,6 +32,7 @@ jobs:
- OptimizationNOMAD
- OptimizationOptimJL
- OptimizationOptimisers
- OptimizationPRIMA
- OptimizationQuadDIRECT
- OptimizationSpeedMapping
- OptimizationPolyalgorithms
2 changes: 1 addition & 1 deletion .gitignore
@@ -1,5 +1,5 @@
.DS_Store
/Manifest.toml
Manifest.toml
/dev/
/docs/build/
.vscode
2 changes: 1 addition & 1 deletion docs/src/index.md
@@ -62,8 +62,8 @@ packages.
| Metaheuristics ||||||| 🟡 |
| NOMAD ||||||| 🟡 |
| NLopt ||||| 🟡 || 🟡 |
| Nonconvex ||||| 🟡 || 🟡 |
| Optim ||||||||
| PRIMA ||||||||
| QuadDIRECT ||||||||

✅ = supported
2 changes: 1 addition & 1 deletion docs/src/optimization_packages/blackboxoptim.md
@@ -1,6 +1,6 @@
# BlackBoxOptim.jl

[`BlackBoxOptim`](https://github.com/robertfeldt/BlackBoxOptim.jl) is a Julia package implementing **(Meta-)heuristic/stochastic algorithms** that do not require for the optimized function to be differentiable.
[`BlackBoxOptim`](https://github.com/robertfeldt/BlackBoxOptim.jl) is a Julia package implementing **(Meta-)heuristic/stochastic algorithms** that do not require differentiability.

## Installation: OptimizationBBO.jl

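For orientation, a gradient-free solve with OptimizationBBO typically looks like the sketch below; this is an illustration rather than part of the changeset, and it assumes the default `BBO_adaptive_de_rand_1_bin_radiuslimited()` solver and the usual `lb`/`ub` bound keywords of `OptimizationProblem`.

```julia
using Optimization, OptimizationBBO

# Rosenbrock objective; BlackBoxOptim needs box bounds but no derivatives
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
prob = OptimizationProblem(rosenbrock, zeros(2), [1.0, 100.0],
    lb = [-1.0, -1.0], ub = [1.5, 1.5])
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(), maxiters = 1000)
```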
85 changes: 0 additions & 85 deletions docs/src/optimization_packages/nonconvex.md

This file was deleted.

49 changes: 49 additions & 0 deletions docs/src/optimization_packages/prima.md
@@ -0,0 +1,49 @@
# PRIMA.jl

[PRIMA.jl](https://github.com/libprima/PRIMA.jl) is a Julia wrapper for the Fortran library [prima](https://github.com/libprima/prima), which implements Powell's derivative-free optimization methods.

## Installation: OptimizationPRIMA

To use this package, install the OptimizationPRIMA package:

```julia
import Pkg;
Pkg.add("OptimizationPRIMA");
```

## Local Optimizer

The PRIMA.jl package provides the five Powell algorithms of the prima library:

`UOBYQA`: (Unconstrained Optimization BY Quadratic Approximations) is for unconstrained optimization, that is, Ω = ℝⁿ.

`NEWUOA`: is also for unconstrained optimization. According to M. J. D. Powell, NEWUOA is superior to UOBYQA.

`BOBYQA`: (Bounded Optimization BY Quadratic Approximations) is for simple bound-constrained problems, that is, Ω = { x ∈ ℝⁿ | xl ≤ x ≤ xu }.

`LINCOA`: (LINearly Constrained Optimization) is for constrained optimization with bound constraints, linear equality constraints, and linear inequality constraints.

`COBYLA`: (Constrained Optimization BY Linear Approximations) is for general constrained problems with bound constraints, nonlinear constraints, linear equality constraints, and linear inequality constraints.

```@example PRIMA
using Optimization, OptimizationPRIMA

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
_p = [1.0, 100.0]
prob = OptimizationProblem(rosenbrock, x0, _p)
sol = Optimization.solve(prob, UOBYQA(), maxiters = 1000)
sol = Optimization.solve(prob, NEWUOA(), maxiters = 1000)
sol = Optimization.solve(prob, BOBYQA(), maxiters = 1000)
sol = Optimization.solve(prob, LINCOA(), maxiters = 1000)
# con2_c computes two constraint values: x[1] + x[2] and x[2] * sin(x[1]) - x[1]
function con2_c(res, x, p)
res .= [x[1] + x[2], x[2] * sin(x[1]) - x[1]]
end
optprob = OptimizationFunction(rosenbrock, AutoForwardDiff(), cons = con2_c)
prob = OptimizationProblem(optprob, x0, _p, lcons = [1, -100], ucons = [1, 100])
sol = Optimization.solve(prob, COBYLA(), maxiters = 1000)
```
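For a problem with only box bounds, BOBYQA is the natural choice among these. A minimal sketch, assuming the usual `lb`/`ub` keywords of `OptimizationProblem` are passed through to the solver:

```julia
using Optimization, OptimizationPRIMA

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
# box-constrained problem: Ω = { x ∈ ℝⁿ | lb ≤ x ≤ ub }
prob = OptimizationProblem(rosenbrock, zeros(2), [1.0, 100.0],
    lb = [-1.0, -1.0], ub = [1.5, 1.5])
sol = Optimization.solve(prob, BOBYQA(), maxiters = 1000)
```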
5 changes: 2 additions & 3 deletions ext/OptimizationEnzymeExt.jl
@@ -6,7 +6,7 @@ import Optimization.LinearAlgebra: I
import Optimization.ADTypes: AutoEnzyme
isdefined(Base, :get_extension) ? (using Enzyme) : (using ..Enzyme)

@inline function firstapply(f::F, θ, p, args...) where F
@inline function firstapply(f::F, θ, p, args...) where {F}
res = f(θ, p, args...)
if isa(res, AbstractFloat)
res
@@ -18,9 +18,8 @@ end
function Optimization.instantiate_function(f::OptimizationFunction{true}, x,
adtype::AutoEnzyme, p,
num_cons = 0)

if f.grad === nothing
grad = let
grad = let
function (res, θ, args...)
res .= zero(eltype(res))
Enzyme.autodiff(Enzyme.Reverse,
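The hunk above only touches formatting, but the call it wraps is Enzyme's reverse-mode `autodiff`. As a standalone illustration (the objective and buffers here are made up for the example, not taken from the extension):

```julia
using Enzyme

f(x) = sum(abs2, x)        # toy scalar objective
x = [1.0, 2.0, 3.0]
dx = zeros(length(x))      # shadow buffer that accumulates the gradient

# reverse-mode AD: afterwards dx ≈ ∇f(x) = 2x
Enzyme.autodiff(Enzyme.Reverse, f, Active, Duplicated(x, dx))
```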
43 changes: 33 additions & 10 deletions ext/OptimizationReverseDiffExt.jl
@@ -21,7 +21,7 @@ function Optimization.instantiate_function(f, x, adtype::AutoReverseDiff,
p = SciMLBase.NullParameters(),
num_cons = 0)
_f = (θ, args...) -> first(f.f(θ, p, args...))

chunksize = default_chunk_size(length(x))

if f.grad === nothing
@@ -33,16 +33,23 @@ function Optimization.instantiate_function(f, x, adtype::AutoReverseDiff,
end
else
cfg = ReverseDiff.GradientConfig(x)
grad = (res, θ, args...) -> ReverseDiff.gradient!(res, x -> _f(x, args...), θ, cfg)
grad = (res, θ, args...) -> ReverseDiff.gradient!(res,
x -> _f(x, args...),
θ,
cfg)
end
else
grad = (G, θ, args...) -> f.grad(G, θ, p, args...)
end

if f.hess === nothing
if adtype.compile
T = ForwardDiff.Tag(OptimizationReverseDiffTag(),eltype(x))
xdual = ForwardDiff.Dual{typeof(T),eltype(x),chunksize}.(x, Ref(ForwardDiff.Partials((ones(eltype(x), chunksize)...,))))
T = ForwardDiff.Tag(OptimizationReverseDiffTag(), eltype(x))
xdual = ForwardDiff.Dual{
typeof(T),
eltype(x),
chunksize,
}.(x, Ref(ForwardDiff.Partials((ones(eltype(x), chunksize)...,))))
h_tape = ReverseDiff.GradientTape(_f, xdual)
htape = ReverseDiff.compile(h_tape)
function g(θ)
@@ -110,7 +117,10 @@ function Optimization.instantiate_function(f, x, adtype::AutoReverseDiff,
ReverseDiff.gradient!(res1, htape, θ)
end
gs = [x -> grad_cons(x, conshtapes[i]) for i in 1:num_cons]
jaccfgs = [ForwardDiff.JacobianConfig(gs[i], x, ForwardDiff.Chunk{chunksize}(), T) for i in 1:num_cons]
jaccfgs = [ForwardDiff.JacobianConfig(gs[i],
x,
ForwardDiff.Chunk{chunksize}(),
T) for i in 1:num_cons]
cons_h = function (res, θ)
for i in 1:num_cons
ForwardDiff.jacobian!(res[i], gs[i], θ, jaccfgs[i], Val{false}())
@@ -155,23 +165,33 @@ function Optimization.instantiate_function(f, cache::Optimization.ReInitCache,
end
else
cfg = ReverseDiff.GradientConfig(cache.u0)
grad = (res, θ, args...) -> ReverseDiff.gradient!(res, x -> _f(x, args...), θ, cfg)
grad = (res, θ, args...) -> ReverseDiff.gradient!(res,
x -> _f(x, args...),
θ,
cfg)
end
else
grad = (G, θ, args...) -> f.grad(G, θ, cache.p, args...)
end

if f.hess === nothing
if adtype.compile
T = ForwardDiff.Tag(OptimizationReverseDiffTag(),eltype(cache.u0))
xdual = ForwardDiff.Dual{typeof(T),eltype(cache.u0),chunksize}.(cache.u0, Ref(ForwardDiff.Partials((ones(eltype(cache.u0), chunksize)...,))))
T = ForwardDiff.Tag(OptimizationReverseDiffTag(), eltype(cache.u0))
xdual = ForwardDiff.Dual{
typeof(T),
eltype(cache.u0),
chunksize,
}.(cache.u0, Ref(ForwardDiff.Partials((ones(eltype(cache.u0), chunksize)...,))))
h_tape = ReverseDiff.GradientTape(_f, xdual)
htape = ReverseDiff.compile(h_tape)
function g(θ)
res1 = zeros(eltype(θ), length(θ))
ReverseDiff.gradient!(res1, htape, θ)
end
jaccfg = ForwardDiff.JacobianConfig(g, cache.u0, ForwardDiff.Chunk{chunksize}(), T)
jaccfg = ForwardDiff.JacobianConfig(g,
cache.u0,
ForwardDiff.Chunk{chunksize}(),
T)
hess = function (res, θ, args...)
ForwardDiff.jacobian!(res, g, θ, jaccfg, Val{false}())
end
@@ -232,7 +252,10 @@ function Optimization.instantiate_function(f, cache::Optimization.ReInitCache,
ReverseDiff.gradient!(res1, htape, θ)
end
gs = [x -> grad_cons(x, conshtapes[i]) for i in 1:num_cons]
jaccfgs = [ForwardDiff.JacobianConfig(gs[i], cache.u0, ForwardDiff.Chunk{chunksize}(), T) for i in 1:num_cons]
jaccfgs = [ForwardDiff.JacobianConfig(gs[i],
cache.u0,
ForwardDiff.Chunk{chunksize}(),
T) for i in 1:num_cons]
cons_h = function (res, θ)
for i in 1:num_cons
ForwardDiff.jacobian!(res[i], gs[i], θ, jaccfgs[i], Val{false}())
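The ReverseDiff hunks above are mostly reformatting of the extension's forward-over-reverse Hessian code: an inner gradient recorded on a ReverseDiff tape is differentiated again with ForwardDiff. Stripped of the tape compilation and chunk handling, the underlying pattern is roughly this sketch (illustrative only, not the extension's API):

```julia
using ForwardDiff, ReverseDiff

f(x) = sum(abs2, x) + x[1] * x[2]   # toy objective

# inner reverse-mode gradient, outer forward-mode Jacobian of that gradient
g(θ) = ReverseDiff.gradient(f, θ)
H = ForwardDiff.jacobian(g, [1.0, 2.0])   # ≈ Hessian of f at [1.0, 2.0]
```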
