Add checked autodiff for user-defined functions #2917

Merged: 6 commits into master from od/checked-udf on Mar 14, 2022

Conversation

@odow (Member) commented on Mar 11, 2022

This implements @ccoffrin's other suggestion in #2910.

Closes #2910

TODOs

  • Benchmark
  • Tests

Assume we wrote a weird function that's only differentiable on part of its domain; a type-generic rewrite that avoids the resulting error is sketched after the output below:

julia> using JuMP

julia> import Ipopt

julia> function f(x)
           if x >= 1
               y = zeros(1)
               y[1] = (x - 1)^2
               return y[1]
           else
               return (x - 1)^2
           end
       end
f (generic function with 1 method)

julia> model = Model(Ipopt.Optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: Ipopt

julia> @variable(model, x >= 0)
x

julia> @NLobjective(model, Min, f(x))
┌ Warning: Function f automatically registered with 1 arguments.
│ 
│ Calling the function with a different number of arguments will result in an
│ error.
│ 
│ While you can safely ignore this warning, we recommend that you manually
│ register the function as follows:
│ ```Julia
│ model = Model()
│ register(model, :f, 1, f; autodiff = true)
│ ```
└ @ JuMP ~/.julia/dev/JuMP/src/parse_nlp.jl:21

julia> optimize!(model)

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.14.4, running with linear solver MUMPS 5.4.1.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        1

Total number of variables............................:        1
                     variables with only lower bounds:        1
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  9.8010002e-01 0.00e+00 2.98e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  7.6134183e-01 0.00e+00 1.76e+00  -1.0 1.17e-01    -  3.61e-01 1.00e+00f  1
ERROR: JuMP's autodiff of the user-defined function f failed with a MethodError.

Common reasons for this include:

 * the function assumes `Float64` will be passed as input, it must work for any
   generic `Real` type.
 * the function allocates temporary storage using `zeros(3)` or similar. This
   defaults to `Float64`, so use `zeros(T, 3)` instead.

As an example, instead of:
```julia
function my_function(x::Float64...)
    y = zeros(length(x))
    for i in 1:length(x)
        y[i] = x[i]^2
    end
    return sum(y)
end
```
use:
```julia
function my_function(x::T...) where {T<:Real}
    y = zeros(T, length(x))
    for i in 1:length(x)
        y[i] = x[i]^2
    end
    return sum(y)
end
```

Review the stacktrace below for more information, but it can often be hard to
understand why and where your function is failing. You can also debug this
outside of JuMP as follows:
```julia
import ForwardDiff

# If the input dimension is 1
x = 1.0
my_function(a) = a^2
ForwardDiff.derivative(my_function, x)

# If the input dimension is more than 1
x = [1.0, 2.0]
my_function(a, b) = a^2 + b^2
ForwardDiff.gradient(x -> my_function(x...), x)
```

Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:33
  [2] _intercept_ForwardDiff_MethodError(#unused#::MethodError, s::Symbol)
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:1914
  [3] (::JuMP.var"#139#140"{typeof(f), Symbol})(x::Float64)
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:2003
  [4] eval_and_check_return_type(func::Function, RetType::Type, args::Float64)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:5
  [5] forward_eval(storage::Vector{Float64}, partials_storage::Vector{Float64}, nd::Vector{JuMP._Derivatives.NodeData}, adj::SparseArrays.SparseMatrixCSC{Bool, Int64}, const_values::Vector{Float64}, parameter_values::Vector{Float64}, x_values::Vector{Float64}, subexpression_values::Vector{Float64}, user_input_buffer::Vector{Float64}, user_output_buffer::Vector{Float64}, user_operators::JuMP._Derivatives.UserOperatorRegistry)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:213
  [6] _forward_eval_all(d::NLPEvaluator, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:773
  [7] macro expansion
    @ ~/.julia/dev/JuMP/src/nlp.jl:826 [inlined]
  [8] macro expansion
    @ ./timing.jl:287 [inlined]
  [9] eval_objective(d::NLPEvaluator, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:824
 [10] _eval_objective(model::Ipopt.Optimizer, x::Vector{Float64})
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/MOI_wrapper.jl:821
 [11] (::Ipopt.var"#eval_f_cb#3"{Ipopt.Optimizer})(x::Vector{Float64})
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/MOI_wrapper.jl:1081
 [12] _Eval_F_CB(n::Int32, x_ptr::Ptr{Float64}, x_new::Int32, obj_value::Ptr{Float64}, user_data::Ptr{Nothing})
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/C_wrapper.jl:33
 [13] IpoptSolve(prob::Ipopt.IpoptProblem)
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/C_wrapper.jl:433
 [14] optimize!(model::Ipopt.Optimizer)
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/MOI_wrapper.jl:1225
 [15] optimize!
    @ ~/.julia/packages/MathOptInterface/FHFUH/src/Bridges/bridge_optimizer.jl:348 [inlined]
 [16] optimize!
    @ ~/.julia/packages/MathOptInterface/FHFUH/src/MathOptInterface.jl:81 [inlined]
 [17] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/FHFUH/src/Utilities/cachingoptimizer.jl:313
 [18] optimize!(model::Model; ignore_optimize_hook::Bool, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ JuMP ~/.julia/dev/JuMP/src/optimizer_interface.jl:161
 [19] optimize!(model::Model)
    @ JuMP ~/.julia/dev/JuMP/src/optimizer_interface.jl:143
 [20] top-level scope
    @ REPL[8]:1

caused by: MethodError: no method matching extract_derivative(::Type{ForwardDiff.Tag{typeof(f), Float64}}, ::String)
Closest candidates are:
  extract_derivative(::Type{T}, ::ForwardDiff.Dual) where T at /Users/oscar/.julia/packages/ForwardDiff/PBzup/src/derivative.jl:81
  extract_derivative(::Type{T}, ::Real) where T at /Users/oscar/.julia/packages/ForwardDiff/PBzup/src/derivative.jl:82
  extract_derivative(::Type{T}, ::AbstractArray) where T at /Users/oscar/.julia/packages/ForwardDiff/PBzup/src/derivative.jl:83
Stacktrace:
  [1] derivative
    @ ~/.julia/packages/ForwardDiff/PBzup/src/derivative.jl:14 [inlined]
  [2] (::JuMP.var"#139#140"{typeof(f), Symbol})(x::Float64)
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:2001
  [3] eval_and_check_return_type(func::Function, RetType::Type, args::Float64)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:5
  [4] forward_eval(storage::Vector{Float64}, partials_storage::Vector{Float64}, nd::Vector{JuMP._Derivatives.NodeData}, adj::SparseArrays.SparseMatrixCSC{Bool, Int64}, const_values::Vector{Float64}, parameter_values::Vector{Float64}, x_values::Vector{Float64}, subexpression_values::Vector{Float64}, user_input_buffer::Vector{Float64}, user_output_buffer::Vector{Float64}, user_operators::JuMP._Derivatives.UserOperatorRegistry)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:213
  [5] _forward_eval_all(d::NLPEvaluator, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:773
  [6] macro expansion
    @ ~/.julia/dev/JuMP/src/nlp.jl:826 [inlined]
  [7] macro expansion
    @ ./timing.jl:287 [inlined]
  [8] eval_objective(d::NLPEvaluator, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:824
  [9] _eval_objective(model::Ipopt.Optimizer, x::Vector{Float64})
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/MOI_wrapper.jl:821
 [10] (::Ipopt.var"#eval_f_cb#3"{Ipopt.Optimizer})(x::Vector{Float64})
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/MOI_wrapper.jl:1081
 [11] _Eval_F_CB(n::Int32, x_ptr::Ptr{Float64}, x_new::Int32, obj_value::Ptr{Float64}, user_data::Ptr{Nothing})
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/C_wrapper.jl:33
 [12] IpoptSolve(prob::Ipopt.IpoptProblem)
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/C_wrapper.jl:433
 [13] optimize!(model::Ipopt.Optimizer)
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/MOI_wrapper.jl:1225
 [14] optimize!
    @ ~/.julia/packages/MathOptInterface/FHFUH/src/Bridges/bridge_optimizer.jl:348 [inlined]
 [15] optimize!
    @ ~/.julia/packages/MathOptInterface/FHFUH/src/MathOptInterface.jl:81 [inlined]
 [16] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/FHFUH/src/Utilities/cachingoptimizer.jl:313
 [17] optimize!(model::Model; ignore_optimize_hook::Bool, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ JuMP ~/.julia/dev/JuMP/src/optimizer_interface.jl:161
 [18] optimize!(model::Model)
    @ JuMP ~/.julia/dev/JuMP/src/optimizer_interface.jl:143
 [19] top-level scope
    @ REPL[8]:1
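
The error message above points at the fix: allocate temporary storage with the input's element type. As a quick sanity check outside JuMP (a hedged sketch, not part of the PR; `f_generic` is just an illustrative name), the rewritten function differentiates cleanly with ForwardDiff:

```julia
import ForwardDiff

# Same function as above, but the temporary storage uses the element type of
# the input instead of defaulting to Float64.
function f_generic(x::T) where {T<:Real}
    if x >= 1
        y = zeros(T, 1)      # was zeros(1), which hard-codes Float64
        y[1] = (x - 1)^2
        return y[1]
    else
        return (x - 1)^2
    end
end

ForwardDiff.derivative(f_generic, 2.0)  # 2.0, no MethodError on the x >= 1 branch
```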

@codecov (bot) commented on Mar 11, 2022

Codecov Report

Merging #2917 (159910a) into master (3446496) will increase coverage by 0.07%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master    #2917      +/-   ##
==========================================
+ Coverage   95.30%   95.38%   +0.07%     
==========================================
  Files          43       43              
  Lines        5778     5789      +11     
==========================================
+ Hits         5507     5522      +15     
+ Misses        271      267       -4     
Impacted Files                  Coverage Δ
src/nlp.jl                      94.65% <100.00%> (+0.35%) ⬆️
src/_Derivatives/forward.jl     95.23% <0.00%> (+0.73%) ⬆️

@odow (Member, Author) commented on Mar 11, 2022

Another decision is what to do about other error types. One thing I see a lot is domain errors, particularly with log(0) (this isn't limited to user-defined functions either); one possible user-side work-around is sketched after the log below:

julia> using JuMP

julia> import Ipopt

julia> function f(x)
           if x >= 1
               return log(-x)
           else
               return (x - 1)^2
           end
       end
f (generic function with 1 method)

julia> model = Model(Ipopt.Optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: Ipopt

julia> @variable(model, x >= 0)
x

julia> @NLobjective(model, Min, f(x))
┌ Warning: Function f automatically registered with 1 arguments.
│ 
│ Calling the function with a different number of arguments will result in an
│ error.
│ 
│ While you can safely ignore this warning, we recommend that you manually
│ register the function as follows:
│ ```Julia
│ model = Model()
│ register(model, :f, 1, f; autodiff = true)
│ ```
└ @ JuMP ~/.julia/dev/JuMP/src/parse_nlp.jl:21

julia> optimize!(model)
This is Ipopt version 3.14.4, running with linear solver MUMPS 5.4.1.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        1

Total number of variables............................:        1
                     variables with only lower bounds:        1
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  9.8010002e-01 0.00e+00 2.98e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  7.6134183e-01 0.00e+00 1.76e+00  -1.0 1.17e-01    -  3.61e-01 1.00e+00f  1
ERROR: DomainError with -1.3445591286209058:
log will only return a complex result if called with a complex argument. Try log(Complex(x)).
Stacktrace:
  [1] throw_complex_domainerror(f::Symbol, x::Float64)
    @ Base.Math ./math.jl:33
  [2] log(x::Float64)
    @ Base.Math ./special/log.jl:285
  [3] f(x::Float64)
    @ Main ./REPL[19]:3
  [4] eval_and_check_return_type(func::Function, RetType::Type, args::Float64)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:5
  [5] forward_eval(storage::Vector{Float64}, partials_storage::Vector{Float64}, nd::Vector{JuMP._Derivatives.NodeData}, adj::SparseArrays.SparseMatrixCSC{Bool, Int64}, const_values::Vector{Float64}, parameter_values::Vector{Float64}, x_values::Vector{Float64}, subexpression_values::Vector{Float64}, user_input_buffer::Vector{Float64}, user_output_buffer::Vector{Float64}, user_operators::JuMP._Derivatives.UserOperatorRegistry)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:212
  [6] _forward_eval_all(d::NLPEvaluator, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:773
  [7] macro expansion
    @ ~/.julia/dev/JuMP/src/nlp.jl:826 [inlined]
  [8] macro expansion
    @ ./timing.jl:287 [inlined]
  [9] eval_objective(d::NLPEvaluator, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:824
 [10] _eval_objective(model::Ipopt.Optimizer, x::Vector{Float64})
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/MOI_wrapper.jl:821
 [11] (::Ipopt.var"#eval_f_cb#3"{Ipopt.Optimizer})(x::Vector{Float64})
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/MOI_wrapper.jl:1081
 [12] _Eval_F_CB(n::Int32, x_ptr::Ptr{Float64}, x_new::Int32, obj_value::Ptr{Float64}, user_data::Ptr{Nothing})
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/C_wrapper.jl:33
 [13] IpoptSolve(prob::Ipopt.IpoptProblem)
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/C_wrapper.jl:433
 [14] optimize!(model::Ipopt.Optimizer)
    @ Ipopt ~/.julia/packages/Ipopt/M2QE8/src/MOI_wrapper.jl:1225
 [15] optimize!
    @ ~/.julia/packages/MathOptInterface/FHFUH/src/Bridges/bridge_optimizer.jl:348 [inlined]
 [16] optimize!
    @ ~/.julia/packages/MathOptInterface/FHFUH/src/MathOptInterface.jl:81 [inlined]
 [17] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/FHFUH/src/Utilities/cachingoptimizer.jl:313
 [18] optimize!(model::Model; ignore_optimize_hook::Bool, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ JuMP ~/.julia/dev/JuMP/src/optimizer_interface.jl:161
 [19] optimize!(model::Model)
    @ JuMP ~/.julia/dev/JuMP/src/optimizer_interface.jl:143
 [20] top-level scope
    @ REPL[23]:1
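
The DomainError above is thrown by the function itself (log(-x) with x >= 1 is never valid), so it appears with or without autodiff. For comparison, here is a minimal sketch of one possible user-side work-around, not something this PR adds: return NaN instead of throwing, so the solver sees an invalid trial point rather than a Julia exception (whether the solver then recovers depends on the solver and its options).

```julia
# Hypothetical guard (illustration only): avoid throwing from inside the
# user-defined function by returning NaN for arguments outside log's domain.
function f_guarded(x)
    if x >= 1
        return -x > 0 ? log(-x) : NaN
    else
        return (x - 1)^2
    end
end

f_guarded(2.0)  # NaN instead of a DomainError
f_guarded(0.5)  # 0.25
```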

@odow (Member, Author) commented on Mar 13, 2022

Performance seems unaffected (a note on why that is expected follows the logs below):

julia> using Revise

julia> using JuMP

julia> function f(x)
           if x >= 1
               y = zeros(1)
               y[1] = (x - 1)^2
               return y[1]
           else
               return (x - 1)^2
           end
       end
f (generic function with 1 method)

julia> model = Model();

julia> @variable(model, x >= 0);

julia> register(model, :f, 1, f; autodiff = true);

julia> @NLobjective(model, Min, f(x));

julia> nlp = NLPEvaluator(model);

julia> MOI.initialize(nlp, Symbol[])

julia> using BenchmarkTools

julia> g = [NaN]
1-element Vector{Float64}:
 NaN

julia> @benchmark MOI.eval_objective_gradient($nlp, g, [0.0])
BenchmarkTools.Trial: 10000 samples with 842 evaluations.
 Range (min … max):  143.952 ns …   4.653 μs  ┊ GC (min … max): 0.00% … 93.51%
 Time  (median):     150.334 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   165.010 ns ± 127.393 ns  ┊ GC (mean ± σ):  2.54% ±  3.29%

  ▄█▇▅▃▁ ▁      ▁                                               ▁
  ██████████▇▇█████▇▆▆▇▇▅▇▆▆▇█▇▇▆▅▆▅▆▅▄▆▄▅▆▆▄▆▆▅▅▆▆▅▄▅▄▅▅▅▄▄▅▅▅ █
  144 ns        Histogram: log(frequency) by time        339 ns <

 Memory estimate: 96 bytes, allocs estimate: 1.

julia> @benchmark MOI.eval_objective_gradient($nlp, g, [2.0])
ERROR: JuMP's autodiff of the user-defined function f failed with a MethodError.

Common reasons for this include:

 * the function assumes `Float64` will be passed as input, it must work for any
   generic `Real` type.
 * the function allocates temporary storage using `zeros(3)` or similar. This
   defaults to `Float64`, so use `zeros(T, 3)` instead.

As an example, instead of:
```julia
function my_function(x::Float64...)
    y = zeros(length(x))
    for i in 1:length(x)
        y[i] = x[i]^2
    end
    return sum(y)
end
```
use:
```julia
function my_function(x::T...) where {T<:Real}
    y = zeros(T, length(x))
    for i in 1:length(x)
        y[i] = x[i]^2
    end
    return sum(y)
end
```

Review the stacktrace below for more information, but it can often be hard to
understand why and where your function is failing. You can also debug this
outside of JuMP as follows:
```julia
import ForwardDiff

# If the input dimension is 1
x = 1.0
my_function(a) = a^2
ForwardDiff.derivative(my_function, x)

# If the input dimension is more than 1
x = [1.0, 2.0]
my_function(a, b) = a^2 + b^2
ForwardDiff.gradient(x -> my_function(x...), x)
```

Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:33
  [2] _intercept_ForwardDiff_MethodError(#unused#::MethodError, s::Symbol)
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:1914
  [3] (::JuMP.var"#139#140"{typeof(f), Symbol})(x::Float64)
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:2003
  [4] eval_and_check_return_type(func::Function, RetType::Type, args::Float64)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:5
  [5] forward_eval(storage::Vector{Float64}, partials_storage::Vector{Float64}, nd::Vector{JuMP._Derivatives.NodeData}, adj::SparseArrays.SparseMatrixCSC{Bool, Int64}, const_values::Vector{Float64}, parameter_values::Vector{Float64}, x_values::Vector{Float64}, subexpression_values::Vector{Float64}, user_input_buffer::Vector{Float64}, user_output_buffer::Vector{Float64}, user_operators::JuMP._Derivatives.UserOperatorRegistry)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:213
  [6] _forward_eval_all(d::NLPEvaluator, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:773
  [7] macro expansion
    @ ~/.julia/dev/JuMP/src/nlp.jl:842 [inlined]
  [8] macro expansion
    @ ./timing.jl:287 [inlined]
  [9] eval_objective_gradient(d::NLPEvaluator, g::Vector{Float64}, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:840
 [10] var"##core#267"(nlp#266::NLPEvaluator)
    @ Main ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:489
 [11] var"##sample#268"(::Tuple{NLPEvaluator}, __params::BenchmarkTools.Parameters)
    @ Main ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:495
 [12] _run(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; verbose::Bool, pad::String, kwargs::Base.Iterators.Pairs{Symbol, Integer, NTuple{4, Symbol}, NamedTuple{(:samples, :evals, :gctrial, :gcsample), Tuple{Int64, Int64, Bool, Bool}}})
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:99
 [13] #invokelatest#2
    @ ./essentials.jl:710 [inlined]
 [14] #run_result#45
    @ ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:34 [inlined]
 [15] run(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; progressid::Nothing, nleaves::Float64, ndone::Float64, kwargs::Base.Iterators.Pairs{Symbol, Integer, NTuple{5, Symbol}, NamedTuple{(:verbose, :samples, :evals, :gctrial, :gcsample), Tuple{Bool, Int64, Int64, Bool, Bool}}})
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:117
 [16] #warmup#54
    @ ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:169 [inlined]
 [17] warmup(item::BenchmarkTools.Benchmark)
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:169
 [18] top-level scope
    @ ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:393

caused by: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1})
Closest candidates are:
  (::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  (::Type{T})(::T) where T<:Number at boot.jl:760
  (::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number} at char.jl:50
  ...
Stacktrace:
  [1] convert(#unused#::Type{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1})
    @ Base ./number.jl:7
  [2] setindex!(A::Vector{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1}, i1::Int64)
    @ Base ./array.jl:839
  [3] f(x::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1})
    @ Main ./REPL[3]:4
  [4] derivative
    @ ~/.julia/packages/ForwardDiff/PBzup/src/derivative.jl:14 [inlined]
  [5] (::JuMP.var"#139#140"{typeof(f), Symbol})(x::Float64)
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:2001
  [6] eval_and_check_return_type(func::Function, RetType::Type, args::Float64)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:5
  [7] forward_eval(storage::Vector{Float64}, partials_storage::Vector{Float64}, nd::Vector{JuMP._Derivatives.NodeData}, adj::SparseArrays.SparseMatrixCSC{Bool, Int64}, const_values::Vector{Float64}, parameter_values::Vector{Float64}, x_values::Vector{Float64}, subexpression_values::Vector{Float64}, user_input_buffer::Vector{Float64}, user_output_buffer::Vector{Float64}, user_operators::JuMP._Derivatives.UserOperatorRegistry)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:213
  [8] _forward_eval_all(d::NLPEvaluator, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:773
  [9] macro expansion
    @ ~/.julia/dev/JuMP/src/nlp.jl:842 [inlined]
 [10] macro expansion
    @ ./timing.jl:287 [inlined]
 [11] eval_objective_gradient(d::NLPEvaluator, g::Vector{Float64}, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:840
 [12] var"##core#267"(nlp#266::NLPEvaluator)
    @ Main ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:489
 [13] var"##sample#268"(::Tuple{NLPEvaluator}, __params::BenchmarkTools.Parameters)
    @ Main ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:495
 [14] _run(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; verbose::Bool, pad::String, kwargs::Base.Iterators.Pairs{Symbol, Integer, NTuple{4, Symbol}, NamedTuple{(:samples, :evals, :gctrial, :gcsample), Tuple{Int64, Int64, Bool, Bool}}})
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:99
 [15] #invokelatest#2
    @ ./essentials.jl:710 [inlined]
 [16] #run_result#45
    @ ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:34 [inlined]
 [17] run(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; progressid::Nothing, nleaves::Float64, ndone::Float64, kwargs::Base.Iterators.Pairs{Symbol, Integer, NTuple{5, Symbol}, NamedTuple{(:verbose, :samples, :evals, :gctrial, :gcsample), Tuple{Bool, Int64, Int64, Bool, Bool}}})
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:117
 [18] #warmup#54
    @ ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:169 [inlined]
 [19] warmup(item::BenchmarkTools.Benchmark)
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:169
 [20] top-level scope
    @ ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:393

shell> git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.

julia> using Revise

julia> using JuMP

julia> function f(x)
           if x >= 1
               y = zeros(1)
               y[1] = (x - 1)^2
               return y[1]
           else
               return (x - 1)^2
           end
       end
f (generic function with 1 method)

julia> model = Model();

julia> @variable(model, x >= 0);

julia> register(model, :f, 1, f; autodiff = true);

julia> @NLobjective(model, Min, f(x));

julia> nlp = NLPEvaluator(model);

julia> MOI.initialize(nlp, Symbol[])

julia> using BenchmarkTools

julia> g = [NaN]
1-element Vector{Float64}:
 NaN

julia> @benchmark MOI.eval_objective_gradient($nlp, g, [0.0])
BenchmarkTools.Trial: 10000 samples with 829 evaluations.
 Range (min … max):  144.097 ns …   5.181 μs  ┊ GC (min … max): 0.00% … 96.85%
 Time  (median):     152.604 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   168.937 ns ± 140.527 ns  ┊ GC (mean ± σ):  2.66% ±  3.19%

  ▅▇█▇▅▃▂▂▂▂   ▁  ▁ ▁                                           ▂
  ██████████▇█▇█████████▇▇▇█▇▇▇▇▇▇▇▇▇▇▇▇▆▇▇▆▆▆▇▆▆▆▅▆▅▆▅▅▅▅▃▅▆▅▅ █
  144 ns        Histogram: log(frequency) by time        328 ns <

 Memory estimate: 96 bytes, allocs estimate: 1.

julia> @benchmark MOI.eval_objective_gradient($nlp, g, [2.0])
ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1})
Closest candidates are:
  (::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  (::Type{T})(::T) where T<:Number at boot.jl:760
  (::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number} at char.jl:50
  ...
Stacktrace:
  [1] convert(#unused#::Type{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1})
    @ Base ./number.jl:7
  [2] setindex!(A::Vector{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1}, i1::Int64)
    @ Base ./array.jl:839
  [3] f(x::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1})
    @ Main ./REPL[17]:4
  [4] derivative
    @ ~/.julia/packages/ForwardDiff/PBzup/src/derivative.jl:14 [inlined]
  [5] (::JuMP.var"#226#228"{typeof(f)})(x::Float64)
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:2015
  [6] eval_and_check_return_type(func::Function, RetType::Type, args::Float64)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:5
  [7] forward_eval(storage::Vector{Float64}, partials_storage::Vector{Float64}, nd::Vector{JuMP._Derivatives.NodeData}, adj::SparseArrays.SparseMatrixCSC{Bool, Int64}, const_values::Vector{Float64}, parameter_values::Vector{Float64}, x_values::Vector{Float64}, subexpression_values::Vector{Float64}, user_input_buffer::Vector{Float64}, user_output_buffer::Vector{Float64}, user_operators::JuMP._Derivatives.UserOperatorRegistry)
    @ JuMP._Derivatives ~/.julia/dev/JuMP/src/_Derivatives/forward.jl:213
  [8] _forward_eval_all(d::NLPEvaluator, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:773
  [9] macro expansion
    @ ~/.julia/dev/JuMP/src/nlp.jl:842 [inlined]
 [10] macro expansion
    @ ./timing.jl:287 [inlined]
 [11] eval_objective_gradient(d::NLPEvaluator, g::Vector{Float64}, x::Vector{Float64})
    @ JuMP ~/.julia/dev/JuMP/src/nlp.jl:840
 [12] var"##core#281"(nlp#280::NLPEvaluator)
    @ Main ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:489
 [13] var"##sample#282"(::Tuple{NLPEvaluator}, __params::BenchmarkTools.Parameters)
    @ Main ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:495
 [14] _run(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; verbose::Bool, pad::String, kwargs::Base.Iterators.Pairs{Symbol, Integer, NTuple{4, Symbol}, NamedTuple{(:samples, :evals, :gctrial, :gcsample), Tuple{Int64, Int64, Bool, Bool}}})
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:99
 [15] #invokelatest#2
    @ ./essentials.jl:710 [inlined]
 [16] #run_result#45
    @ ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:34 [inlined]
 [17] run(b::BenchmarkTools.Benchmark, p::BenchmarkTools.Parameters; progressid::Nothing, nleaves::Float64, ndone::Float64, kwargs::Base.Iterators.Pairs{Symbol, Integer, NTuple{5, Symbol}, NamedTuple{(:verbose, :samples, :evals, :gctrial, :gcsample), Tuple{Bool, Int64, Int64, Bool, Bool}}})
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:117
 [18] #warmup#54
    @ ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:169 [inlined]
 [19] warmup(item::BenchmarkTools.Benchmark)
    @ BenchmarkTools ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:169
 [20] top-level scope
    @ ~/.julia/packages/BenchmarkTools/7xSXH/src/execution.jl:393
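
One plausible reason the checked path costs nothing in these benchmarks is that the interception only needs to wrap the ForwardDiff call in a try/catch, which is essentially free when no exception is thrown. The sketch below is an assumption about the mechanism, not the PR's actual code; `checked_derivative` is a hypothetical name (the real logic lives in src/nlp.jl around `_intercept_ForwardDiff_MethodError`).

```julia
import ForwardDiff

# Hypothetical sketch of checked autodiff: convert a MethodError raised while
# differentiating a user-defined function into a more informative error.
function checked_derivative(f::Function, x::Real)
    return try
        ForwardDiff.derivative(f, x)
    catch err
        if err isa MethodError
            error(
                "JuMP's autodiff of the user-defined function failed with a " *
                "MethodError. Make the function generic over `Real` inputs, " *
                "e.g. use `zeros(T, n)` instead of `zeros(n)`.",
            )
        end
        rethrow()
    end
end

checked_derivative(x -> (x - 1)^2, 0.0)  # -2.0; the happy path is unchanged
```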

odow changed the title from "WIP: add checked autodiff for user-defined functions" to "Add checked autodiff for user-defined functions" on Mar 13, 2022
odow mentioned this pull request on Mar 14, 2022
odow merged commit 8c925d1 into master on Mar 14, 2022
odow deleted the od/checked-udf branch on March 14, 2022
Successfully merging this pull request may close this issue: NLP Interface Error Message with Registered Functions (#2910)