ERROR: LoadError: AssertionError: isfinite(phi_c) && isfinite(dphi_c) #168

Closed
benxro opened this issue Jul 7, 2022 · 2 comments · Fixed by #172

benxro commented Jul 7, 2022

Sometimes when I run the Emulate step, I get the error message from the title, with the stack trace below.

This sounds very similar to the problems encountered here and here. The problem seems to be the HagerZhang line search algorithm that Optim.optimize() uses by default. The proposed solution was to choose a different line search algorithm.
However, I did not find an option to do so from within the CES package.

I'm new to Julia, but I tried adapting the function "optimize_hyperparameters()" in src/GaussianProcess.jl and couldn't manage to pass an alternative line search algorithm down to the Optim.optimize() method.
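For context, here is a minimal sketch of the kind of override I was looking for when calling GaussianProcesses.jl directly rather than going through CES; BackTracking is just an illustrative alternative line search, and this is not a CES API:

```julia
using GaussianProcesses, Optim, LineSearches

# Sketch, assuming `gp` is an already-constructed GPE model.
# GaussianProcesses.optimize! accepts the Optim method as a positional
# argument, so an alternative line search can be supplied there instead
# of the default HagerZhang.
method = LBFGS(linesearch = LineSearches.BackTracking())
optimize!(gp, method)
```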

Is there currently an option to change the line search algorithm from within the CES package?
Is this a known problem, and are there ways to prevent it (e.g. preprocessing or filtering the data)?

Unfortunately, I'm not able to produce a minimal working example, but I could upload a data container for which this happens.
Also, regarding my "sometimes" above: the error occurs deterministically; it's just not clear to me when it happens, as it does not seem to depend on the number of training points or on particular outliers.

With kind regards
Ben

```
ERROR: LoadError: AssertionError: isfinite(phi_c) && isfinite(dphi_c)
Stacktrace:
  [1] secant2!(ϕdϕ::LineSearches.var"#ϕdϕ#6"{Optim.ManifoldObjective{NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Float64}}}, Vector{Float64}, Vector{Float64}, Vector{Float64}}, alphas::Vector{Float64}, values::Vector{Float64}, slopes::Vector{Float64}, ia::Int64, ib::Int64, phi_lim::Float64, delta::Float64, sigma::Float64, display::Int64)
    @ LineSearches ~/.julia/packages/LineSearches/Ki4c5/src/hagerzhang.jl:369
  [2] (::LineSearches.HagerZhang{Float64, Base.RefValue{Bool}})(ϕ::Function, ϕdϕ::LineSearches.var"#ϕdϕ#6"{Optim.ManifoldObjective{NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Float64}}}, Vector{Float64}, Vector{Float64}, Vector{Float64}}, c::Float64, phi_0::Float64, dphi_0::Float64)
    @ LineSearches ~/.julia/packages/LineSearches/Ki4c5/src/hagerzhang.jl:269
  [3] HagerZhang
    @ ~/.julia/packages/LineSearches/Ki4c5/src/hagerzhang.jl:101 [inlined]
  [4] perform_linesearch!(state::Optim.LBFGSState{Vector{Float64}, Vector{Vector{Float64}}, Vector{Vector{Float64}}, Float64, Vector{Float64}}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, d::Optim.ManifoldObjective{NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Float64}}})
    @ Optim ~/.julia/packages/Optim/6Lpjy/src/utilities/perform_linesearch.jl:59
  [5] update_state!(d::NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Float64}}, state::Optim.LBFGSState{Vector{Float64}, Vector{Vector{Float64}}, Vector{Vector{Float64}}, Float64, Vector{Float64}}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"})
    @ Optim ~/.julia/packages/Optim/6Lpjy/src/multivariate/solvers/first_order/l_bfgs.jl:204
  [6] optimize(d::NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Float64}}, initial_x::Vector{Float64}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, options::Optim.Options{Float64, Nothing}, state::Optim.LBFGSState{Vector{Float64}, Vector{Vector{Float64}}, Vector{Vector{Float64}}, Float64, Vector{Float64}})
    @ Optim ~/.julia/packages/Optim/6Lpjy/src/multivariate/optimize/optimize.jl:54
  [7] optimize(d::NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Float64}}, initial_x::Vector{Float64}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, options::Optim.Options{Float64, Nothing}) (repeats 2 times)
    @ Optim ~/.julia/packages/Optim/6Lpjy/src/multivariate/optimize/optimize.jl:36
  [8] optimize!(::GPE{Matrix{Float64}, Vector{Float64}, MeanZero, SEArd{Float64}, GaussianProcesses.FullCovariance, GaussianProcesses.StationaryARDData{Array{Float64, 3}}, PDMats.PDMat{Float64, Matrix{Float64}}, GaussianProcesses.Scalar{Float64}}, ::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}; domean::Bool, kern::Bool, noise::Bool, lik::Bool, meanbounds::Nothing, kernbounds::Nothing, noisebounds::Nothing, likbounds::Nothing, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ GaussianProcesses ~/.julia/packages/GaussianProcesses/kzIIW/src/optimize.jl:28
  [9] optimize_hyperparameters!(gp::GaussianProcess{GPJL})
    @ CalibrateEmulateSample.Emulators ~/ml-in-climate-science/Project/CalibrateEmulateSample.jl/src/GaussianProcess.jl:182
 [10] optimize_hyperparameters!(emulator::Emulator{Float64})
    @ CalibrateEmulateSample.Emulators ~/ml-in-climate-science/Project/CalibrateEmulateSample.jl/src/Emulator.jl:154
 [11] top-level scope
    @ ~/ml-in-climate-science/Project/CES_vs_SBI/04_CES/Emulate-Sample.jl:194
in expression starting at /home/ben/ml-in-climate-science/Project/CES_vs_SBI/04_CES/Emulate-Sample.jl:194
```

odunbar (Collaborator) commented Jul 20, 2022

Hi Ben - thanks for finding this.

We have had plenty of stability issues with GP hyperparameter optimization in GaussianProcesses.jl. This is the primary reason we still keep the Python alternative (scikit-learn), which is typically far more robust. However, I'll add the option to pass parameters to Optim - it seems we can use the args/kwargs setup that GaussianProcesses.jl expects here:
https://github.com/STOR-i/GaussianProcesses.jl/blob/c226196ccbe5117b5a3f32c036178a450c024eb2/src/optimize.jl#L19-L37

The only annoyance I can see with this function is that they set the default method via a positional argument, which makes passing args more difficult if we do not wish CES to have a dependency on Optim.
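A rough sketch of the pass-through this would need in CES (the function body and the `models` field access are illustrative assumptions, not the merged implementation; see #172 for the actual fix):

```julia
using GaussianProcesses, Optim, LineSearches

# Hypothetical sketch: forward positional args (e.g. an Optim method with a
# different line search) and kwargs from CES down to GaussianProcesses.optimize!.
function optimize_hyperparameters!(gp::GaussianProcess{GPJL}, args...; kwargs...)
    for model in gp.models  # assumed field holding the per-output GPE models
        optimize!(model, args...; kwargs...)
    end
end

# A caller could then pick a more forgiving line search, e.g.:
# optimize_hyperparameters!(gp, LBFGS(linesearch = LineSearches.BackTracking()))
```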


benxro (Author) commented Jul 28, 2022

Great, thank you very much for fixing this! :)
