
Auto-promotion does not work while using NeuralODE likelihood in Turing.jl #893

Open
canbozdogan opened this issue Dec 29, 2023 · 0 comments
Labels
bug Something isn't working


@canbozdogan

Describe the bug 🐞

When a NeuralODE model is used inside a Turing.jl model, with parameters and data stored as Float64, the automatic promotion of the sampled parameters to the ForwardDiff.Dual type needed for AD does not take place. Sampling then fails with a MethodError while trying to convert a ForwardDiff.Dual to Float64. Note that in the example below the sampled parameter D_z enters the solve through the layer states (st), not through the trainable parameters.

Expected behavior
The parameters of the NeuralODE should be promoted automatically to the appropriate Dual type for AD, just as they are for a plain ODEProblem, without manual intervention, so that the model works with Turing.jl's sampling.
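For comparison, here is a minimal standalone sketch (not part of the MRE; `decay!` and `loss` are names introduced only for this illustration) of the promotion that a plain ODEProblem performs, which is the behavior I would expect from the NeuralODE as well:

```julia
using OrdinaryDiffEq, ForwardDiff

# Differentiating through the solve w.r.t. p works for a plain ODEProblem
# because the solver promotes u0 internally to the Dual element type of p.
decay!(du, u, p, t) = (du .= -p[1] .* u)

function loss(p)
    prob = ODEProblem(decay!, [1.0], (0.0, 1.0), p)
    return sum(solve(prob, Tsit5(); saveat = 0.5)[end])
end

ForwardDiff.gradient(loss, [0.5])  # no MethodError here
```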

Minimal Reproducible Example 👇

You can download trained NeuralODE parameters from here:
nn_parameters.zip

using DifferentialEquations, LinearAlgebra, JLD2, Turing
using Lux, Statistics, Plots, ForwardDiff 
using Random, DiffEqFlux, ComponentArrays, Optimization, OptimizationOptimisers


C_0 = 0.0014
rel_start = 0
rel_end = 10
N = 10
tspan = (rel_start,rel_end)
datasize = 50
tsteps = range(tspan[1], tspan[2]; length = datasize)
y_0 = fill(C_0, N)


function diffusion!(dOx, Ox, p, t)
    C_eq, h, N, D_z = p
    N = Int(N)
    h0 = h / N

    # Finite-difference Laplacian: equilibrium boundary (C_eq) at the first
    # cell, no-flux boundary at the last cell
    dOx[1] = (D_z * (Ox[2] - Ox[1]) - D_z * (Ox[1] - C_eq)) / h0^2
    dOx[2:N-1] = (D_z * (Ox[3:N] - Ox[2:N-1]) - D_z * (Ox[2:N-1] - Ox[1:N-2])) / h0^2
    dOx[N] = -D_z * (Ox[N] - Ox[N-1]) / h0^2
end

p_ode = [40.6205, 0.0015, 10, 1.9340e-08]

J_prototype = Tridiagonal(fill(1.0, N - 1), fill(1.0, N), fill(1.0, N - 1))
fun = ODEFunction(diffusion!, jac_prototype = J_prototype)
prob = ODEProblem(fun, y_0, tspan, p_ode)
ode_data = Array(solve(prob, Rodas5(); saveat = tsteps))

true_ode_data = mean(ode_data, dims=1)


struct DiffusionLayer <: Lux.AbstractExplicitLayer
    C_eq::Float64
    h::Float64
    D_z::Float64

end


# The default constructor suffices here; an untyped outer constructor with the
# same argument list would shadow the converting constructor and call itself
# recursively for non-Float64 inputs.

# No trainable parameters in this layer
Lux.initialparameters(::AbstractRNG, ::DiffusionLayer) = NamedTuple()

# States are the physical ODE parameters C_eq, h, and D_z
Lux.initialstates(::AbstractRNG, layer::DiffusionLayer) = (C_eq=layer.C_eq, h=layer.h, D_z=layer.D_z)

function (layer::DiffusionLayer)(x, ps, st)
    N = length(x)
    h0 = st.h / N
    D = st.D_z / h0^2

    # Tridiagonal diffusion operator with modified boundary rows
    main_diag = [i == 1 ? -3 * D : i == N ? -D : -2 * D for i in 1:N]
    off_diag = fill(D, N - 1)
    A = Tridiagonal(off_diag, main_diag, off_diag)

    # Apply the equilibrium boundary condition to the first cell
    modified_x = vcat(x[1] - st.C_eq, x[2:end])
    dOx = A * modified_x

    return dOx, st
end

nn = Lux.Chain(
    DiffusionLayer(40.6205, 0.0015, 1.9340e-08),
    Lux.Dense(N, 50, tanh),
    Lux.Dense(50, N)
)


rng = Xoshiro(31)
p_nn , st = Lux.setup(rng, nn)


neural_ode = NeuralODE(nn, tspan, Tsit5(), saveat=tsteps)
neural_ode(y_0, p_nn, st)


function predict_neuralode(y_0, p_nn, st)
    solution = first(neural_ode(y_0, p_nn, st))
    return reduce(hcat, solution.u), solution.retcode
end

nn_param = load_object("nn_parameters.jld2")

@model function inverse_model(data)

    # Prior distribution
    D_z ~ Normal(1.9340e-08 , 1e-7) 

    ct = (layer_1 = (C_eq = 40.6205, h = 0.0015, D_z = D_z),
          layer_2 = NamedTuple(), layer_3 = NamedTuple())


    predicted, retcode = predict_neuralode(y_0, nn_param.u, ct)
    avg_concentration = mean(predicted, dims=1)[:]


    if retcode !== ReturnCode.Success 
        Turing.@addlogprob! -Inf
        return nothing
    end


    data[:] ~ MvNormal(avg_concentration,1)

    return nothing
end

model = inverse_model(true_ode_data)
chain = sample(model, NUTS(), MCMCSerial(), 100, 1; progress=true)
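
Until the promotion works upstream, a possible stopgap (a sketch under my assumptions, not a confirmed fix) is to promote the initial state manually to the Dual type that the sampled D_z injects through the layer states; `predict_neuralode_promoted` is a name introduced only for this sketch:

```julia
# Hypothetical workaround sketch: promote u0 to the element type of the sampled
# D_z carried in the states, since the Dual enters via st rather than via the
# trainable parameters.
function predict_neuralode_promoted(y_0, p_nn, st)
    T = promote_type(eltype(y_0), typeof(st.layer_1.D_z))
    solution = first(neural_ode(T.(y_0), p_nn, st))
    return reduce(hcat, solution.u), solution.retcode
end
```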

Error & Stacktrace ⚠️

julia> chain = sample(model, NUTS(), MCMCSerial(), 100, 1; progress=true)
ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, 
Float64}, Float64, 1})

Closest candidates are:
  (::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat
   @ Base rounding.jl:207
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number}
   @ Base char.jl:50
  ...

Stacktrace:
  [1] convert(#unused#::Type{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1})
    @ Base .\number.jl:7
  [2] setindex!(A::Vector{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, 
Float64, 1}, i1::Int64)
    @ Base .\array.jl:969
  [3] _unsafe_copyto!(dest::Vector{Float64}, doffs::Int64, src::Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}, soffs::Int64, n::Int64)
    @ Base .\array.jl:250
  [4] unsafe_copyto!
    @ .\array.jl:304 [inlined]
  [5] _copyto_impl!
    @ .\array.jl:327 [inlined]
  [6] copyto!
    @ .\array.jl:314 [inlined]
  [7] copyto!
    @ .\array.jl:339 [inlined]
  [8] copyto_axcheck!
    @ .\abstractarray.jl:1182 [inlined]
  [9] Vector{Float64}(x::Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}})
    @ Base .\array.jl:621
 [10] convert
    @ .\array.jl:613 [inlined]
 [11] setproperty!
    @ .\Base.jl:38 [inlined]
 [12] initialize!(integrator::OrdinaryDiffEq.ODEIntegrator{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, false, Vector{Float64}, Nothing, Float64, ComponentVector{Float32, Vector{Float32}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:176, Axis(weight = ViewAxis(1:160, ShapedAxis((16, 10), NamedTuple())), bias = ViewAxis(161:176, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(177:346, Axis(weight = ViewAxis(1:160, ShapedAxis((10, 16), NamedTuple())), bias = ViewAxis(161:170, ShapedAxis((10, 1), NamedTuple())))))}}}, Float64, 
Float64, Float64, Float64, Vector{Vector{Float64}}, ODESolution{Float64, 2, Vector{Vector{Float64}}, Nothing, Nothing, Vector{Float64}, Vector{Vector{Vector{Float64}}}, ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, false, ComponentVector{Float32, Vector{Float32}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:176, Axis(weight = ViewAxis(1:160, ShapedAxis((16, 10), NamedTuple())), bias = ViewAxis(161:176, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(177:346, Axis(weight = ViewAxis(1:160, ShapedAxis((10, 16), NamedTuple())), bias = ViewAxis(161:170, ShapedAxis((10, 1), NamedTuple())))))}}}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#24"{Lux.Experimental.StatefulLuxLayer{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{DiffusionLayer, Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}}, Nothing}, Nothing, NamedTuple{(:layer_1, :layer_2, :layer_3), 
Tuple{NamedTuple{(:C_eq, :h, :D_z), Tuple{Float64, Float64, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, OrdinaryDiffEq.InterpolationData{ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#24"{Lux.Experimental.StatefulLuxLayer{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{DiffusionLayer, Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}}, Nothing}, Nothing, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:C_eq, :h, :D_z), Tuple{Float64, Float64, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Vector{Vector{Float64}}, Vector{Float64}, Vector{Vector{Vector{Float64}}}, OrdinaryDiffEq.Tsit5ConstantCache}, DiffEqBase.Stats, Nothing}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#24"{Lux.Experimental.StatefulLuxLayer{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{DiffusionLayer, Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}}, Nothing}, Nothing, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:C_eq, :h, 
:D_z), Tuple{Float64, Float64, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, OrdinaryDiffEq.Tsit5ConstantCache, OrdinaryDiffEq.DEOptions{Float64, Float64, Float64, Float64, PIController{Rational{Int64}}, typeof(DiffEqBase.ODE_DEFAULT_NORM), typeof(opnorm), Nothing, CallbackSet{Tuple{}, Tuple{}}, typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN), typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE), typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK), DataStructures.BinaryHeap{Float64, DataStructures.FasterForward}, DataStructures.BinaryHeap{Float64, DataStructures.FasterForward}, Nothing, Nothing, Int64, 
Tuple{}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Tuple{}}, Vector{Float64}, Float64, Nothing, OrdinaryDiffEq.DefaultInit}, cache::OrdinaryDiffEq.Tsit5ConstantCache)
    @ OrdinaryDiffEq C:\Users\bozdoc\.julia\packages\OrdinaryDiffEq\FFFcA\src\perform_step\low_order_rk_perform_step.jl:726
 [13] __init(prob::ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, false, ComponentVector{Float32, Vector{Float32}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:176, Axis(weight = ViewAxis(1:160, ShapedAxis((16, 10), NamedTuple())), bias = ViewAxis(161:176, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(177:346, Axis(weight = ViewAxis(1:160, ShapedAxis((10, 16), NamedTuple())), bias = ViewAxis(161:170, ShapedAxis((10, 1), NamedTuple())))))}}}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#24"{Lux.Experimental.StatefulLuxLayer{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{DiffusionLayer, Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}}, Nothing}, Nothing, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:C_eq, :h, :D_z), Tuple{Float64, Float64, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, alg::Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, timeseries_init::Tuple{}, ts_init::Tuple{}, ks_init::Tuple{}, recompile::Type{Val{true}}; saveat::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, tstops::Tuple{}, d_discontinuities::Tuple{}, save_idxs::Nothing, save_everystep::Bool, save_on::Bool, save_start::Bool, save_end::Nothing, callback::Nothing, dense::Bool, calck::Bool, dt::Float64, dtmin::Nothing, dtmax::Float64, force_dtmin::Bool, adaptive::Bool, gamma::Rational{Int64}, abstol::Nothing, reltol::Nothing, 
qmin::Rational{Int64}, qmax::Int64, qsteady_min::Int64, qsteady_max::Int64, beta1::Nothing, beta2::Nothing, qoldinit::Rational{Int64}, controller::Nothing, fullnormalize::Bool, failfactor::Int64, maxiters::Int64, internalnorm::typeof(DiffEqBase.ODE_DEFAULT_NORM), internalopnorm::typeof(opnorm), isoutofdomain::typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN), unstable_check::typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK), verbose::Bool, timeseries_errors::Bool, dense_errors::Bool, advance_to_tstop::Bool, stop_at_next_tstop::Bool, initialize_save::Bool, progress::Bool, progress_steps::Int64, progress_name::String, progress_message::typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE), progress_id::Symbol, userdata::Nothing, allow_extrapolation::Bool, initialize_integrator::Bool, alias_u0::Bool, alias_du0::Bool, initializealg::OrdinaryDiffEq.DefaultInit, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})  
    @ OrdinaryDiffEq C:\Users\bozdoc\.julia\packages\OrdinaryDiffEq\FFFcA\src\solve.jl:502
 [14] __init (repeats 5 times)
    @ C:\Users\bozdoc\.julia\packages\OrdinaryDiffEq\FFFcA\src\solve.jl:10 [inlined]
 [15] __solve(::ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, false, ComponentVector{Float32, Vector{Float32}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:176, Axis(weight = ViewAxis(1:160, ShapedAxis((16, 10), NamedTuple())), bias = ViewAxis(161:176, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(177:346, Axis(weight = ViewAxis(1:160, ShapedAxis((10, 16), NamedTuple())), bias = ViewAxis(161:170, ShapedAxis((10, 1), NamedTuple())))))}}}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#24"{Lux.Experimental.StatefulLuxLayer{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{DiffusionLayer, Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}}, Nothing}, Nothing, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:C_eq, :h, :D_z), Tuple{Float64, Float64, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), 
Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Base.Pairs{Symbol, Union{}, Tuple{}, 
NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, ::Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}; kwargs::Base.Pairs{Symbol, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Tuple{Symbol}, NamedTuple{(:saveat,), Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}})
    @ OrdinaryDiffEq C:\Users\bozdoc\.julia\packages\OrdinaryDiffEq\FFFcA\src\solve.jl:5
 [16] __solve
    @ C:\Users\bozdoc\.julia\packages\OrdinaryDiffEq\FFFcA\src\solve.jl:1 [inlined]
 [17] #solve_call#34
    @ C:\Users\bozdoc\.julia\packages\DiffEqBase\s433k\src\solve.jl:559 [inlined]
 [18] solve_up(prob::ODEProblem{Vector{Float64}, Tuple{Int64, Int64}, false, ComponentVector{Float32, Vector{Float32}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:176, Axis(weight = ViewAxis(1:160, ShapedAxis((16, 10), NamedTuple())), bias = ViewAxis(161:176, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(177:346, Axis(weight = ViewAxis(1:160, ShapedAxis((10, 16), NamedTuple())), bias = ViewAxis(161:170, ShapedAxis((10, 1), NamedTuple())))))}}}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#24"{Lux.Experimental.StatefulLuxLayer{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{DiffusionLayer, Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}}, Nothing}, Nothing, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:C_eq, :h, :D_z), Tuple{Float64, Float64, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, 
Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, sensealg::InterpolatingAdjoint{0, true, Val{:central}, ZygoteVJP}, u0::Vector{Float64}, p::ComponentVector{Float32, Vector{Float32}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:176, Axis(weight = ViewAxis(1:160, ShapedAxis((16, 10), NamedTuple())), bias = ViewAxis(161:176, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(177:346, Axis(weight = ViewAxis(1:160, ShapedAxis((10, 16), NamedTuple())), bias = ViewAxis(161:170, ShapedAxis((10, 1), NamedTuple())))))}}}, args::Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}; kwargs::Base.Pairs{Symbol, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Tuple{Symbol}, NamedTuple{(:saveat,), 
Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}})  
    @ DiffEqBase C:\Users\bozdoc\.julia\packages\DiffEqBase\s433k\src\solve.jl:1020
 [19] solve_up
    @ C:\Users\bozdoc\.julia\packages\DiffEqBase\s433k\src\solve.jl:993 [inlined]
 [20] solve(prob::ODEProblem{Vector{Float64}, Tuple{Int64, Int64}, false, ComponentVector{Float32, Vector{Float32}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:176, Axis(weight = ViewAxis(1:160, ShapedAxis((16, 10), NamedTuple())), bias = ViewAxis(161:176, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(177:346, Axis(weight = ViewAxis(1:160, ShapedAxis((10, 16), NamedTuple())), bias = ViewAxis(161:170, ShapedAxis((10, 1), NamedTuple())))))}}}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#24"{Lux.Experimental.StatefulLuxLayer{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{DiffusionLayer, Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}}, Nothing}, Nothing, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:C_eq, :h, :D_z), Tuple{Float64, Float64, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, args::Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}; sensealg::InterpolatingAdjoint{0, true, Val{:central}, ZygoteVJP}, u0::Nothing, p::Nothing, wrap::Val{true}, kwargs::Base.Pairs{Symbol, 
StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Tuple{Symbol}, NamedTuple{(:saveat,), Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}})
    @ DiffEqBase C:\Users\bozdoc\.julia\packages\DiffEqBase\s433k\src\solve.jl:930
 [21] solve
    @ C:\Users\bozdoc\.julia\packages\DiffEqBase\s433k\src\solve.jl:920 [inlined]
 [22] (::NeuralODE{Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{DiffusionLayer, Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}}, Nothing}, Tuple{Int64, Int64}, Tuple{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}}, Base.Pairs{Symbol, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Tuple{Symbol}, NamedTuple{(:saveat,), Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}}})(x::Vector{Float64}, p::ComponentVector{Float32, Vector{Float32}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:176, Axis(weight = ViewAxis(1:160, ShapedAxis((16, 10), NamedTuple())), bias = ViewAxis(161:176, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(177:346, Axis(weight = ViewAxis(1:160, ShapedAxis((10, 16), NamedTuple())), bias = ViewAxis(161:170, ShapedAxis((10, 1), NamedTuple())))))}}}, st::NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:C_eq, :h, :D_z), Tuple{Float64, Float64, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}})
    @ DiffEqFlux C:\Users\bozdoc\.julia\packages\DiffEqFlux\7OfDv\src\neural_de.jl:53
 [23] predict_neuralode(y_0::Vector{Float64}, p_nn::ComponentVector{Float32, Vector{Float32}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:176, Axis(weight = ViewAxis(1:160, ShapedAxis((16, 10), NamedTuple())), bias = ViewAxis(161:176, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(177:346, Axis(weight = ViewAxis(1:160, ShapedAxis((10, 16), NamedTuple())), bias = ViewAxis(161:170, ShapedAxis((10, 1), NamedTuple())))))}}}, st::NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:C_eq, :h, :D_z), Tuple{Float64, Float64, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}})
    @ Main c:\Users\bozdoc\Desktop\CanRepo\Phase-1\minimalworking.jl:92
 [24] inverse_model(__model__::DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, __varinfo__::DynamicPPL.ThreadSafeVarInfo{DynamicPPL.TypedVarInfo{NamedTuple{(:D_z,), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}, Vector{Set{DynamicPPL.Selector}}}}}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}, Vector{Base.RefValue{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}}, 
__context__::DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, TaskLocalRNG}, data::Matrix{Float64})
    @ Main c:\Users\bozdoc\Desktop\CanRepo\Phase-1\minimalworking.jl:109
 [25] _evaluate!!(model::DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, varinfo::DynamicPPL.ThreadSafeVarInfo{DynamicPPL.TypedVarInfo{NamedTuple{(:D_z,), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}, Vector{Set{DynamicPPL.Selector}}}}}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}, Vector{Base.RefValue{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}}, context::DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, TaskLocalRNG})
    @ DynamicPPL C:\Users\bozdoc\.julia\packages\DynamicPPL\oX6N7\src\model.jl:963
 [26] evaluate_threadsafe!!(model::DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, varinfo::DynamicPPL.TypedVarInfo{NamedTuple{(:D_z,), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}, Vector{Set{DynamicPPL.Selector}}}}}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}, context::DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, TaskLocalRNG})
    @ DynamicPPL C:\Users\bozdoc\.julia\packages\DynamicPPL\oX6N7\src\model.jl:952
 [27] evaluate!!
    @ C:\Users\bozdoc\.julia\packages\DynamicPPL\oX6N7\src\model.jl:887 [inlined]
 [28] logdensity(f::LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:D_z,), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, TaskLocalRNG}}, θ::Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}})
    @ DynamicPPL C:\Users\bozdoc\.julia\packages\DynamicPPL\oX6N7\src\logdensityfunction.jl:94      
 [29] Fix1
    @ .\operators.jl:1108 [inlined]
 [30] vector_mode_dual_eval!
    @ C:\Users\bozdoc\.julia\packages\ForwardDiff\PcZ48\src\apiutils.jl:24 [inlined]
 [31] vector_mode_gradient!
    @ C:\Users\bozdoc\.julia\packages\ForwardDiff\PcZ48\src\gradient.jl:96 [inlined]
 [32] gradient!(result::DiffResults.MutableDiffResult{1, Float64, Tuple{Vector{Float64}}}, f::Base.Fix1{typeof(LogDensityProblems.logdensity), LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:D_z,), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), 
Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, TaskLocalRNG}}}, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}, ::Val{true})
    @ ForwardDiff C:\Users\bozdoc\.julia\packages\ForwardDiff\PcZ48\src\gradient.jl:37
 [33] gradient!
    @ C:\Users\bozdoc\.julia\packages\ForwardDiff\PcZ48\src\gradient.jl:35 [inlined]
 [34] logdensity_and_gradient
    @ C:\Users\bozdoc\.julia\packages\LogDensityProblemsAD\OQ0BL\ext\LogDensityProblemsADForwardDiffExt.jl:118 [inlined]
 [35] ∂logπ∂θ
    @ C:\Users\bozdoc\.julia\packages\Turing\UCuzt\src\mcmc\hmc.jl:160 [inlined]
 [36] ∂H∂θ
    @ C:\Users\bozdoc\.julia\packages\AdvancedHMC\LJv94\src\hamiltonian.jl:38 [inlined]
 [37] phasepoint(h::AdvancedHMC.Hamiltonian{AdvancedHMC.DiagEuclideanMetric{Float64, Vector{Float64}}, AdvancedHMC.GaussianKinetic, Base.Fix1{typeof(LogDensityProblems.logdensity), LogDensityProblemsADForwardDiffExt.ForwardDiffLogDensity{LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:D_z,), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, TaskLocalRNG}}, ForwardDiff.Chunk{1}, ForwardDiff.Tag{Turing.TuringTag, Float64}, ForwardDiff.GradientConfig{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}}}, Turing.Inference.var"#∂logπ∂θ#36"{LogDensityProblemsADForwardDiffExt.ForwardDiffLogDensity{LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:D_z,), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, TaskLocalRNG}}, ForwardDiff.Chunk{1}, ForwardDiff.Tag{Turing.TuringTag, Float64}, ForwardDiff.GradientConfig{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}}}}, θ::Vector{Float64}, 
r::Vector{Float64})
    @ AdvancedHMC C:\Users\bozdoc\.julia\packages\AdvancedHMC\LJv94\src\hamiltonian.jl:80
 [38] phasepoint(rng::TaskLocalRNG, θ::Vector{Float64}, h::AdvancedHMC.Hamiltonian{AdvancedHMC.DiagEuclideanMetric{Float64, Vector{Float64}}, AdvancedHMC.GaussianKinetic, Base.Fix1{typeof(LogDensityProblems.logdensity), LogDensityProblemsADForwardDiffExt.ForwardDiffLogDensity{LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:D_z,), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, TaskLocalRNG}}, ForwardDiff.Chunk{1}, ForwardDiff.Tag{Turing.TuringTag, Float64}, ForwardDiff.GradientConfig{ForwardDiff.Tag{Turing.TuringTag, Float64}, 
Float64, 1, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}}}}, Turing.Inference.var"#∂logπ∂θ#36"{LogDensityProblemsADForwardDiffExt.ForwardDiffLogDensity{LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:D_z,), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, TaskLocalRNG}}, ForwardDiff.Chunk{1}, ForwardDiff.Tag{Turing.TuringTag, Float64}, ForwardDiff.GradientConfig{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 
1}}}}}})
    @ AdvancedHMC C:\Users\bozdoc\.julia\packages\AdvancedHMC\LJv94\src\hamiltonian.jl:159
 [39] initialstep(rng::TaskLocalRNG, model::DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, vi::DynamicPPL.TypedVarInfo{NamedTuple{(:D_z,), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:D_z, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}; init_params::Nothing, nadapts::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Turing.Inference C:\Users\bozdoc\.julia\packages\Turing\UCuzt\src\mcmc\hmc.jl:164
 [40] step(rng::TaskLocalRNG, model::DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}; resume_from::Nothing, init_params::Nothing, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
    @ DynamicPPL C:\Users\bozdoc\.julia\packages\DynamicPPL\oX6N7\src\sampler.jl:111
 [41] step
    @ C:\Users\bozdoc\.julia\packages\DynamicPPL\oX6N7\src\sampler.jl:84 [inlined]
 [42] macro expansion
    @ C:\Users\bozdoc\.julia\packages\AbstractMCMC\fWWW0\src\sample.jl:125 [inlined]
 [43] macro expansion
    @ C:\Users\bozdoc\.julia\packages\ProgressLogging\6KXlp\src\ProgressLogging.jl:328 [inlined]    
 [44] macro expansion
    @ C:\Users\bozdoc\.julia\packages\AbstractMCMC\fWWW0\src\logging.jl:9 [inlined]
 [45] mcmcsample(rng::TaskLocalRNG, model::DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, discard_initial::Int64, thinning::Int64, chain_type::Type, kwargs::Base.Pairs{Symbol, Union{Nothing, Int64}, Tuple{Symbol, Symbol}, NamedTuple{(:nadapts, :init_params), Tuple{Int64, Nothing}}})
    @ AbstractMCMC C:\Users\bozdoc\.julia\packages\AbstractMCMC\fWWW0\src\sample.jl:116
 [46] mcmcsample
    @ C:\Users\bozdoc\.julia\packages\AbstractMCMC\fWWW0\src\sample.jl:95 [inlined]
 [47] #sample#34
    @ C:\Users\bozdoc\.julia\packages\Turing\UCuzt\src\mcmc\hmc.jl:121 [inlined]
 [48] sample
    @ C:\Users\bozdoc\.julia\packages\Turing\UCuzt\src\mcmc\hmc.jl:91 [inlined]
 [49] (::AbstractMCMC.var"#sample_chain#78"{String, Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol}, NamedTuple{(:chain_type, :progress), Tuple{UnionAll, Bool}}}, TaskLocalRNG, DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, Int64, Int64})(i::Int64, seed::UInt64, init_params::Nothing)
    @ AbstractMCMC C:\Users\bozdoc\.julia\packages\AbstractMCMC\fWWW0\src\sample.jl:511
 [50] sample_chain
    @ C:\Users\bozdoc\.julia\packages\AbstractMCMC\fWWW0\src\sample.jl:508 [inlined]
 [51] #4
    @ .\generator.jl:36 [inlined]
 [52] iterate
    @ .\generator.jl:47 [inlined]
 [53] collect(itr::Base.Generator{Base.Iterators.Zip{Tuple{UnitRange{Int64}, Vector{UInt64}}}, Base.var"#4#5"{AbstractMCMC.var"#sample_chain#78"{String, Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol}, 
NamedTuple{(:chain_type, :progress), Tuple{UnionAll, Bool}}}, TaskLocalRNG, DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, Int64, 
Int64}}})
    @ Base .\array.jl:782
 [54] map
    @ .\abstractarray.jl:3385 [inlined]
 [55] mcmcsample(rng::TaskLocalRNG, model::DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, ::MCMCSerial, N::Int64, nchains::Int64; progressname::String, init_params::Nothing, kwargs::Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol}, NamedTuple{(:chain_type, :progress), Tuple{UnionAll, Bool}}})
    @ AbstractMCMC C:\Users\bozdoc\.julia\packages\AbstractMCMC\fWWW0\src\sample.jl:523
 [56] sample(rng::TaskLocalRNG, model::DynamicPPL.Model{typeof(inverse_model), (:data,), (), (), Tuple{Matrix{Float64}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, ensemble::MCMCSerial, N::Int64, n_chains::Int64; chain_type::Type, progress::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Turing.Inference C:\Users\bozdoc\.julia\packages\Turing\UCuzt\src\mcmc\Inference.jl:265
 [57] sample
    @ C:\Users\bozdoc\.julia\packages\Turing\UCuzt\src\mcmc\Inference.jl:254 [inlined]
 [58] #sample#6
    @ C:\Users\bozdoc\.julia\packages\Turing\UCuzt\src\mcmc\Inference.jl:250 [inlined]
 [59] sample
    @ C:\Users\bozdoc\.julia\packages\Turing\UCuzt\src\mcmc\Inference.jl:241 [inlined]
 [60] #sample#5
    @ C:\Users\bozdoc\.julia\packages\Turing\UCuzt\src\mcmc\Inference.jl:237 [inlined]

Environment (please complete the following information):

  • Output of `using Pkg; Pkg.status()`
julia> Pkg.status()
Status `C:\Users\bozdoc\Desktop\CanRepo\MyJuliaProject\Project.toml`
  [70b36510] AutomaticDocstrings v1.0.5
  [336ed68f] CSV v0.10.11
  [052768ef] CUDA v5.1.1
  [c3611d14] ColorVectorSpace v0.10.0
⌃ [b0b7db55] ComponentArrays v0.15.6
  [a93c6f00] DataFrames v1.6.1
  [31a5f54b] Debugger v0.7.8
⌃ [aae7a2af] DiffEqFlux v3.2.0
  [071ae1c0] DiffEqGPU v3.3.0
⌃ [0c46a032] DifferentialEquations v7.10.0
  [31c24e10] Distributions v0.25.104
  [ffbed154] DocStringExtensions v0.9.3
⌅ [28b8d3ca] GR v0.72.10
  [c27321d9] Glob v1.3.1
  [5903a43b] Infiltrator v1.6.4
  [2fda8390] LsqFit v0.15.0
  [b2108857] Lux v0.5.13
  [d0bbae9a] LuxCUDA v0.3.1
  [94925ecb] MethodOfLines v0.10.4
⌃ [961ee093] ModelingToolkit v8.70.0
  [3bd65402] Optimisers v0.3.1
⌃ [7f7a1694] Optimization v3.19.3
  [36348300] OptimizationOptimJL v0.1.14
  [42dfb2eb] OptimizationOptimisers v0.1.6
  [a03496cd] PlotlyBase v0.8.19
  [f0f68f2c] PlotlyJS v0.18.11
  [91a5bcdd] Plots v1.39.0
  [aea7be01] PrecompileTools v1.2.0
  [92933f4c] ProgressMeter v1.9.0
  [efcf1570] Setfield v1.1.1
  [6303bc30] Signals v1.2.0
⌅ [2913bbd2] StatsBase v0.33.21
  [f3b207a7] StatsPlots v0.15.6
⌃ [c3572dad] Sundials v4.20.1
⌃ [fce5fe82] Turing v0.29.3
  [ea0860ee] TuringCallbacks v0.4.0
⌃ [b8865327] UnicodePlots v3.6.0
  [e88e6eb3] Zygote v0.6.68
  [fa267f1f] TOML v1.0.3
  • Output of `using Pkg; Pkg.status(; mode = PKGMODE_MANIFEST)`
julia> Pkg.status(; mode = PKGMODE_MANIFEST)
Status `C:\Users\bozdoc\Desktop\CanRepo\MyJuliaProject\Manifest.toml`
⌃ [47edcb42] ADTypes v0.2.5
  [c3fe647b] AbstractAlgebra v0.34.7
  [621f4979] AbstractFFTs v1.5.0
⌅ [80f14c24] AbstractMCMC v4.4.2
⌅ [7a57a42e] AbstractPPL v0.6.2
  [1520ce14] AbstractTrees v0.4.4
⌅ [79e6a3ab] Adapt v3.7.2
⌅ [0bf59076] AdvancedHMC v0.4.4
⌅ [5b7e9947] AdvancedMH v0.7.5
⌅ [576499cb] AdvancedPS v0.4.3
  [b5ca4192] AdvancedVI v0.2.4
  [dce04be8] ArgCheck v2.3.0
  [ec485272] ArnoldiMethod v0.2.0
  [7d9fca2a] Arpack v0.5.4
  [4fba245c] ArrayInterface v7.7.0
  [4c555306] ArrayLayouts v1.4.5
  [bf4720bc] AssetRegistry v0.1.0
  [a9b6321e] Atomix v0.1.0
  [70b36510] AutomaticDocstrings v1.0.5
⌃ [13072b0f] AxisAlgorithms v1.0.1
  [39de3d68] AxisArrays v0.4.7
  [ab4f0b2a] BFloat16s v0.4.2
⌅ [aae01518] BandedMatrices v0.17.38
  [198e06fe] BangBang v0.3.39
  [9718e550] Baselet v0.1.1
  [e2ed5e7c] Bijections v0.1.6
  [76274a88] Bijectors v0.13.8
  [d1d4a3ce] BitFlags v0.1.8
  [62783981] BitTwiddlingConvenienceFunctions v0.1.5
  [ad839575] Blink v0.12.8
⌅ [764a87c0] BoundaryValueDiffEq v4.0.1
  [e1450e63] BufferedStreams v1.2.1
⌅ [fa961155] CEnum v0.4.2
  [2a0fbf3d] CPUSummary v0.2.4
  [00ebfdb7] CSTParser v3.3.6
  [336ed68f] CSV v0.10.11
  [052768ef] CUDA v5.1.1
  [1af6417a] CUDA_Runtime_Discovery v0.2.2
  [49dc2e85] Calculus v0.5.1
  [7057c7e9] Cassette v0.3.12
  [082447d4] ChainRules v1.58.1
  [d360d2e6] ChainRulesCore v1.19.0
  [9e997f8a] ChangesOfVariables v0.1.8
  [fb6a15b2] CloseOpenIntervals v0.1.12
  [aaaa29a8] Clustering v0.15.6
  [da1fd8a2] CodeTracking v1.3.5
  [944b1d66] CodecZlib v0.7.3
  [35d6a980] ColorSchemes v3.24.0
  [3da002f7] ColorTypes v0.11.4
  [c3611d14] ColorVectorSpace v0.10.0
  [5ae59095] Colors v0.12.10
  [861a8166] Combinatorics v1.0.2
  [a80b9123] CommonMark v0.8.12
  [38540f10] CommonSolve v0.2.4
  [bbf7d656] CommonSubexpressions v0.3.0
  [34da2185] Compat v4.10.1
⌃ [b0b7db55] ComponentArrays v0.15.6
  [b152e2b5] CompositeTypes v0.1.3
  [a33af91c] CompositionsBase v0.1.2
  [2569d6c7] ConcreteStructs v0.2.3
  [f0e56b4a] ConcurrentUtilities v2.3.0
  [88cd18e8] ConsoleProgressMonitor v0.1.2
  [187b0558] ConstructionBase v1.5.4
  [d38c429a] Contour v0.6.2
  [adafc99b] CpuId v0.3.1
  [a8cc5b0e] Crayons v4.1.1
  [9a962f9c] DataAPI v1.15.0
  [a93c6f00] DataFrames v1.6.1
  [864edb3b] DataStructures v0.18.15
  [e2d170a0] DataValueInterfaces v1.0.0
  [31a5f54b] Debugger v0.7.8
  [244e2a9f] DefineSingletons v0.1.2
⌃ [bcd4f6db] DelayDiffEq v5.44.0
  [8bb1440f] DelimitedFiles v1.9.1
  [b429d917] DensityInterface v0.4.0
⌃ [2b5f629d] DiffEqBase v6.130.0
⌃ [459566f4] DiffEqCallbacks v2.35.0
⌃ [aae7a2af] DiffEqFlux v3.2.0
  [071ae1c0] DiffEqGPU v3.3.0
  [77a26b50] DiffEqNoiseProcess v5.20.0
  [163ba53b] DiffResults v1.1.0
  [b552c78f] DiffRules v1.15.1
⌃ [0c46a032] DifferentialEquations v7.10.0
  [b4f34e82] Distances v0.10.11
  [31c24e10] Distributions v0.25.104
  [ced4e74d] DistributionsAD v0.6.53
  [ffbed154] DocStringExtensions v0.9.3
⌅ [5b8099bc] DomainSets v0.6.7
  [fa6b7ba4] DualNumbers v0.6.8
⌅ [366bfd00] DynamicPPL v0.23.21
  [7c1d4256] DynamicPolynomials v0.5.3
  [da5c29d0] EllipsisNotation v1.8.0
⌅ [cad2338a] EllipticalSliceSampling v1.1.0
  [4e289a0a] EnumX v1.0.4
  [7da242da] Enzyme v0.11.11
  [f151be2c] EnzymeCore v0.6.4
  [460bff9d] ExceptionUnwrapping v0.1.9
  [d4d017d3] ExponentialUtilities v1.25.0
  [e2ba6199] ExprTools v0.1.10
  [c87230d0] FFMPEG v0.4.1
  [7a1cc6ca] FFTW v1.7.2
  [7034ab61] FastBroadcast v0.2.8
  [9aa1b823] FastClosures v0.3.2
  [29a986be] FastLapackInterface v2.0.0
  [5789e2e9] FileIO v1.16.1
  [48062228] FilePathsBase v0.9.21
  [1a297f60] FillArrays v1.9.3
⌃ [6a86dc24] FiniteDiff v2.21.1
  [53c48c17] FixedPointNumbers v0.8.4
  [59287772] Formatting v0.4.2
  [f6369f11] ForwardDiff v0.10.36
  [f62d2435] FunctionProperties v0.1.0
  [069b7b12] FunctionWrappers v1.1.3
  [77dc65aa] FunctionWrappersWrappers v0.1.3
  [de31a74c] FunctionalCollections v0.5.0
  [d9f16b24] Functors v0.4.5
⌅ [0c68f7d7] GPUArrays v9.1.0
⌅ [46192b85] GPUArraysCore v0.1.5
  [61eb1bfa] GPUCompiler v0.25.0
⌅ [28b8d3ca] GR v0.72.10
  [c145ed77] GenericSchur v0.5.3
  [c27321d9] Glob v1.3.1
  [a2bd30eb] Graphics v1.1.2
  [86223c79] Graphs v1.9.0
  [42e2da0e] Grisu v1.0.2
  [0b43b601] Groebner v0.5.1
⌅ [d5909c97] GroupsCore v0.4.2
  [cd3eb016] HTTP v1.10.1
  [9fb69e20] Hiccup v0.2.2
  [eafb193a] Highlights v0.5.2
  [3e5b6fbb] HostCPUFeatures v0.1.16
  [34004b35] HypergeometricFunctions v0.3.23
  [7869d1d1] IRTools v0.4.11
  [615f187c] IfElse v0.1.1
⌅ [a09fc81d] ImageCore v0.8.22
  [5903a43b] Infiltrator v1.6.4
  [d25df0c9] Inflate v0.1.4
  [22cec73e] InitialValues v0.3.1
  [842dd82b] InlineStrings v1.4.0
  [505f98c9] InplaceOps v0.3.0
  [18e54dd8] IntegerMathUtils v0.1.2
⌅ [a98d9a8b] Interpolations v0.14.7
  [8197267c] IntervalSets v0.7.8
  [3587e190] InverseFunctions v0.1.12
  [41ab1584] InvertedIndices v1.3.0
  [92d709cd] IrrationalConstants v0.2.2
  [c8e1da08] IterTools v1.9.0
  [82899510] IteratorInterfaceExtensions v1.0.0
  [1019f520] JLFzf v0.1.7
  [692b3bcd] JLLWrappers v1.5.0
  [97c1335a] JSExpr v0.5.4
  [682c06a0] JSON v0.21.4
  [98e50ef6] JuliaFormatter v1.0.45
  [aa1ae85d] JuliaInterpreter v0.9.27
  [ccbc3e58] JumpProcesses v9.10.1
  [ef3ab10e] KLU v0.4.1
  [63c18a36] KernelAbstractions v0.9.15
  [5ab0869b] KernelDensity v0.6.8
  [ba0b0d4f] Krylov v0.9.5
  [929cbde3] LLVM v6.4.1
  [8b046642] LLVMLoopInfo v1.0.0
  [8ac3fa9e] LRUCache v1.6.0
  [b964fa9f] LaTeXStrings v1.3.1
  [2ee39098] LabelledArrays v1.15.0
  [984bce1d] LambertW v0.4.6
  [23fbe1c1] Latexify v0.16.1
  [10f19ff3] LayoutPointers v0.1.15
  [50d2b5c4] Lazy v0.15.1
  [1d6d02ad] LeftChildRightSiblingTrees v0.2.0
  [2d8b4e74] LevyArea v1.0.0
  [6f1fad26] Libtask v0.8.6
  [d3d80556] LineSearches v7.2.0
⌃ [7ed4a6bd] LinearSolve v2.10.0
  [6fdf6af0] LogDensityProblems v2.1.1
  [996a588d] LogDensityProblemsAD v1.7.0
  [2ab3a3ac] LogExpFunctions v0.3.26
  [e6f89c97] LoggingExtras v1.0.3
  [bdcacae8] LoopVectorization v0.12.166
  [2fda8390] LsqFit v0.15.0
  [b2108857] Lux v0.5.13
  [d0bbae9a] LuxCUDA v0.3.1
  [bb33d45b] LuxCore v0.1.6
  [34f89e08] LuxDeviceUtils v0.1.11
  [82251201] LuxLib v0.3.8
  [c7f686f2] MCMCChains v6.0.4
  [be115224] MCMCDiagnosticTools v0.3.8
  [e80e1ace] MLJModelInterface v1.9.4
  [d8e11817] MLStyle v0.4.17
  [1914dd2f] MacroTools v0.5.12
  [d125e4d3] ManualMemory v0.1.8
  [dbb5928d] MappedArrays v0.4.2
  [299715c1] MarchingCubes v0.1.9
  [739be429] MbedTLS v1.1.9
  [442fdcdd] Measures v0.3.2
  [94925ecb] MethodOfLines v0.10.4
  [128add7d] MicroCollections v0.1.4
  [e1d29d7a] Missings v1.1.0
⌃ [961ee093] ModelingToolkit v8.70.0
  [e94cdb99] MosaicViews v0.3.4
  [46d2c3a1] MuladdMacro v0.2.4
  [102ac46a] MultivariatePolynomials v0.5.3
  [6f286f6a] MultivariateStats v0.10.2
  [ffc61752] Mustache v1.0.19
  [d8a4904e] MutableArithmetics v1.4.0
  [a975b10e] Mux v1.0.1
  [d41bc354] NLSolversBase v7.8.3
  [2774e3e8] NLsolve v4.5.1
  [872c559c] NNlib v0.9.9
  [5da4648a] NVTX v0.3.3
  [77ba4419] NaNMath v1.0.2
  [86f7a689] NamedArrays v0.10.0
  [c020b1a1] NaturalSort v1.0.0
  [b8a86587] NearestNeighbors v0.4.16
⌅ [8913a72c] NonlinearSolve v1.10.1
  [d8793406] ObjectFile v0.4.1
  [510215fc] Observables v0.5.5
  [6fe1bfb0] OffsetArrays v1.13.0
  [a15396b6] OnlineStats v1.6.3
  [925886fa] OnlineStatsBase v1.6.1
  [4d8831e6] OpenSSL v1.4.1
  [429524aa] Optim v1.7.8
  [3bd65402] Optimisers v0.3.1
⌃ [7f7a1694] Optimization v3.19.3
  [36348300] OptimizationOptimJL v0.1.14
  [42dfb2eb] OptimizationOptimisers v0.1.6
  [bac558e1] OrderedCollections v1.6.3
⌃ [1dea7af3] OrdinaryDiffEq v6.58.2
  [a7812802] PDEBase v0.1.8
  [90014a1f] PDMats v0.11.31
  [65ce6f38] PackageExtensionCompat v1.0.2
  [5432bcbf] PaddedViews v0.5.12
  [d96e819e] Parameters v0.12.3
  [69de0a69] Parsers v2.8.1
  [570af359] PartialFunctions v1.2.0
  [fa939f87] Pidfile v1.3.0
  [b98c9c47] Pipe v1.3.0
  [ccf2f8ad] PlotThemes v3.1.0
⌃ [995b91a9] PlotUtils v1.3.5
  [a03496cd] PlotlyBase v0.8.19
  [f0f68f2c] PlotlyJS v0.18.11
  [91a5bcdd] Plots v1.39.0
  [e409e4f3] PoissonRandom v0.4.4
  [f517fe37] Polyester v0.7.9
  [1d0040c9] PolyesterWeave v0.2.1
  [2dfb63ee] PooledArrays v1.4.3
  [85a6dd25] PositiveFactorizations v0.2.4
  [d236fae5] PreallocationTools v0.4.13
  [aea7be01] PrecompileTools v1.2.0
  [21216c6a] Preferences v1.4.1
  [08abe8d2] PrettyTables v2.3.1
  [27ebfcd6] Primes v0.5.5
  [33c8b6b6] ProgressLogging v0.1.4
  [92933f4c] ProgressMeter v1.9.0
  [3349acd9] ProtoBuf v1.0.14
  [1fd47b50] QuadGK v2.9.1
  [74087812] Random123 v1.6.2
  [fb686558] RandomExtensions v0.4.4
  [e6cf234a] RandomNumbers v1.5.3
  [b3c3ace0] RangeArrays v0.3.2
  [c84ed2f1] Ratios v0.4.5
  [c1ae055f] RealDot v0.1.0
  [3cdcf5f2] RecipesBase v1.3.4
  [01d81517] RecipesPipeline v0.6.12
⌅ [731186ca] RecursiveArrayTools v2.38.10
  [f2c3362d] RecursiveFactorization v0.2.21
  [189a3867] Reexport v1.2.2
  [05181044] RelocatableFolders v1.0.1
  [ae029012] Requires v1.3.0
  [ae5879a3] ResettableStacks v1.1.1
  [37e2e3b7] ReverseDiff v1.15.1
  [79098fc4] Rmath v0.7.1
  [f2b01f46] Roots v2.0.22
  [7e49a35a] RuntimeGeneratedFunctions v0.5.12
  [fdea26ae] SIMD v3.4.6
  [94e857df] SIMDTypes v0.1.0
  [476501e8] SLEEFPirates v0.6.42
⌅ [0bca4576] SciMLBase v1.98.1
  [e9a6253c] SciMLNLSolve v0.1.9
  [c0aeaf25] SciMLOperators v0.3.7
  [1ed8b502] SciMLSensitivity v7.51.0
  [30f210dd] ScientificTypesBase v3.0.0
  [6c6a2e73] Scratch v1.2.1
  [91c51154] SentinelArrays v1.4.1
  [efcf1570] Setfield v1.1.1
  [992d4aef] Showoff v1.0.3
  [6303bc30] Signals v1.2.0
  [777ac1f9] SimpleBufferStream v1.1.0
  [05bca326] SimpleDiffEq v1.11.0
⌅ [727e6d20] SimpleNonlinearSolve v0.1.23
  [699a6c99] SimpleTraits v0.9.4
  [ce78b400] SimpleUnPack v1.1.0
⌃ [a2af1166] SortingAlgorithms v1.2.0
  [47a9eef4] SparseDiffTools v2.15.0
  [dc90abb0] SparseInverseSubset v0.1.2
  [e56a9233] Sparspak v0.3.9
  [276daf66] SpecialFunctions v2.3.1
  [171d559e] SplittablesBase v0.1.15
  [cae243ae] StackViews v0.1.1
  [aedffcd0] Static v0.8.8
  [0d7ed370] StaticArrayInterface v1.5.0
  [90137ffa] StaticArrays v1.8.1
  [1e83bf80] StaticArraysCore v1.4.2
  [64bff920] StatisticalTraits v3.2.0
  [82ae8749] StatsAPI v1.7.0
⌅ [2913bbd2] StatsBase v0.33.21
  [4c63d2b9] StatsFuns v1.3.0
  [f3b207a7] StatsPlots v0.15.6
⌅ [9672c7b4] SteadyStateDiffEq v1.16.1
⌃ [789caeaf] StochasticDiffEq v6.62.0
  [7792a7ef] StrideArraysCore v0.5.2
  [892a3eda] StringManipulation v0.3.4
  [09ab397b] StructArrays v0.6.16
  [53d494c1] StructIO v0.3.0
⌃ [c3572dad] Sundials v4.20.1
⌅ [2efcf032] SymbolicIndexingInterface v0.2.2
⌃ [d1185830] SymbolicUtils v1.4.0
⌃ [0c5d862f] Symbolics v5.11.0
  [ab02a1b2] TableOperations v1.2.0
  [3783bdb8] TableTraits v1.0.1
  [bd369af6] Tables v1.11.1
  [899adc3e] TensorBoardLogger v0.1.23
  [62fd8b95] TensorCore v0.1.1
  [8ea1fca8] TermInterface v0.3.3
  [5d786b92] TerminalLoggers v0.1.7
  [8290d209] ThreadingUtilities v0.5.2
  [a759f4b9] TimerOutputs v0.5.23
  [0796e94c] Tokenize v0.5.26
  [9f7883ad] Tracker v0.2.30
⌅ [3bb67fe8] TranscodingStreams v0.9.13
  [28d57a85] Transducers v0.4.79
  [d5829a12] TriangularSolve v0.1.20
  [410a4b4d] Tricks v0.1.8
  [781d530d] TruncatedStacktraces v1.4.0
⌃ [fce5fe82] Turing v0.29.3
  [ea0860ee] TuringCallbacks v0.4.0
  [5c2747f8] URIs v1.5.1
  [3a884ed6] UnPack v1.0.2
  [1cfade01] UnicodeFun v0.4.1
⌃ [b8865327] UnicodePlots v3.6.0
  [1986cc42] Unitful v1.19.0
  [45397f5d] UnitfulLatexify v1.6.3
  [a7c27f48] Unityper v0.1.6
  [013be700] UnsafeAtomics v0.2.1
  [d80eeb9a] UnsafeAtomicsLLVM v0.1.3
  [41fe7b60] Unzip v0.2.0
  [3d5dd08c] VectorizationBase v0.21.65
  [19fa3120] VertexSafeGraphs v0.2.0
  [ea10d353] WeakRefStrings v1.4.2
  [0f1e0344] WebIO v0.8.21
  [104b5d7c] WebSockets v1.6.0
  [d49dbf32] WeightInitializers v0.1.3
  [cc8bc4a8] Widgets v0.6.6
⌅ [efce3f68] WoodburyMatrices v0.5.6
  [76eceee3] WorkerUtilities v1.6.1
  [e88e6eb3] Zygote v0.6.68
  [700de1a5] ZygoteRules v0.2.4
  [02a925ec] cuDNN v1.2.1
⌅ [68821587] Arpack_jll v3.5.1+1
  [6e34b625] Bzip2_jll v1.0.8+0
  [4ee394cb] CUDA_Driver_jll v0.7.0+0
⌅ [76a88914] CUDA_Runtime_jll v0.10.1+0
  [62b44479] CUDNN_jll v8.9.4+0
  [83423d85] Cairo_jll v1.16.1+1
⌅ [7cc45869] Enzyme_jll v0.0.96+0
  [2702e6a9] EpollShim_jll v0.0.20230411+0
  [2e619515] Expat_jll v2.5.0+0
  [b22a6f82] FFMPEG_jll v4.4.4+1
  [f5851436] FFTW_jll v3.3.10+0
  [a3f928ae] Fontconfig_jll v2.13.93+0
  [d7e528f0] FreeType2_jll v2.13.1+0
  [559328eb] FriBidi_jll v1.0.10+0
  [0656b61e] GLFW_jll v3.3.9+0
⌅ [d2c73de3] GR_jll v0.72.10+0
  [78b55507] Gettext_jll v0.21.0+0
  [7746bdde] Glib_jll v2.76.5+0
  [3b182d85] Graphite2_jll v1.3.14+0
  [2e76f6c2] HarfBuzz_jll v2.8.1+1
  [1d5cc7b8] IntelOpenMP_jll v2024.0.2+0
  [aacddb02] JpegTurbo_jll v3.0.1+0
  [9c1d0b0a] JuliaNVTXCallbacks_jll v0.2.1+0
  [f7e6163d] Kaleido_jll v0.2.1+0
  [c1c5ebd0] LAME_jll v3.100.1+0
  [88015f11] LERC_jll v3.0.0+1
  [dad2f222] LLVMExtra_jll v0.0.27+1
  [1d63c593] LLVMOpenMP_jll v15.0.7+0
  [dd4b983a] LZO_jll v2.10.1+0
⌅ [e9f186c6] Libffi_jll v3.2.2+1
  [d4300ac3] Libgcrypt_jll v1.8.7+0
  [7e76a0d4] Libglvnd_jll v1.6.0+0
  [7add5ba3] Libgpg_error_jll v1.42.0+0
  [94ce4f54] Libiconv_jll v1.17.0+0
  [4b2f31a3] Libmount_jll v2.35.0+0
⌅ [89763e89] Libtiff_jll v4.5.1+1
  [38a345b3] Libuuid_jll v2.36.0+0
  [856f044c] MKL_jll v2024.0.0+0
  [e98f9f5b] NVTX_jll v3.1.0+2
  [e7412a2a] Ogg_jll v1.3.5+1
  [458c3c95] OpenSSL_jll v3.0.12+0
  [efe28fd5] OpenSpecFun_jll v0.5.5+0
  [91d4177d] Opus_jll v1.3.2+0
  [30392449] Pixman_jll v0.42.2+0
  [c0090381] Qt6Base_jll v6.5.3+1
  [f50d1b31] Rmath_jll v0.4.0+0
⌅ [fb77eaff] Sundials_jll v5.2.1+0
  [a44049a8] Vulkan_Loader_jll v1.3.243+0
  [a2964d1f] Wayland_jll v1.21.0+1
  [2381bf8a] Wayland_protocols_jll v1.25.0+0
  [02c8fc9c] XML2_jll v2.12.2+0
  [aed1982a] XSLT_jll v1.1.34+0
  [ffd25f8a] XZ_jll v5.4.5+0
  [f67eecfb] Xorg_libICE_jll v1.0.10+1
  [c834827a] Xorg_libSM_jll v1.2.3+0
  [4f6342f7] Xorg_libX11_jll v1.8.6+0
  [0c0b7dd1] Xorg_libXau_jll v1.0.11+0
  [935fb764] Xorg_libXcursor_jll v1.2.0+4
  [a3789734] Xorg_libXdmcp_jll v1.1.4+0
  [1082639a] Xorg_libXext_jll v1.3.4+4
  [d091e8ba] Xorg_libXfixes_jll v5.0.3+4
  [a51aa0fd] Xorg_libXi_jll v1.7.10+4
  [d1454406] Xorg_libXinerama_jll v1.1.4+4
  [ec84b674] Xorg_libXrandr_jll v1.5.2+4
  [ea2f1a96] Xorg_libXrender_jll v0.9.10+4
  [14d82f49] Xorg_libpthread_stubs_jll v0.1.1+0
  [c7cfdc94] Xorg_libxcb_jll v1.15.0+0
  [cc61e674] Xorg_libxkbfile_jll v1.1.2+0
  [e920d4aa] Xorg_xcb_util_cursor_jll v0.1.4+0
  [12413925] Xorg_xcb_util_image_jll v0.4.0+1
  [2def613f] Xorg_xcb_util_jll v0.4.0+1
  [975044d2] Xorg_xcb_util_keysyms_jll v0.4.0+1
  [0d47668e] Xorg_xcb_util_renderutil_jll v0.3.9+1
  [c22f9ab0] Xorg_xcb_util_wm_jll v0.4.1+1
  [35661453] Xorg_xkbcomp_jll v1.4.6+0
  [33bec58e] Xorg_xkeyboard_config_jll v2.39.0+0
  [c5fb5394] Xorg_xtrans_jll v1.5.0+0
  [3161d3a3] Zstd_jll v1.5.5+0
  [35ca27e7] eudev_jll v3.2.9+0
  [214eeab7] fzf_jll v0.43.0+0
  [1a1c6b14] gperf_jll v3.1.1+0
  [a4ae2306] libaom_jll v3.4.0+0
  [0ac62f75] libass_jll v0.15.1+0
  [2db6ffa8] libevdev_jll v1.11.0+0
  [f638f0a6] libfdk_aac_jll v2.0.2+0
  [36db933b] libinput_jll v1.18.0+0
  [b53b4c65] libpng_jll v1.6.40+0
  [f27f6e37] libvorbis_jll v1.3.7+1
  [009596ad] mtdev_jll v1.1.6+0
  [1270edf5] x264_jll v2021.5.5+0
  [dfaa095f] x265_jll v3.5.0+0
  [d8fb68d0] xkbcommon_jll v1.4.1+1
  [0dad84c5] ArgTools v1.1.1
  [56f22d72] Artifacts
  [2a0f44e3] Base64
  [8bf52ea8] CRC32c
  [ade2ca70] Dates
  [8ba89e20] Distributed
  [f43a241f] Downloads v1.6.0
  [7b1f6079] FileWatching
  [9fa8497b] Future
  [b77e0a4c] InteractiveUtils
  [4af54fe1] LazyArtifacts
  [b27032c2] LibCURL v0.6.3
  [76f85450] LibGit2
  [8f399da3] Libdl
  [37e2e46d] LinearAlgebra
  [56ddb016] Logging
  [d6f4376e] Markdown
  [a63ad114] Mmap
  [ca575930] NetworkOptions v1.2.0
  [44cfe95a] Pkg v1.9.2
  [de0858da] Printf
  [3fa0cd96] REPL
  [9a3f8284] Random
  [ea8e919c] SHA v0.7.0
  [9e88b42a] Serialization
  [1a1011a3] SharedArrays
  [6462fe0b] Sockets
  [2f01184e] SparseArrays
  [10745b16] Statistics v1.9.0
  [4607b0f0] SuiteSparse
  [fa267f1f] TOML v1.0.3
  [a4e569a6] Tar v1.10.0
  [8dfed614] Test
  [cf7118a7] UUIDs
  [4ec0a83e] Unicode
  [e66e0078] CompilerSupportLibraries_jll v1.0.5+0
  [deac9b47] LibCURL_jll v7.84.0+0
  [29816b5a] LibSSH2_jll v1.10.2+0
  [c8ffd9c3] MbedTLS_jll v2.28.2+0
  [14a3606d] MozillaCACerts_jll v2022.10.11
  [4536629a] OpenBLAS_jll v0.3.21+4
  [05823500] OpenLibm_jll v0.8.1+0
  [efcefdf7] PCRE2_jll v10.42.0+0
  [bea87d4a] SuiteSparse_jll v5.10.1+6
  [83775a58] Zlib_jll v1.2.13+0
  [8e850b90] libblastrampoline_jll v5.8.0+0
  [8e850ede] nghttp2_jll v1.48.0+0
  [3f19e933] p7zip_jll v17.4.0+0
Info Packages marked with ⌃ and ⌅ have new versions available, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`
  • Output of `versioninfo()`
Julia Version 1.9.3
Commit bed2cd540a (2023-08-24 14:43 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: 8 × Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-14.0.6 (ORCJIT, skylake)
  Threads: 8 on 8 virtual cores
Environment:
  JULIA_EDITOR = code
  JULIA_NUM_THREADS = 8

Additional context

In a discussion with @ChrisRackauckas, it was suggested that the issue is related to auto-promotion not working as expected in the NeuralODE context with Turing.jl. The workaround `eltype(D_z).(nn_param.u)` was suggested and resolved the error, which implies that auto-promotion should have handled this case without manual intervention, as it does for a plain `ODEProblem`.
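For reference, a minimal sketch of where the workaround fits inside the Turing model. The model body, prior values, and the NeuralODE call are reconstructed assumptions for illustration, not the original code; only the `eltype(D_z).(nn_param.u)` promotion line comes from the discussion above:

```julia
@model function inverse_model(data)
    # Prior on the diffusion coefficient (hypothetical prior values).
    D_z ~ Normal(1.934e-8, 1.0e-9)

    # Workaround: during ForwardDiff-based NUTS sampling, D_z is a
    # ForwardDiff.Dual, but nn_param.u stays Float64. Manually promoting
    # the network parameters to the sampled parameter's element type
    # avoids the MethodError, since the auto-promotion that plain
    # ODEProblem parameters receive does not happen here.
    p_nn = eltype(D_z).(nn_param.u)

    # ... solve the NeuralODE with p_nn and compare predictions to `data` ...
end
```

Without the promotion, the solver tries to write `Dual`-valued state into `Float64` storage, producing the `MethodError` in the trace above.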

@canbozdogan canbozdogan added the bug Something isn't working label Dec 29, 2023