# Link to DifferentialEquations.jl? #12

Cool project! I think it would be great if simulation-based methods could be supported here by hooking into DifferentialEquations.jl and its DiffEqFlux fast adjoint schemes (https://diffeqflux.sciml.ai/dev/). This usually ends up being a bit faster than discretize-then-optimize techniques (when utilizing automatic differentiation with mixed adjoints), so it would be a good technique to mix in.
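For illustration, here is a minimal sketch of the proposed simulation-based fitting with `DiffEqFlux.sciml_train`, which differentiates through `solve` using adjoint methods. The two-state model, parameter values, and all names are illustrative stand-ins, not part of SBML2Julia:

```julia
# Sketch: simulation-based parameter fitting with DiffEqFlux adjoints.
# The model and data here are made-up stand-ins for illustration.
using OrdinaryDiffEq, DiffEqFlux

function rhs!(du, u, p, t)
    du[1] = -p[1]*u[1] + p[2]*u[2]
    du[2] =  p[1]*u[1] - p[2]*u[2]
end

prob = ODEProblem(rhs!, [1.0, 0.0], (0.0, 10.0), [1.0, 1.0])
data = Array(solve(remake(prob, p = [0.8, 0.6]), Tsit5(), saveat = 0.1))

# Loss by simulation; DiffEqFlux picks an adjoint method for the gradient.
loss(p) = sum(abs2, Array(solve(prob, Tsit5(), p = p, saveat = 0.1)) .- data)

result = DiffEqFlux.sciml_train(loss, [1.0, 1.0], ADAM(0.05), maxiters = 300)
result.minimizer  # should approach [0.8, 0.6]
```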
Thanks for the great suggestion, @ChrisRackauckas. It would be nice to have an option to formulate the problem as a simulation-based problem using the adjoint evaluation capability, and I think DifferentialEquations.jl would be the most convenient package for doing this 🙂 We can't promise anything at this point, but this seems like a great long-term plan. Do you have example code for hooking up DifferentialEquations.jl with NLP solvers via JuMP?
https://diffeq.sciml.ai/stable/analysis/parameter_estimation/#Using-JuMP-with-DiffEqParamEstim is an example, but it uses forward-mode AD, so it won't scale as well to huge problems. Mixing it with https://diffeq.sciml.ai/stable/analysis/sensitivity/#solve-Differentiation-Examples for the gradient definition is what you'd want.
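To make that mixing concrete, here is a hedged sketch of registering an adjoint-derived gradient with JuMP, following the pattern in the linked sensitivity docs (`Zygote.gradient` over `solve` with a `sensealg`). The model and all names are illustrative, not this project's API:

```julia
# Sketch: register an adjoint-based gradient with JuMP so the objective
# gradient comes from reverse-mode sensitivities rather than forward-mode AD.
# The two-state model below is an illustrative stand-in.
using OrdinaryDiffEq, DiffEqSensitivity, Zygote, JuMP, NLopt

function rhs!(du, u, p, t)
    du[1] = -p[1]*u[1] + p[2]*u[2]
    du[2] =  p[1]*u[1] - p[2]*u[2]
end

prob = ODEProblem(rhs!, [1.0, 0.0], (0.0, 10.0), [1.0, 1.0])
data = Array(solve(remake(prob, p = [0.8, 0.6]), Tsit5(), saveat = 0.1))

# Scalar varargs because JuMP splats arguments to registered functions.
function fit_loss(k...)
    sol = solve(prob, Tsit5(), p = collect(k), saveat = 0.1,
                sensealg = InterpolatingAdjoint())
    sum(abs2, Array(sol) .- data)
end

# Gradient via the adjoint; Zygote returns one entry per scalar argument.
grad_loss!(g, k...) = (g .= Zygote.gradient(fit_loss, k...); nothing)

m = Model(NLopt.Optimizer)
set_optimizer_attribute(m, "algorithm", :LD_MMA)
@variable(m, k1, start = 1.0)
@variable(m, k2, start = 1.0)
JuMP.register(m, :fit_loss, 2, fit_loss, grad_loss!)
@NLobjective(m, Min, fit_loss(k1, k2))
optimize!(m)
```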
Thanks a lot for your interest, @ChrisRackauckas. I think your suggestion on Twitter to use a two-step approach could make SBML2Julia much more powerful.
I am just trying to optimize a simple ODE system specified in MTK with JuMP:

```julia
using ModelingToolkit
using OrdinaryDiffEq, DiffEqParamEstim, JuMP, NLopt

t_max = 10
dt = 0.1
t_out = 0:dt:t_max
tspan = (0, t_max)

@ModelingToolkit.parameters t, k1, k2
@ModelingToolkit.variables A(t) B(t)
@ModelingToolkit.derivatives D'~t

parameters = [k1 => 0.8, k2 => 0.6]
initial_conditions = [A => 1, B => 0]

eqs = [D(A) ~ -k1*A + k2*B,
       D(B) ~ k1*A - k2*B]
sys = ODESystem(eqs)

model_ode(p_) = OrdinaryDiffEq.ODEProblem(sys, initial_conditions, tspan, p_, jac=true)
solve_model(mp_) = OrdinaryDiffEq.solve(model_ode(mp_), Tsit5(), saveat=dt)
mock_data = Array(solve_model([0.8, 0.6]))

loss_objective(mp_, dat) = build_loss_objective(model_ode(mp_), Tsit5(), L2Loss(t_out, dat))
juobj(args...) = loss_objective(args, mock_data)(args)

jumodel = Model()
JuMP.register(jumodel, :juobj, 2, juobj, autodiff=true)
@JuMP.variable(jumodel, k1, start=1)
@JuMP.variable(jumodel, k2, start=1)
@JuMP.NLobjective(jumodel, Min, juobj(k1, k2))
set_optimizer(jumodel, NLopt.Optimizer)
set_optimizer_attribute(jumodel, "algorithm", :LD_MMA)
JuMP.optimize!(jumodel)
```

Unfortunately, I get the following error:
JuMP turns parameters into tuples, so you probably don't want to use JuMP if the number of parameters gets large. That said, here's a working version of that code:

```julia
using ModelingToolkit
using OrdinaryDiffEq, DiffEqParamEstim, JuMP, NLopt

t_max = 10.0  # changed: floating-point time
dt = 0.1
t_out = 0:dt:t_max
tspan = (0, t_max)

@ModelingToolkit.parameters t, k1, k2
@ModelingToolkit.variables A(t) B(t)
@ModelingToolkit.derivatives D'~t

parameters = [k1 => 0.8, k2 => 0.6]
initial_conditions = [A => 1, B => 0]

eqs = [D(A) ~ -k1*A + k2*B,
       D(B) ~ k1*A - k2*B]
sys = ODESystem(eqs)

model_ode(p_) = OrdinaryDiffEq.ODEProblem(sys, initial_conditions, tspan, p_, jac=true)
solve_model(mp_) = OrdinaryDiffEq.solve(model_ode(mp_), Tsit5(), saveat=dt)
mock_data = Array(solve_model([0.8, 0.6]))

# changed: collect turns the tuple JuMP passes into the Vector the problem expects
loss_objective(mp_, dat) = build_loss_objective(model_ode(collect(mp_)), Tsit5(), L2Loss(t_out, dat))
juobj(args...) = loss_objective(args, mock_data)(args)

jumodel = Model()
JuMP.register(jumodel, :juobj, 2, juobj, autodiff=true)
@JuMP.variable(jumodel, k1, start=1)
@JuMP.variable(jumodel, k2, start=1)
@JuMP.NLobjective(jumodel, Min, juobj(k1, k2))
set_optimizer(jumodel, NLopt.Optimizer)
set_optimizer_attribute(jumodel, "algorithm", :LD_MMA)
JuMP.optimize!(jumodel)
```

The other change was to make time floating-point valued, since integer-valued time usually isn't a great idea. Hopefully that helps!
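For larger parameter vectors, one alternative (a sketch reusing `model_ode`, `t_out`, and `mock_data` from the snippet above; Optim.jl is an assumption, not part of this thread's code) is to skip JuMP's tuple splatting entirely, since DiffEqParamEstim objectives are callable on a plain Vector:

```julia
# Sketch: pass the DiffEqParamEstim objective directly to Optim.jl,
# which works with parameter Vectors rather than splatted tuples.
using Optim

cost = build_loss_objective(model_ode([1.0, 1.0]), Tsit5(), L2Loss(t_out, mock_data))
res = Optim.optimize(cost, [1.0, 1.0], BFGS())
Optim.minimizer(res)  # should approach [0.8, 0.6]
```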
Thanks! That helped a lot.
I cannot find a way to get multiple experimental conditions to work. What I tried, for instance, is:

```julia
mock_data_c1 = Array(solve_model([0.8, 0.6]))
mock_data_c2 = Array(solve_model([0.8, 0.5]))
loss_objective(mp_, dat) = build_loss_objective(model_ode(collect(mp_)), Tsit5(), L2Loss(t_out, dat))
juobj(args1, args2) = loss_objective(args1, mock_data_c1)(args1) + loss_objective(args2, mock_data_c2)(args2)

jumodel = Model()
JuMP.register(jumodel, :juobj, 2, juobj, autodiff=true)
@JuMP.variable(jumodel, k1_1, start=1)
@JuMP.variable(jumodel, k2_1, start=1)
@JuMP.variable(jumodel, k1_2, start=1)
@JuMP.variable(jumodel, k2_2, start=1)
@JuMP.NLobjective(jumodel, Min, juobj((k1_1, k2_1), (k1_2, k2_2)))
```

But I get the following error:
If anyone knows how to solve this, please let me know.
I would ask on #jump-dev, as it's more of a JuMP question.
Oh, I see. Thanks. I actually wanted to write an objective function that includes multiple experimental conditions and could be fed to any optimizer, not only JuMP. Is there really no way this can be done using
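One way this could work outside JuMP (a sketch reusing this thread's definitions; the layout `p = [k1_1, k2_1, k1_2, k2_2]` and the use of Optim.jl are assumptions for illustration) is a plain closure summing the per-condition DiffEqParamEstim objectives:

```julia
# Sketch: an optimizer-agnostic objective over two experimental conditions.
# Each objective remakes and solves the problem with the parameters it is
# given; total_cost concatenates the two conditions' parameter vectors
# (an assumed layout: p = [k1_1, k2_1, k1_2, k2_2]).
obj_c1 = build_loss_objective(model_ode([1.0, 1.0]), Tsit5(), L2Loss(t_out, mock_data_c1))
obj_c2 = build_loss_objective(model_ode([1.0, 1.0]), Tsit5(), L2Loss(t_out, mock_data_c2))

total_cost(p) = obj_c1(p[1:2]) + obj_c2(p[3:4])

# Any optimizer that accepts f(p::Vector) can consume this, e.g. Optim.jl:
using Optim
res = Optim.optimize(total_cost, [1.0, 1.0, 1.0, 1.0], NelderMead())
Optim.minimizer(res)
```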
A little update from my side. I did the first step of the two-step approach and wrote SbmlInterface.jl, which converts SBML models to MTK and simulates them. Thanks a lot for your help on Slack, @ChrisRackauckas. The next step is to interface with optimizers.
That is fantastic! |