Simulating a large number of independent jumps #75
Also, if you want to check with the closures method, here is the code:

```julia
function test_closures()
    p = (μ = 0.01, σ = 0.1, N = 300) # if all constant
    T = 10.0                          # maximum time length
    x_iv = rand(p.N)                  # just draws from the initial condition
    prob = SDEProblem(μ_SDE, σ_SDE, x_iv, (0.0, T), p)
    rate(u, p, t) = 0.2
    affect_index!(integrator, index) = (integrator.u[index] = max(integrator.u[index], integrator.u[rand(1:integrator.p.N)]))
    jumps = [ConstantRateJump(rate, AffectIndex(affect_index!, i)) for i in 1:p.N]
    jump_prob = JumpProblem(prob, Direct(), jumps...)
    @btime solve($jump_prob)
end
test_closures()
```
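The snippet assumes `μ_SDE`, `σ_SDE`, and `AffectIndex` are defined earlier in the thread. A minimal sketch of one plausible form of those definitions (these are assumptions, not the thread's actual code):

```julia
# Drift and diffusion for the SDE in in-place form (assumed: geometric-Brownian-style
# dynamics consistent with the μ and σ parameters used in the thread).
μ_SDE(du, u, p, t) = (du .= p.μ .* u)
σ_SDE(du, u, p, t) = (du .= p.σ .* u)

# A callable struct binding an index to the two-argument affect function, so each
# ConstantRateJump gets a concrete affect! callable of the usual one-argument form.
struct AffectIndex{F,T}
    affect_index!::F
    index::T
end
(a::AffectIndex)(integrator) = a.affect_index!(integrator, a.index)
```

With this, `AffectIndex(affect_index!, i)(integrator)` applies the update to component `i`, matching how the jump constructor above uses it.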
Thanks, I’ll play around with this when I’m back in Boston next week.
Thanks. We will investigate other approaches for when the jumps are all of identical frequency (as discussed before), so the main purpose of this would be to have a framework to modify for when the jumps have different rates.
But also, we just aren't sure what to consider swapping out for the `Direct` aggregator.
OK. So it sounds like the following is true: `Direct` -> `DirectFW` if we are wrapping a whole bunch of things of different types; otherwise it shouldn't matter.
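For concreteness, a minimal sketch of what that swap looks like, assuming the DifferentialEquations.jl / DiffEqJump ecosystem and stand-in drift/diffusion functions (`μ_SDE`/`σ_SDE` here are assumed forms, since the thread defines its own elsewhere):

```julia
using DifferentialEquations

# Stand-in drift and diffusion (assumed forms, not the thread's actual definitions).
μ_SDE(du, u, p, t) = (du .= p.μ .* u)
σ_SDE(du, u, p, t) = (du .= p.σ .* u)

p = (μ = 0.01, σ = 0.1, N = 300)
prob = SDEProblem(μ_SDE, σ_SDE, rand(p.N), (0.0, 10.0), p)
rate(u, p, t) = 0.2
affect_for(i) = integrator -> (integrator.u[i] = max(integrator.u[i], integrator.u[rand(1:integrator.p.N)]))
jumps = [ConstantRateJump(rate, affect_for(i)) for i in 1:p.N]

# DirectFW wraps the many distinct affect! closures in function wrappers, avoiding
# the compile-time and dispatch cost of a large heterogeneous tuple of jump types.
jump_prob = JumpProblem(prob, DirectFW(), jumps...; save_positions = (false, false))
sol = solve(jump_prob, SOSRI())
```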
`SOSRI` is a bit better.
@jlperla Just an update. I've been playing around with this, and I am seeing that even with [...]. Did you or @PooyaFa ever profile the code and see where it is actually spending its time? (i.e. is the major amount of time actually being spent in [...]?)
Thanks! I think based on what we discussed with Chris, tau-leaping seems like the only way to really jack up the number of points. However, even without the jump size issues, the current regular jump code only does pure jump processes. Alas, we need jump diffusions.
I don't think tau-leaping is needed, and it can potentially introduce other issues. One should certainly be able to simulate thousands of jump processes coupled to diffusion processes without needing tau-leaping. The problem is determining where the bottleneck in the current code base is. It could be in [...].
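One generic way to locate such a bottleneck is Julia's built-in `Profile` stdlib. A minimal sketch, with `work()` as a hypothetical stand-in for the actual `solve(jump_prob, ...)` call being benchmarked:

```julia
using Profile

# Hypothetical stand-in for the expensive call; replace with e.g.
# solve(jump_prob, SOSRI()) to profile the real simulation.
work() = sum(sqrt.(abs.(randn(10^5))))

work()                  # run once first so compilation time isn't profiled
Profile.clear()
@profile for _ in 1:200
    work()
end
Profile.print(format = :flat, sortedby = :count)  # lines with the most samples
```

The flat report shows which functions accumulate the most samples, which is how one would check whether the time is going to the jump aggregator, the SDE stepping, or saving.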
If I've understood your problem / code correctly, there are no "dependencies" between the jumps. That is, when a given jump occurs it does not change the rates of any of the other jumps (which is why the dependency graph below is empty). Could you try this code out, which uses `NRM` with an empty dependency graph?

```julia
function test_SRIW1()
    p = (μ = 0.01, σ = 0.1, N = 500) # if all constant
    T = 10.0                          # maximum time length
    x_iv = rand(p.N)                  # just draws from the initial condition
    prob = SDEProblem(μ_SDE, σ_SDE, x_iv, (0.0, T), p)
    rate(u, p, t) = 0.2
    affect_index!(integrator, index) = (integrator.u[index] = max(integrator.u[index], integrator.u[rand(1:integrator.p.N)]))
    jumps = [ConstantRateJump(rate, AffectIndex(affect_index!, i)) for i in 1:p.N]
    jump_prob = JumpProblem(prob, NRM(), JumpSet((), jumps, nothing, nothing),
                            save_positions = (false, false),
                            dep_graph = [Int[] for i in 1:p.N])
    sol = solve(jump_prob, SRIW1(), saveat = T)
    # @btime solve($jump_prob, SRIW1(), saveat = $T)
    return sol
end
```
Note that it currently only saves at the final time point, but you could add more save points using `saveat`.
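As a minimal sketch of how `saveat` controls the save grid, on a scalar SDE with stand-in drift/diffusion (not the thread's model):

```julia
using DifferentialEquations

f(u, p, t) = 0.01u   # drift (stand-in)
g(u, p, t) = 0.1u    # diffusion (stand-in)
prob = SDEProblem(f, g, 1.0, (0.0, 10.0))

# Store the solution only at t = 0, 1, ..., 10 rather than at every step,
# which keeps memory use flat no matter how small the steps are.
sol = solve(prob, SRIW1(), saveat = 0.0:1.0:10.0)
```

Passing a range (or vector) to `saveat` is what keeps the saved output small even when the integrator takes many internal steps.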
Also, did you ever actually try `DirectFW`?
I am going to close this out for now, as I think that the combination of not saving everything and using `DirectFW` (or `NRM`) closes this particular issue.
We want to simulate a large number of independent jumps in an SDE problem. Code is below. When N = 300 it's reasonable; however, for N = 1000 (and for some cases we might need N = 10000) it stalls. As discussed in SciML/DiffEqDocs.jl#258 (comment), `SSAStepper()` is of no use here, but even if someone uses it by mistake, with N = 1000 it stalls. This type of model is used for continuous-time models of an economy with heterogeneous firms.

Here is a sample code: