
Update ExpRK to use calc_J! and new DiffEqOperators #438

Merged · 11 commits from MSeeker1340:jacobian into master on Jul 18, 2018

Conversation

MSeeker1340
Contributor

The main goal of this PR is to update the ExpRK integrators to use calc_J! instead of directly calling f.jac. This allows general sparse and/or lazy types to be used as the Jacobian, as described in #416.

A secondary goal is to apply the newest changes in DiffEqOperators to the ExpRK integrators. This goes hand in hand with the primary goal, as both relate to lazy operator support. In particular, I'd like to drop DiffEqOperators from the requirement list, which should make the separation of functionality clearer. This is made possible by the new interfaces introduced in the DiffEqOperators update, in particular convert(Number, a) and convert(AbstractArray, A).

The fields used by `jacobian!` are now only referenced in the branch where they are actually used, so they can be absent from the cache.
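A rough sketch of the pattern (the function body and field names below are illustrative pseudocode, not the actual derivative_utils.jl source):

```julia
# Only dereference the fields `jacobian!` needs inside the branch that
# actually calls it, so caches built without those fields still work.
function calc_J!(J, f, u, p, t, cache)
    if has_jac(f)
        f.jac(J, u, p, t)  # user-supplied Jacobian; no extra cache fields touched
    else
        # fallback path; cache.uf, cache.du1 and cache.jac_config
        # are only required here, so other caches may omit them
        jacobian!(J, cache.uf, u, cache.du1, cache.jac_config)
    end
end
```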
@coveralls

coveralls commented Jul 17, 2018

Coverage Status

Coverage decreased (-6.6%) to 88.156% when pulling 80b3dd3 on MSeeker1340:jacobian into 6e16436 on JuliaDiffEq:master.

@codecov

codecov bot commented Jul 18, 2018

Codecov Report

Merging #438 into master will increase coverage by <.01%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #438      +/-   ##
==========================================
+ Coverage   94.77%   94.77%   +<.01%     
==========================================
  Files          72       72              
  Lines        8214     8215       +1     
==========================================
+ Hits         7785     7786       +1     
  Misses        429      429
Impacted Files              Coverage Δ
src/exponential_utils.jl    95.93% <ø> (ø) ⬆️
src/derivative_utils.jl     93.33% <100%> (+0.22%) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6e16436...80b3dd3. Read the comment docs.

@MSeeker1340
Contributor Author

MSeeker1340 commented Jul 18, 2018

I have rewritten the test script for Krylov ExpRK to use an ODEFunction with DiffEqArrayOperator as its Jacobian, so it doubles as a test for the new Jacobian interface in ExpRK. Since the caching methods do not support general ODEFunction, I have changed the tests for classical ExpRK to also compare against Tsit5, as is already done for EPIRK. Because of the significant difference in error between the low-order and high-order methods, I have also split the original testset into two.

The construction of prob and prob_ip in the script serves as a good example of how sparse/lazy Jacobian can be used. Basically:

  1. First we write the right-hand side and the Jacobian update function;

  2. Then we construct an ODEFunction using the functions from step 1. We can either use a plain array jac_prototype (the commented-out lines; note that sparse matrix types can also be used here) or an AbstractDiffEqArrayOperator, for which f.jac is not needed as it is automatically set using update_coefficients! in the constructor. In our example using DiffEqArrayOperator is not really necessary and is only done for the sake of testing, but for more complex operators this is the recommended approach.

  3. Finally, an ODEProblem is constructed from the ODEFunction and passed to solve. Note that neither the ODEProblem constructor nor solve needs to know about the internals of the ODEFunction.

I think this should be a good interface to the end user (for regular ODEFunction at least; we still need some automation done for SplitODEFunction).
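To make the three steps concrete, here is a minimal sketch on a hypothetical 2×2 linear system; the keyword form DiffEqArrayOperator(A; update_func=...) is assumed from the DiffEqOperators update of that era:

```julia
using OrdinaryDiffEq, DiffEqOperators, LinearAlgebra

# 1. Right-hand side and Jacobian update function (hypothetical 2x2 rotation)
J_mat = [0.0 1.0; -1.0 0.0]
f = (u,p,t) -> J_mat * u

# 2. Construct the ODEFunction. Either supply jac/jac_prototype directly...
# fun = ODEFunction(f; jac=(u,p,t) -> J_mat, jac_prototype=zeros(2,2))
# ...or wrap the prototype in a DiffEqArrayOperator, whose update_func
# lets f.jac be set automatically via update_coefficients!
J_op = DiffEqArrayOperator(copy(J_mat); update_func=(J,u,p,t) -> J)
fun = ODEFunction(f; jac_prototype=J_op)

# 3. Neither ODEProblem nor solve needs the ODEFunction's internals
prob = ODEProblem(fun, [1.0, 0.0], (0.0, 1.0))
sol = solve(prob, LawsonEuler(krylov=true, m=2); dt=0.1)
```

For this purely linear problem the exponential integrator should recover the exact flow, so the solution tracks [cos(t), -sin(t)] closely.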

@MSeeker1340 MSeeker1340 changed the title WIP: Update ExpRK to use calc_J! and new DiffEqOperators Update ExpRK to use calc_J! and new DiffEqOperators Jul 18, 2018
@MSeeker1340
Contributor Author

MSeeker1340 commented Jul 18, 2018

An additional note on the test script: I have added @testset or let blocks to make sure the scripts don't accidentally pollute the global namespace. I'm still not exactly sure how scoping works for @testset, though. It seems to create its own local scope, but for some reason the original linear_nonlinear_convergence_tests.jl pollutes the global namespace even when wrapped in a @testset. In any case, the current version should be fine: I have verified using varinfo() that none of utility_tests.jl, linear_nonlinear_convergence_tests.jl, and linear_nonlinear_krylov_tests.jl introduce extra bindings.

@ChrisRackauckas
Member

Since caching methods do not support general ODEFunction

Why don't they?

Then we construct an ODEFunction using the functions from step 1. We can either use a plain array jac_prototype (the commented-out lines; note that sparse matrix types can also be used here) or an AbstractDiffEqArrayOperator, for which f.jac is not needed as it is automatically set using update_coefficients! in the constructor. In our example using DiffEqArrayOperator is not really necessary and is only done for the sake of testing, but for more complex operators this is the recommended approach.

So there are two ways to do it: specify the jac function and specify the jac_prototype, or build an operator and its update_coefficients! is the Jacobian. Do we really need both?

@MSeeker1340
Contributor Author

Why don't they?

The caching methods require the linear operator to be constant, and as a result they should only be used in conjunction with a SplitFunction.

So there are two ways to do it: specify the jac function and specify the jac_prototype, or build an operator and its update_coefficients! is the Jacobian. Do we really need both?

While everything that can be done with the first option can also be done with the second, I think keeping the first has its advantages. It is more lightweight for simple problems, especially out-of-place ones (we only need jac(u,p,t) -> J and no jac_prototype), and it does not lock the user into using DiffEqOperators. It also gives the user a way to override the default update_coefficients! for a custom-built operator, should the need arise.
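For instance, the lightweight out-of-place route needs nothing from DiffEqOperators at all. A hypothetical 2-dimensional decay problem, sketched under the assumption that a jac-only ODEFunction is accepted as in the PR's test script:

```julia
using OrdinaryDiffEq, LinearAlgebra

# Out-of-place problem: only jac(u,p,t) -> J is needed, no jac_prototype
f = (u,p,t) -> -u
fun = ODEFunction(f; jac=(u,p,t) -> -Matrix{Float64}(I, 2, 2))
prob = ODEProblem(fun, [1.0, 2.0], (0.0, 1.0))
sol = solve(prob, LawsonEuler(krylov=true, m=2); dt=0.1)
```

Since the problem is linear, the computed solution should stay close to exp(-t) times the initial condition.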

@ChrisRackauckas
Member

The caching methods require the linear operator to be constant, and as a result they should only be used in conjunction with a SplitFunction.

Oh yeah :).

While everything that can be done with the first option can also be done with the second, I think keeping the first has its advantages. It is more lightweight for simple problems, especially out-of-place ones (we only need jac(u,p,t) -> J and no jac_prototype), and it does not lock the user into using DiffEqOperators. It also gives the user a way to override the default update_coefficients! for a custom-built operator, should the need arise.

Yes, though I am not entirely convinced that supporting both methods for specifying Jacobians is useful. In the end we need the second to do things matrix-free, but I agree it's overkill for dense Jacobians. Maybe we'll find a better way to merge them in the future, but for now let's just keep both.

@ChrisRackauckas
Member

LGTM. I would like to see a test with a sparse matrix and a test with a matrix-free operator though.

@MSeeker1340
Contributor Author

LGTM. I would like to see a test with a sparse matrix and a test with a matrix-free operator though.

Here is a simple test script. Note that DerivativeOperator has not been updated yet, so I needed an ad-hoc definition of convert:

using OrdinaryDiffEq, DiffEqOperators, LinearAlgebra, Test, SparseArrays, Random
N = 10
# Sparse Jacobian
srand(0); u0 = normalize(randn(N))
dd = -2 * ones(N); du = ones(N-1)
A = spdiagm(-1 => du, 0 => dd, 1 => du)
f = (u,p,t) -> A*u
fun = ODEFunction(f;
                  jac=(u,p,t) -> A,
                  analytic=(u,p,t) -> exp(t*Matrix(A)) * u)
prob = ODEProblem(fun, u0, (0.0,1.0))
sol = solve(prob, LawsonEuler(krylov=true, m=N); dt=0.1)
err = norm(sol(1.0) - fun.analytic(u0,nothing,1.0))
println("Error for sparse jacobian: $err")
# Matrix-free Jacobian
# Need to implement the missing interface for DerivativeOperator first
Base.convert(::Type{AbstractArray}, L::DerivativeOperator) = sparse(L)
L = DerivativeOperator{Float64}(2,2,1.0,N,:Dirichlet0,:Dirichlet0)
fun = ODEFunction(L;
                  jac_prototype=L,
                  analytic=(u,p,t) -> exp(t*full(L)) * u)
prob = ODEProblem(fun, u0, (0.0,1.0))
sol = solve(prob, LawsonEuler(krylov=true, m=N); dt=0.1)
err = norm(sol(1.0) - fun.analytic(u0,nothing,1.0))
println("Error for matrix-free jacobian: $err")

The output is (minus the depwarns from DerivativeOperator):

Error for sparse jacobian: 5.660198284943883e-16
Error for matrix-free jacobian: 5.660198284943883e-16

By the way, about the Travis build error: I think this is probably because the Travis build is still using the DiffEqBase from before today's hotfix. The stack trace is a bit cryptic, but the convergence test runs fine on my machine.

@ChrisRackauckas
Member

Add these two tests.

By the way, about the Travis build error: I think this is probably because the Travis build is still using the DiffEqBase from before today's hotfix. The stack trace is a bit cryptic, but the convergence test runs fine on my machine.

Just merged.

@ChrisRackauckas ChrisRackauckas merged commit 6bf0221 into SciML:master Jul 18, 2018
@MSeeker1340 MSeeker1340 deleted the jacobian branch July 19, 2018 14:20