Is there a chance to expand the code base to support matrix multiplications with the Jacobian?

Here is an MWE that shows what I am trying to accomplish, based on the example from the documentation:
```julia
using SparseDiffTools
using ForwardDiff
using BenchmarkTools

fcalls = 0
function g(x)
    global fcalls += 1
    y = zero(x)
    for i in 2:length(x)-1
        y[i] = x[i-1] - 2x[i] + x[i+1]
    end
    y[1] = -2x[1] + x[2]
    y[end] = x[end-1] - 2x[end]
    y
end

x = rand(30)
J = JacVec(g, x)
V = rand(30, 50)

@benchmark for i = 1:size(V, 2)
    J * V[:, i]
end

@benchmark ForwardDiff.jacobian(g, x) * V
```
My current solution is to loop over all columns of `V`, which makes the Jacobian-free approach slower than simply computing the Jacobian with, for instance, ForwardDiff.jl and then multiplying (at least when the dimension is small, as in the example). If anyone has a faster solution based on the existing code base, I'd be very grateful.
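For reference, one way to get `J * V` without materializing the Jacobian is to seed ForwardDiff `Dual` numbers with the columns of `V` as partials, so a single call to `g` propagates all directions at once. The sketch below is only an illustration of that idea and not part of the SparseDiffTools API; the helper name `jac_times_matrix` is hypothetical, and it skips the chunking that `ForwardDiff.jacobian` would normally apply.

```julia
using ForwardDiff

# Hypothetical helper (not a SparseDiffTools function): compute J(x) * V by
# pushing every column of V through f as Dual partials in a single evaluation.
function jac_times_matrix(f, x, V)
    k = size(V, 2)
    # Each input entry carries k partials, one per column of V.
    xd = [ForwardDiff.Dual(x[i], ntuple(j -> V[i, j], k)) for i in eachindex(x)]
    yd = f(xd)                      # one call propagates all k directions
    JV = Matrix{eltype(x)}(undef, length(yd), k)
    for i in eachindex(yd), j in 1:k
        JV[i, j] = ForwardDiff.partials(yd[i], j)   # j-th directional derivative
    end
    return JV
end

jac_times_matrix(g, x, V)  # ≈ ForwardDiff.jacobian(g, x) * V
```

If staying with `JacVec`, preallocating an output vector and using `mul!(du, J, v)` on views of `V` inside the loop might at least avoid allocating a fresh column per product, assuming the operator supports in-place multiplication.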