Allow Linear Solver Choice #83
Comments
Sounds good. I don't have time to implement it myself right now, but if you are up for it, a PR would be great.
But what's a good type-stable way to pass it?
Part of the design puzzle 😉
Why would it matter? There is a function barrier before anything gets done anyway. Could just make it a keyword argument.
I didn't know there was a function barrier. I am not familiar with all of the internals, which is why I was asking. But that solves the problem just fine. I'll make a PR which names it
I meant that keyword arguments in general are behind function barriers. In other words, using a keyword argument inside a function is not slow; what is slow is repeatedly calling a keyword-argument function. Even that, however, is not slow compared to the time it takes to solve a nonlinear system.
Yes, now that I see how you've implemented this (https://github.com/EconForge/NLsolve.jl/blob/master/src/newton.jl#L29), that's clearly the case, so a keyword arg is fine. I didn't know whether it would get passed down as a keyword arg and have that factorization function be called inside the iteration loop, but that's not the case for this package, it seems.
I was talking in general:

```julia
julia> g(x; k = 1) = k + k
g (generic function with 1 method)

julia> @code_lowered g(1; k = 2)
CodeInfo(:(begin
        k = 1
        SSAValue(0) = (Main.colon)(1,(Base.length)(#temp#@_2) >> 1)
        #temp#@_8 = (Base.start)(SSAValue(0))
        4:
        unless !((Base.done)(SSAValue(0),#temp#@_8)) goto 18
        SSAValue(1) = (Base.next)(SSAValue(0),#temp#@_8)
        #temp#@_5 = (Core.getfield)(SSAValue(1),1)
        #temp#@_8 = (Core.getfield)(SSAValue(1),2)
        #temp#@_6 = #temp#@_5 * 2 - 1
        #temp#@_7 = (Core.arrayref)(#temp#@_2,#temp#@_6)
        unless #temp#@_7 === :k goto 14
        k = (Core.arrayref)(#temp#@_2,#temp#@_6 + 1)
        goto 16
        14:
        (Base.kwerr)(#temp#@_2,,x)
        16:
        goto 4
        18:
        return (Main.#g#2)(k,,x)
end))
```

The last `return` is the function barrier.
As a concrete example, these have the same performance:

```julia
julia> function g(k, n)
           s = zero(k)
           for i in 1:n
               s += rand() + k
           end
           return s
       end
g (generic function with 2 methods)

julia> function f(; k = 1, n = 10^7)
           s = zero(k)
           for i in 1:n
               s += rand() + k
           end
           return s
       end

julia> @time f(; k = 1.0, n = 10^7)
  0.019976 seconds (7 allocations: 304 bytes)
1.4998998023834309e7

julia> @time g(1.0, 10^7)
  0.018618 seconds (6 allocations: 192 bytes)
1.5001034781598404e7
```

As long as you do enough work in the function, the keyword-argument dispatch is not a problem. It is calling a keyword function in a hot loop, for example, that is slow.
I see. For reference, I am also including a benchmark for the version where the function is passed in as a keyword argument. I thought boxing/unboxing the function would cause more issues:

```julia
function g(k, n)
    s = zero(k)
    for i in 1:n
        s += rand() + k
    end
    return s
end

function f(; k = 1, n = 10^7, rand_func = randn)
    s = zero(k)
    for i in 1:n
        s += rand_func() + k
    end
    return s
end

using BenchmarkTools
@benchmark f(; k = 1.0, n = 10^7, rand_func = rand)  # pass the function itself, not rand()
@benchmark g(1.0, 10^7)
```

```
BenchmarkTools.Trial:
  memory estimate:  176.00 bytes
  allocs estimate:  4
  --------------
  minimum time:     25.675 ms (0.00% GC)
  median time:      25.815 ms (0.00% GC)
  mean time:        25.818 ms (0.00% GC)
  maximum time:     26.000 ms (0.00% GC)
  --------------
  samples:          194
  evals/sample:     1
  time tolerance:   5.00%
  memory tolerance: 1.00%

BenchmarkTools.Trial:
  memory estimate:  0.00 bytes
  allocs estimate:  0
  --------------
  minimum time:     25.682 ms (0.00% GC)
  median time:      25.881 ms (0.00% GC)
  mean time:        25.884 ms (0.00% GC)
  maximum time:     26.583 ms (0.00% GC)
  --------------
  samples:          194
  evals/sample:     1
  time tolerance:   5.00%
  memory tolerance: 1.00%
```

But that's not the case. Okay, I'm convinced.
@KristofferC came up with this one in a chat a while back. I liked it and made it the design for DiffEq. I think it could be useful here. Essentially, the user just has to give a function `linsolve!(x, A, b, matrix_updated = false)`. Using call-overloading of a type and other jazz, a user can make that do whatever they need.
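As a minimal sketch of what such a call-overloaded solver could look like (the type name `CachedLUSolver` and its field are invented for illustration; this is not NLsolve's or DiffEq's actual code):

```julia
using LinearAlgebra

# Hypothetical caching solver: refactorizes only when told the matrix changed.
mutable struct CachedLUSolver
    fact::Union{Nothing, LU}
end
CachedLUSolver() = CachedLUSolver(nothing)

# Call-overloading: the object itself acts as the `linsolve!` function.
function (s::CachedLUSolver)(x, A, b, matrix_updated = false)
    if matrix_updated || s.fact === nothing
        s.fact = lu(A)      # recompute the factorization
    end
    ldiv!(x, s.fact, b)     # reuse it for the solve
    return x
end

# The default could simply be a plain backslash solve:
default_linsolve!(x, A, b, matrix_updated = false) = (x .= A \ b)
```

Because the solver object carries its own cache, repeated calls with `matrix_updated = false` skip the factorization entirely, while a plain function like `default_linsolve!` still satisfies the same interface.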
As discussed here:
https://discourse.julialang.org/t/unified-interface-for-linear-solving/699/14
using factorization objects lets the user choose the linear solving technique, which can greatly affect the performance of the method. If the interface allows passing a factorization function, say `lufact!`, then users could also plug in PETSc.jl and other packages' solvers. The implementation shouldn't be too difficult. Internally, you'd change:

to

and give a default. If you make `factorization = factorize` the default, it should work the same as before (there are type-stability issues with doing this as a kwarg, though, so some care would be needed).
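To make the kwarg idea concrete, here is a hedged sketch; the function `newton_step` and its signature are invented for illustration and are not NLsolve's actual internals:

```julia
using LinearAlgebra

# Hypothetical inner routine: `factorization` is any function mapping the
# Jacobian to an object that supports `\` (e.g. `factorize`, `lu`, `cholesky`).
function newton_step(J, fvec; factorization = factorize)
    F = factorization(J)    # user-chosen factorization of the Jacobian
    return -(F \ fvec)      # solve F * step = -fvec for the Newton step
end

J = [4.0 1.0; 1.0 3.0]
fvec = [1.0, 2.0]
step_default = newton_step(J, fvec)                      # uses `factorize`
step_lu      = newton_step(J, fvec; factorization = lu)  # forces LU
```

Any factorization routine with the same calling convention, including ones wrapping external solvers, slots in without changing the inner loop.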