
Allow Linear Solver Choice #83

Closed
ChrisRackauckas opened this issue Jan 23, 2017 · 11 comments

Comments

@ChrisRackauckas
Contributor

ChrisRackauckas commented Jan 23, 2017

As discussed here:

https://discourse.julialang.org/t/unified-interface-for-linear-solving/699/14

using factorization objects lets the user choose the linear solving technique, which can greatly affect the performance of the method. If the interface is exposed so that someone can pass a factorization object, say lufact!, then they could also use PETSc.jl and other packages' solvers with the problem.

The implementation shouldn't be too difficult. Internally, you'd change:

p = fjac\fvec

to

factorized_fjac = factorization(fjac)
p = factorized_fjac\fvec

and give a default. If you make factorization=factorize the default, it should behave the same as before (there are type-stability issues with doing this as a kwarg though, so some care would need to be taken).
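A minimal sketch of what this could look like. The function and argument names here are illustrative, not NLsolve's actual internals; `factorize` and `lu` are from the LinearAlgebra standard library:

```julia
# Hypothetical sketch of the proposed interface: a `factorization` keyword
# (defaulting to `factorize`) selects how the Jacobian is factored before
# the linear solve in the Newton step.
using LinearAlgebra

function newton_step(fjac, fvec; factorization = factorize)
    factorized_fjac = factorization(fjac)  # e.g. lu, cholesky, qr, ...
    p = factorized_fjac \ fvec             # solve with the chosen factorization
    return p
end

A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
p_default = newton_step(A, b)                      # uses factorize
p_lu      = newton_step(A, b; factorization = lu)  # explicit LU
```

Passing `cholesky` instead would exploit symmetry, and a wrapper around an external package's solver would slot in the same way.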

@sglyon
Collaborator

sglyon commented Jan 24, 2017

Sounds good. I don't have time to implement myself right now, but if you are up for it a PR would be great.

@ChrisRackauckas
Contributor Author

But what's a good type-stable way to pass it?

@sglyon
Collaborator

sglyon commented Jan 24, 2017

Part of the design puzzle 😉

@KristofferC
Collaborator

Why would it matter? There is a function barrier before anything gets done anyway. Could just make a keyword argument linearsolve such that linearsolve(A, b) -> x that defaults to \ or something. Type stability is important, but it is also important to know when it matters.

@ChrisRackauckas
Contributor Author

ChrisRackauckas commented Jan 24, 2017

I didn't know there was a function barrier. I am not familiar with all of the internals, which is why I was asking. But that solves the problem just fine.

I'll make a PR which names it factorization to pass a function for the factorization type, because that's what already works (it should already work with a few packages, and will work with more soon). No reason to special-case NLsolve, and maybe there's a way to use the factorization caching anyway.

@KristofferC
Collaborator

I meant that keyword arguments in general are behind function barriers. In other words, usage of a keyword argument inside functions is not slow, what is slow is repeatedly calling a keyword argument function. However, it is not slow compared to the time it takes to solve a nonlinear system.

@ChrisRackauckas
Contributor Author

ChrisRackauckas commented Jan 24, 2017

Yes, now that I see how you've implemented this

https://github.com/EconForge/NLsolve.jl/blob/master/src/newton.jl#L29

that's clearly the case, so a keyword arg is fine. I didn't know if it would get passed down as a keyword arg, with the factorization function being called inside the loop that does the `\`. Then there would definitely be a performance hit when solving small systems. But with the way things are set up it's fine.

I meant that keyword arguments in general are behind function barriers.

And that's not true in general. For this package, though, that seems to be the case.

@KristofferC
Collaborator

I was talking in general:

julia> g(x; k = 1) = k + k
g (generic function with 1 method)

julia> @code_lowered g(1; k = 2)
CodeInfo(:(begin 
        k = 1
        SSAValue(0) = (Main.colon)(1,(Base.length)(#temp#@_2) >> 1)
        #temp#@_8 = (Base.start)(SSAValue(0))
        4: 
        unless !((Base.done)(SSAValue(0),#temp#@_8)) goto 18
        SSAValue(1) = (Base.next)(SSAValue(0),#temp#@_8)
        #temp#@_5 = (Core.getfield)(SSAValue(1),1)
        #temp#@_8 = (Core.getfield)(SSAValue(1),2)
        #temp#@_6 = #temp#@_5 * 2 - 1
        #temp#@_7 = (Core.arrayref)(#temp#@_2,#temp#@_6)
        unless #temp#@_7 === :k goto 14
        k = (Core.arrayref)(#temp#@_2,#temp#@_6 + 1)
        goto 16
        14: 
        (Base.kwerr)(#temp#@_2,,x)
        16: 
        goto 4
        18: 
        return (Main.#g#2)(k,,x)
    end))

The last return is the function barrier.

@KristofferC
Collaborator

As a concrete example: these have the same performance

julia> function g(k, n) 
           s = zero(k)
           for i in 1:n
               s+= rand() + k
           end
           return s
       end
g (generic function with 2 methods)

julia> function f(;k=1, n=10^7) 
           s = zero(k)
           for i in 1:n
               s+= rand() + k
           end
           return s
       end

julia> @time f(;k = 1.0, n = 10^7)
  0.019976 seconds (7 allocations: 304 bytes)
1.4998998023834309e7

julia> @time g(1.0, 10^7)
  0.018618 seconds (6 allocations: 192 bytes)
1.5001034781598404e7

As long as you do enough work in the function, the keyword argument dispatch is not a problem. It is calling a keyword function in a hot loop for example that is slow.

@ChrisRackauckas
Contributor Author

I see. For reference, I am also including a benchmark for the version with a function. I thought boxing/unboxing the function would cause more issues:

function g(k, n)
    s = zero(k)
    for i in 1:n
        s += rand() + k
    end
    return s
end

function f(; k = 1, n = 10^7, rand_func = randn)
    s = zero(k)
    for i in 1:n
        s += rand_func() + k
    end
    return s
end

using BenchmarkTools
@benchmark f(; k = 1.0, n = 10^7, rand_func = rand)
@benchmark g(1.0, 10^7)

BenchmarkTools.Trial: 
  memory estimate:  176.00 bytes
  allocs estimate:  4
  --------------
  minimum time:     25.675 ms (0.00% GC)
  median time:      25.815 ms (0.00% GC)
  mean time:        25.818 ms (0.00% GC)
  maximum time:     26.000 ms (0.00% GC)
  --------------
  samples:          194
  evals/sample:     1
  time tolerance:   5.00%
  memory tolerance: 1.00%

BenchmarkTools.Trial: 
  memory estimate:  0.00 bytes
  allocs estimate:  0
  --------------
  minimum time:     25.682 ms (0.00% GC)
  median time:      25.881 ms (0.00% GC)
  mean time:        25.884 ms (0.00% GC)
  maximum time:     26.583 ms (0.00% GC)
  --------------
  samples:          194
  evals/sample:     1
  time tolerance:   5.00%
  memory tolerance: 1.00%

But that's not the case. Okay, I'm convinced.

@ChrisRackauckas ChrisRackauckas changed the title Allow passing a factorization object Allow Linear Solver Choice Aug 1, 2017
@ChrisRackauckas
Contributor Author

@KristofferC came up with this one in a chat a while back. I liked it and made it the design for DiffEq. I think it could be useful here.

http://docs.juliadiffeq.org/latest/features/linear_nonlinear.html#Linear-Solvers:-linsolve-Specification-1

Essentially it's just that the user has to give a function

linsolve!(x,A,b,matrix_updated=false)

Using call-overloading of a type and other jazz, a user can make that do whatever they need.
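As a hedged illustration of that design (the type and field names are hypothetical, and `lu`/`ldiv!` come from the LinearAlgebra standard library), a call-overloaded type matching the `linsolve!(x, A, b, matrix_updated)` signature could cache its factorization like this:

```julia
# Hypothetical call-overloaded linear solver that caches an LU factorization
# and only refactorizes when told the matrix has changed.
using LinearAlgebra

mutable struct CachedLU
    fact::Union{Nothing, LU}
end
CachedLU() = CachedLU(nothing)

function (solver::CachedLU)(x, A, b, matrix_updated = false)
    if matrix_updated || solver.fact === nothing
        solver.fact = lu(A)   # refactorize only when A was updated
    end
    ldiv!(x, solver.fact, b)  # reuse the cached factorization otherwise
    return x
end

my_linsolve! = CachedLU()
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
x = zeros(2)
my_linsolve!(x, A, b, true)   # factorizes A and solves in-place
```

Repeated calls with `matrix_updated = false` then skip the factorization entirely, which is the payoff when the Jacobian is reused across iterations.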
