
Conversation

@amontoison
Member

No description provided.

@amontoison amontoison changed the title Add a kwarg storatetype for PreallocatedLinearOperators Add a kwarg storagetype for PreallocatedLinearOperators Mar 24, 2020
@coveralls

coveralls commented Mar 24, 2020

Coverage Status

Coverage remained the same at 96.308% when pulling 0cc7b85 on amontoison:storagetype into 1ebf3d6 on JuliaSmoothOptimizers:master.

@codecov

codecov bot commented Mar 24, 2020

Codecov Report

Merging #141 into master will not change coverage.
The diff coverage is 100%.


@@          Coverage Diff           @@
##           master    #141   +/-   ##
======================================
  Coverage    96.3%   96.3%           
======================================
  Files          14      14           
  Lines         623     623           
======================================
  Hits          600     600           
  Misses         23      23
Impacted Files Coverage Δ
src/PreallocatedLinearOperators.jl 100% <100%> (ø) ⬆️


Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@dpo
Member

dpo commented Mar 24, 2020

Please give some background and reasoning for this change.

@amontoison
Member Author

A PreallocatedLinearOperator can currently only be preallocated with Vector{T}, the standard CPU storage. But if I want to use it on a GPU, I must preallocate with CuVector{T}. This pull request allows that.
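For context, a hypothetical sketch of what this PR enables (the `storagetype` kwarg name comes from the PR title; the CUDA.jl types and exact call signature are assumptions, untested):

```julia
using LinearOperators, CUDA  # CUDA.jl assumed available

# CPU (current behavior): internal buffers default to Vector{Float64}
A = rand(10, 10)
op_cpu = PreallocatedLinearOperator(A)

# GPU (with this PR): request CuVector buffers instead
Agpu = CuMatrix(A)
op_gpu = PreallocatedLinearOperator(Agpu, storagetype = CuVector{Float64})
v = CUDA.rand(Float64, 10)
op_gpu * v  # product writes into the preallocated CuVector
```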

@dpo
Member

dpo commented Mar 24, 2020

In other constructors, the user must pass in the storage (with the type that they want). The main constructor was intended to do things automatically, but now if the user needs to specify the storage type, it seems to me that it defeats the purpose. Isn't it just as easy to always ask the user to pass in the storage?

Something like (untested):

# main constructor, assume symmetric==false and hermitian==false
function PreallocatedLinearOperator(Mv :: AbstractVector{T},
                                    Mtu :: AbstractVector{T},
                                    Maw :: AbstractVector{T},
                                    M :: AbstractMatrix{T}) where T
  nrow, ncol = size(M)
  @assert length(Mv) == nrow
  @assert length(Mtu) == ncol
  @assert length(Maw) == ncol
  prod = @closure v -> mul!(Mv, M, v)
  tprod = @closure u -> mul!(Mtu, transpose(M), u)
  ctprod = @closure w -> mul!(Maw, adjoint(M), w)
  PreallocatedLinearOperator{T}(nrow, ncol, false, false, prod, tprod, ctprod)
end

# convenience constructors
function PreallocatedLinearOperator(Mv :: AbstractVector{T},
                                    M :: Hermitian{T, <:AbstractMatrix{T}}) where T
  PreallocatedLinearOperator(Mv, Mv, Mv, M)
end

function PreallocatedLinearOperator(Mv :: AbstractVector{T},
                                    M :: Symmetric{T, <:AbstractMatrix{T}}) where T
  PreallocatedLinearOperator(Mv, Mv, Mv, M)
end

In the main constructor, there's nothing to prevent the user from passing in three times the same array if they know that products with the transposed and adjoint won't be needed. The assertions could be removed if they're annoying.
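The design above also covers the GPU case without a kwarg, since the caller controls the storage type by passing the buffers directly (hypothetical, untested sketch; CUDA.jl names are assumed):

```julia
using LinearOperators, CUDA, LinearAlgebra

M   = CUDA.rand(Float64, 4, 6)     # 4×6 GPU matrix
Mv  = CuVector{Float64}(undef, 4)  # buffer for M * v
Mtu = CuVector{Float64}(undef, 6)  # buffer for transpose(M) * u
Maw = CuVector{Float64}(undef, 6)  # buffer for adjoint(M) * w
op  = PreallocatedLinearOperator(Mv, Mtu, Maw, M)
```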

What's missing in this design?

@amontoison
Member Author

I also updated this generic function; Mv, Mtu, and Maw were restricted to be Vector{T}.
For the variants, is an extra kwarg really that annoying when it serves very specific applications?
99% of users won't see the difference because they mainly work on the CPU.

@dpo dpo merged commit 560071b into JuliaSmoothOptimizers:master Mar 30, 2020
@dpo
Member

dpo commented Mar 30, 2020

Ok, thank you.
