
start fixing ambiguities #437

Merged
merged 1 commit into SciML:main from ArnoStrouwen:amb
Dec 11, 2023
Conversation

ArnoStrouwen
Member

Fixes many of the method ambiguities in this package.
Things I'm still unsure about:

First, for defaultalg there is a dispatch for AbstractSciMLOperator as well as a dispatch for OperatorAssumptions{Nothing}.
The AbstractSciMLOperator method dispatches based on A.A.
The OperatorAssumptions{Nothing} method dispatches based on issquare(A).
If both occur together, I don't know in which order they should apply.
Currently, the Nothing is resolved first, but this means issquare has to be defined for operators.
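As an aside, here is a minimal standalone sketch of why those two methods collide, using hypothetical stub definitions rather than the real LinearSolve signatures: one method narrows the matrix argument, the other narrows the assumptions argument, so a call satisfying both has no unique most-specific method.

abstract type AbstractSciMLOperator end   # stub standing in for the SciMLOperators type
struct OperatorAssumptions{T} end         # stub standing in for LinearSolve's type

# dispatch on the operator type (the real method looks at A.A):
defaultalg(A::AbstractSciMLOperator, b, assump::OperatorAssumptions) = :operator_path
# dispatch on "no assumption given" (the real method looks at issquare(A)):
defaultalg(A, b, assump::OperatorAssumptions{Nothing}) = :issquare_path

struct MyOp <: AbstractSciMLOperator end
# defaultalg(MyOp(), rand(2), OperatorAssumptions{Nothing}())
# ^ MethodError: ambiguous; a third method covering both narrowings would be needed.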

Second, there are remaining ambiguities regarding:

function init_cacheval(alg::GenericFactorization,
    A::Union{Hermitian{T, <:SparseMatrixCSC},
        Symmetric{T, <:SparseMatrixCSC}}, b, u, Pl, Pr,
    maxiters::Int, abstol, reltol, verbose::Bool,
    assumptions::OperatorAssumptions) where {T}
    newA = copy(convert(AbstractMatrix, A))
    do_factorization(alg, newA, b, u)
end

e.g. with:

function init_cacheval(alg::GenericFactorization{typeof(svd!)}, A::AbstractMatrix, b, u, Pl, Pr,
    maxiters::Int,
    abstol, reltol, verbose::Bool, assumptions::OperatorAssumptions)
    ArrayInterface.svd_instance(A)
end

I'm not sure how to resolve these; I'm assuming we want as little do_factorization as possible?
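For what it's worth, the mechanical way to silence such an ambiguity is to define a method on the intersection of the two signatures; a sketch (not necessarily the behavior we want here) would be:

# Sketch only: define the intersection of the two ambiguous signatures explicitly,
# here keeping the algorithm-specific svd_instance behavior. Whether that is the
# right semantics for sparse Hermitian/Symmetric wrappers is exactly the open question.
function init_cacheval(alg::GenericFactorization{typeof(svd!)},
        A::Union{Hermitian{T, <:SparseMatrixCSC},
            Symmetric{T, <:SparseMatrixCSC}}, b, u, Pl, Pr,
        maxiters::Int, abstol, reltol, verbose::Bool,
        assumptions::OperatorAssumptions) where {T}
    ArrayInterface.svd_instance(convert(AbstractMatrix, A))
end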

Third, for some special matrix types, such as Diagonal, there is a dispatch that always uses the same method, no matter the algorithm. But there are also fallback methods so that most algorithms work for a general A. This design choice leads to a large number of additional function definitions just to resolve the ambiguities between the two, as sketched below.
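A toy illustration of that pattern, with hypothetical names (_solve, MyFactorization) rather than the actual LinearSolve internals:

using LinearAlgebra

struct MyFactorization end  # hypothetical algorithm type

# special-matrix rule: Diagonal is always solved the same way, whatever the algorithm
_solve(alg, A::Diagonal, b) = b ./ A.diag
# algorithm fallback: works for a general A
_solve(alg::MyFactorization, A, b) = A \ b

# _solve(MyFactorization(), Diagonal(rand(3)), rand(3))  # MethodError: ambiguous
# Every (algorithm, special matrix type) pair then needs its own disambiguating method:
_solve(alg::MyFactorization, A::Diagonal, b) = b ./ A.diag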

codecov bot commented Dec 3, 2023

Codecov Report

Attention: 47 lines in your changes are missing coverage. Please review.

Comparison is base (f87583e) 66.50% compared to head (3099e72) 65.44%.

Files                  Patch %   Lines
src/factorization.jl   12.24%    43 Missing ⚠️
src/default.jl         63.63%    4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #437      +/-   ##
==========================================
- Coverage   66.50%   65.44%   -1.06%     
==========================================
  Files          27       27              
  Lines        2039     2072      +33     
==========================================
  Hits         1356     1356              
- Misses        683      716      +33     


Review comment on the changed signature:

- function init_cacheval(::FastQRFactorization{Val{true}}, A, b, u, Pl, Pr,
+ function init_cacheval(::FastQRFactorization{Val{true}}, A::AbstractMatrix, b, u, Pl, Pr,

Member
what about the operator dispatches?

Member Author

This should still catch them:

for alg in InteractiveUtils.subtypes(AbstractFactorization)
    @eval function init_cacheval(alg::$alg, A::MatrixOperator, b, u, Pl, Pr,
            maxiters::Int, abstol, reltol, verbose::Bool,
            assumptions::OperatorAssumptions)
        init_cacheval(alg, A.A, b, u, Pl, Pr,
            maxiters::Int, abstol, reltol, verbose::Bool,
            assumptions::OperatorAssumptions)
    end
end

Member

That's only for MatrixOperator. What about the other AbstractSciMLOperators?

Member Author

There is no dispatch for those, but note that this is currently not working anyway:

using LinearSolve
using SciMLOperators

N = 4
f = (u, p, t) -> u .* u

M = MatrixOperator(rand(N, N))
D = DiagonalOperator(rand(N))
F = FunctionOperator(f, zeros(N), zeros(N))

b = rand(4)

solve(LinearProblem(rand(N, N), b), FastQRFactorization()) # works
solve(LinearProblem(M, b), FastQRFactorization()) # fails
solve(LinearProblem(D, b), FastQRFactorization()) # fails
solve(LinearProblem(F, b), FastQRFactorization()) # fails

julia> solve(LinearProblem(M, b),FastQRFactorization())
ERROR: MethodError: init_cacheval(::FastQRFactorization{LinearAlgebra.NoPivot}, ::MatrixOperator{Float64, Matrix{Float64}, SciMLOperators.FilterKwargs{typeof(SciMLOperators.DEFAULT_UPDATE_FUNC), Tuple{}}, SciMLOperators.FilterKwargs{typeof(SciMLOperators.DEFAULT_UPDATE_FUNC), Tuple{}}}, ::Vector{Float64}, ::Vector{Float64}, ::IdentityOperator, ::IdentityOperator, ::Int64, ::Float64, ::Float64, ::Bool, ::OperatorAssumptions{Bool}) is ambiguous.

Candidates:
init_cacheval(alg::FastQRFactorization, A::MatrixOperator, b, u, Pl, Pr, maxiters::Int64, abstol, reltol, verbose::Bool, assumptions::OperatorAssumptions)
@ LinearSolve C:\Users\arno\.julia\packages\LinearSolve\8EgR1\src\factorization.jl:1180
init_cacheval(alg::FastQRFactorization{LinearAlgebra.NoPivot}, A, b, u, Pl, Pr, maxiters::Int64, abstol, reltol, verbose::Bool, assumptions::OperatorAssumptions)
@ LinearSolve C:\Users\arno\.julia\packages\LinearSolve\8EgR1\src\factorization.jl:1066

Possible fix, define
init_cacheval(::FastQRFactorization{LinearAlgebra.NoPivot}, ::MatrixOperator, ::Any, ::Any, ::Any, ::Any, ::Int64, ::Any, ::Any, ::Bool, ::OperatorAssumptions)

Member

Oh, that was working before?

Member Author

For it to ever have worked for FunctionOperators, it would have needed this to be defined:

WorkspaceAndFactors(ws, ArrayInterface.qr_instance(A))

Anyway, the solution for DiagonalOperators seems straightforward to me. Just add the same block of code as for MatrixOperators. But how do you want to handle FunctionOperators?

@ChrisRackauckas
Member

Resolve the conflicts?

@ChrisRackauckas
Member

@avik-pal can you review this one?

@avik-pal
Member

When can issquare be nothing? If we can't resolve the dimensions, can't we assume rectangular?

@avik-pal
Member

Oh, I see, it is to distinguish the case where no assumption was provided by the user. This PR seems fine in that case.

@ChrisRackauckas merged commit bda160c into SciML:main on Dec 11, 2023
10 of 14 checks passed
@ArnoStrouwen deleted the amb branch on May 13, 2024