
Conversation

albertomercurio
Contributor

mul! should not be overlayed for sparse arrays, since they require custom methods.

This PR fixes #1296

Comment on lines 115 to 116
(:AbstractVector, :AbstractMatrix, :AbstractVector),
(:AbstractMatrix, :AbstractMatrix, :AbstractVecOrMat),
Collaborator


Should we make cT and bT Dense as well?

Contributor Author


I was only constraining the matrix argument to be dense, to keep the method more general. But I can add the constraint to the other two arguments as well.

Contributor Author


Should I make them DenseArrays as well?

Collaborator


Seems like this prevents dispatches for Triangular matrices and such from going down this route.
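To illustrate the concern in plain Julia (independent of Reactant): a DenseArray constraint on the signature excludes structured wrapper types such as UpperTriangular, not only sparse matrices.

```julia
using LinearAlgebra, SparseArrays

# A DenseArray/DenseMatrix bound rules out more than just sparse inputs:
A = rand(3, 3)

A isa DenseMatrix                   # true:  plain Matrix is dense
UpperTriangular(A) isa DenseMatrix  # false: Triangular wrappers would miss the overlay
sparse(A) isa DenseMatrix           # false: sparse matrices are excluded, as intended
```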

@albertomercurio
Contributor Author

Most importantly, this PR should wait for #1696 to be merged first.

@albertomercurio
Contributor Author

I have relaxed the method back to AbstractMatrix and moved the check condition directly inside the overlayed function.
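A minimal sketch of that kind of runtime check, in plain Julia. checked_mul! is a hypothetical stand-in, not Reactant's actual overlay; only the inference-barrier fallback mirrors the call named below.

```julia
using LinearAlgebra, SparseArrays

# Accept AbstractMatrix in the signature, but route sparse inputs to their
# custom mul! methods through an inference barrier at run time.
function checked_mul!(C, A::AbstractMatrix, B::AbstractVecOrMat, α::Number, β::Number)
    if parent(A) isa AbstractSparseArray
        # sparse matrices need their custom methods; defer dispatch to run time
        return Base.inferencebarrier(LinearAlgebra.mul!)(C, A, B, α, β)
    end
    # dense path: stand-in for the traced overloaded_mul!
    return LinearAlgebra.mul!(C, A, B, α, β)
end
```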

When using a sparse matrix, it correctly calls Base.inferencebarrier(LinearAlgebra.mul!)(C, A, B, α, β) rather than TracedLinearAlgebra.overloaded_mul!(C2, A2, B2, α, β). However, I get the following error

ERROR: LoadError: "Cannot trace existing trace type"
Stacktrace:
  [1] #make_tracer#142
    @ ~/.julia/dev/Reactant/src/Tracing.jl:1292
  [2] prepare_mlir_fn_args(args::Tuple{Int64, Nothing, KernelAbstractions.Kernel{ReactantKernelAbstractionsExt.ReactantBackend, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.DynamicSize, typeof(gpu_spmv_kernel!)}, Reactant.TracedRArray{Float64, 1}, GenericSparseMatrixCSR{Float64, Int64, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Float64, 1}}, Reactant.TracedRArray{Float64, 1}}, name::String, concretein::Bool, toscalar::Bool, argprefix::Symbol, runtime::Val{:PJRT}, optimize_then_pad::Bool, do_transpose::Bool, input_shardings::Nothing, verify_arg_names::Nothing)
    @ Reactant.TracedUtils ~/.julia/dev/Reactant/src/TracedUtils.jl:453
  [3] make_mlir_fn(f::typeof(ReactantKernelAbstractionsExt.tokw), args::Tuple{Int64, Nothing, KernelAbstractions.Kernel{ReactantKernelAbstractionsExt.ReactantBackend, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.DynamicSize, typeof(gpu_spmv_kernel!)}, Reactant.TracedRArray{Float64, 1}, GenericSparseMatrixCSR{Float64, Int64, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Float64, 1}}, Reactant.TracedRArray{Float64, 1}}, kwargs::@NamedTuple{}, name::String, concretein::Bool; toscalar::Bool, return_dialect::Symbol, args_in_result::Symbol, construct_function_without_args::Bool, do_transpose::Bool, input_shardings::Nothing, output_shardings::Nothing, runtime::Val{:PJRT}, verify_arg_names::Nothing, argprefix::Symbol, resprefix::Symbol, resargprefix::Symbol, num_replicas::Int64, optimize_then_pad::Bool)
    @ Reactant.TracedUtils ~/.julia/dev/Reactant/src/TracedUtils.jl:324
  [4] compile_mlir!(mod::Reactant.MLIR.IR.Module, f::Function, args::Tuple{Int64, Nothing, KernelAbstractions.Kernel{ReactantKernelAbstractionsExt.ReactantBackend, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.DynamicSize, typeof(gpu_spmv_kernel!)}, Reactant.TracedRArray{Float64, 1}, GenericSparseMatrixCSR{Float64, Int64, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Float64, 1}}, Reactant.TracedRArray{Float64, 1}}, compile_options::CompileOptions, callcache::Dict{Vector, @NamedTuple{f_name::String, mlir_result_types::Vector{Reactant.MLIR.IR.Type}, traced_result, mutated_args::Vector{Int64}, linear_results::Vector{Union{ReactantCore.MissingTracedValue, Reactant.TracedRArray, Reactant.TracedRNumber}}, fnwrapped::Bool, argprefix::Symbol, resprefix::Symbol, resargprefix::Symbol}}, sdycache::Dict{Tuple{AbstractVector{Int64}, NTuple{var"#s1742", Symbol} where var"#s1742", NTuple{N, Int64} where N}, @NamedTuple{sym_name::Reactant.MLIR.IR.Attribute, mesh_attr::Reactant.MLIR.IR.Attribute, mesh_op::Reactant.MLIR.IR.Operation, mesh::Reactant.Sharding.Mesh}}; fn_kwargs::@NamedTuple{}, backend::String, runtime::Val{:PJRT}, legalize_stablehlo_to_mhlo::Bool, kwargs::@Kwargs{})
    @ Reactant.Compiler ~/.julia/dev/Reactant/src/Compiler.jl:1603
  [5] compile_mlir! (repeats 2 times)
    @ ~/.julia/dev/Reactant/src/Compiler.jl:1570 [inlined]
  [6] compile_xla(f::Function, args::Tuple{Int64, Nothing, KernelAbstractions.Kernel{ReactantKernelAbstractionsExt.ReactantBackend, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.DynamicSize, typeof(gpu_spmv_kernel!)}, Reactant.TracedRArray{Float64, 1}, GenericSparseMatrixCSR{Float64, Int64, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Float64, 1}}, Reactant.TracedRArray{Float64, 1}}; before_xla_optimizations::Bool, client::Nothing, serializable::Bool, kwargs::@Kwargs{compile_options::CompileOptions, fn_kwargs::@NamedTuple{}})
    @ Reactant.Compiler ~/.julia/dev/Reactant/src/Compiler.jl:3492
  [7] compile_xla
    @ ~/.julia/dev/Reactant/src/Compiler.jl:3465 [inlined]
  [8] compile(f::Function, args::Tuple{Int64, Nothing, KernelAbstractions.Kernel{ReactantKernelAbstractionsExt.ReactantBackend, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.DynamicSize, typeof(gpu_spmv_kernel!)}, Reactant.TracedRArray{Float64, 1}, GenericSparseMatrixCSR{Float64, Int64, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Float64, 1}}, Reactant.TracedRArray{Float64, 1}}; kwargs::@Kwargs{fn_kwargs::@NamedTuple{}, client::Nothing, reshape_propagate::Symbol, raise_first::Bool, assert_nonallocating::Bool, legalize_chlo_to_stablehlo::Bool, transpose_propagate::Symbol, donated_args::Symbol, optimize_then_pad::Bool, cudnn_hlo_optimize::Bool, compile_options::Missing, sync::Bool, no_nan::Bool, raise::Bool, shardy_passes::Symbol, optimize::Bool, optimize_communications::Bool})
    @ Reactant.Compiler ~/.julia/dev/Reactant/src/Compiler.jl:3567
  [9] macro expansion
    @ ~/.julia/dev/Reactant/src/Compiler.jl:2642 [inlined]
 [10] (::KernelAbstractions.Kernel{ReactantKernelAbstractionsExt.ReactantBackend, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.DynamicSize, typeof(gpu_spmv_kernel!)})(::Reactant.TracedRArray{Float64, 1}, ::Vararg{Any}; ndrange::Int64, workgroupsize::Nothing)
    @ ReactantKernelAbstractionsExt ~/.julia/dev/Reactant/ext/ReactantKernelAbstractionsExt.jl:107
 [11] Kernel
    @ ~/.julia/dev/Reactant/ext/ReactantKernelAbstractionsExt.jl:103 [inlined]
 [12] spmv!
    @ ~/.julia/dev/Reactant/test_sparse_debug.jl:46 [inlined]
 [13] mul!(y::Reactant.TracedRArray{Float64, 1}, A::GenericSparseMatrixCSR{Float64, Int64, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Float64, 1}}, x::Reactant.TracedRArray{Float64, 1}, α::Bool, β::Bool)
    @ Main ~/.julia/dev/Reactant/test_sparse_debug.jl:64
 [14] #mul!
    @ ~/.julia/dev/Reactant/src/Overlay.jl:136 [inlined]
 [15] (::Nothing)(none::typeof(mul!), none::Reactant.TracedRArray{Float64, 1}, none::GenericSparseMatrixCSR{Float64, Int64, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Float64, 1}}, none::Reactant.TracedRArray{Float64, 1}, none::Bool, none::Bool)
    @ Reactant ./<missing>:0
 [16] call_with_reactant(::typeof(mul!), ::Reactant.TracedRArray{Float64, 1}, ::GenericSparseMatrixCSR{Float64, Int64, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Int64, 1}, Reactant.TracedRArray{Float64, 1}}, ::Reactant.TracedRArray{Float64, 1}, ::Bool, ::Bool)
    @ Reactant ~/.julia/dev/Reactant/src/utils.jl:519
 [17] make_mlir_fn(f::typeof(mul!), args::Tuple{ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, GenericSparseMatrixCSR{Float64, Int64, ConcretePJRTArray{Int64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, ConcretePJRTArray{Int64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}}, ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, Bool, Bool}, kwargs::@NamedTuple{}, name::String, concretein::Bool; toscalar::Bool, return_dialect::Symbol, args_in_result::Symbol, construct_function_without_args::Bool, do_transpose::Bool, input_shardings::Nothing, output_shardings::Nothing, runtime::Val{:PJRT}, verify_arg_names::Nothing, argprefix::Symbol, resprefix::Symbol, resargprefix::Symbol, num_replicas::Int64, optimize_then_pad::Bool)
    @ Reactant.TracedUtils ~/.julia/dev/Reactant/src/TracedUtils.jl:348
 [18] compile_mlir!(mod::Reactant.MLIR.IR.Module, f::Function, args::Tuple{ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, GenericSparseMatrixCSR{Float64, Int64, ConcretePJRTArray{Int64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, ConcretePJRTArray{Int64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}}, ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, Bool, Bool}, compile_options::CompileOptions, callcache::Dict{Vector, @NamedTuple{f_name::String, mlir_result_types::Vector{Reactant.MLIR.IR.Type}, traced_result, mutated_args::Vector{Int64}, linear_results::Vector{Union{ReactantCore.MissingTracedValue, Reactant.TracedRArray, Reactant.TracedRNumber}}, fnwrapped::Bool, argprefix::Symbol, resprefix::Symbol, resargprefix::Symbol}}, sdycache::Dict{Tuple{AbstractVector{Int64}, NTuple{var"#s1742", Symbol} where var"#s1742", NTuple{N, Int64} where N}, @NamedTuple{sym_name::Reactant.MLIR.IR.Attribute, mesh_attr::Reactant.MLIR.IR.Attribute, mesh_op::Reactant.MLIR.IR.Operation, mesh::Reactant.Sharding.Mesh}}; fn_kwargs::@NamedTuple{}, backend::String, runtime::Val{:PJRT}, legalize_stablehlo_to_mhlo::Bool, kwargs::@Kwargs{})
    @ Reactant.Compiler ~/.julia/dev/Reactant/src/Compiler.jl:1603
 [19] compile_mlir! (repeats 2 times)
    @ ~/.julia/dev/Reactant/src/Compiler.jl:1570 [inlined]
 [20] compile_xla(f::Function, args::Tuple{ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, GenericSparseMatrixCSR{Float64, Int64, ConcretePJRTArray{Int64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, ConcretePJRTArray{Int64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}}, ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, Bool, Bool}; before_xla_optimizations::Bool, client::Nothing, serializable::Bool, kwargs::@Kwargs{compile_options::CompileOptions, fn_kwargs::@NamedTuple{}})
    @ Reactant.Compiler ~/.julia/dev/Reactant/src/Compiler.jl:3492
 [21] compile_xla
    @ ~/.julia/dev/Reactant/src/Compiler.jl:3465 [inlined]
 [22] compile(f::Function, args::Tuple{ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, GenericSparseMatrixCSR{Float64, Int64, ConcretePJRTArray{Int64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, ConcretePJRTArray{Int64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}}, ConcretePJRTArray{Float64, 1, 1, Reactant.Sharding.ShardInfo{Reactant.Sharding.NoSharding, Nothing}}, Bool, Bool}; kwargs::@Kwargs{fn_kwargs::@NamedTuple{}, client::Nothing, reshape_propagate::Symbol, raise_first::Bool, assert_nonallocating::Bool, serializable::Bool, legalize_chlo_to_stablehlo::Bool, transpose_propagate::Symbol, donated_args::Symbol, optimize_then_pad::Bool, cudnn_hlo_optimize::Bool, compile_options::Missing, sync::Bool, no_nan::Bool, raise::Bool, shardy_passes::Symbol, optimize::Bool, optimize_communications::Bool})
    @ Reactant.Compiler ~/.julia/dev/Reactant/src/Compiler.jl:3567
 [23] top-level scope
    @ ~/.julia/dev/Reactant/src/Compiler.jl:2642


codecov bot commented Oct 12, 2025

Codecov Report

❌ Patch coverage is 75.00000% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 70.41%. Comparing base (b39a1fc) to head (e8915ce).
⚠️ Report is 40 commits behind head on main.

Files with missing lines Patch % Lines
...ReactantSparseArraysExt/ReactantSparseArraysExt.jl 0.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1739      +/-   ##
==========================================
+ Coverage   68.16%   70.41%   +2.25%     
==========================================
  Files         109      113       +4     
  Lines       11779    12794    +1015     
==========================================
+ Hits         8029     9009     +980     
- Misses       3750     3785      +35     


@albertomercurio
Contributor Author

Another approach would be to define the overlayed function for all possible matrix types except sparse ones.
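A sketch of that alternative in plain Julia: instead of intercepting all of AbstractMatrix, overlay only a whitelist of types that excludes sparse matrices. route is a hypothetical stand-in for the overlay's dispatch decision, not a Reactant API.

```julia
using SparseArrays

# More specific methods win, so sparse matrices never reach the overlayed path.
route(A::StridedMatrix) = :overlayed        # dense matrices and strided views
route(A::AbstractSparseMatrix) = :fallback  # sparse: keep the custom methods
route(A::AbstractMatrix) = :fallback        # everything else: default behavior

route(rand(2, 2))          # :overlayed
route(sparse(rand(2, 2)))  # :fallback
```

The drawback, as noted earlier in the thread, is that any whitelist risks excluding types such as Triangular wrappers that would otherwise benefit from the overlay.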



Development

Successfully merging this pull request may close these issues.

Raising failure of spmv! kernel
