
expose sparse mv/mm algo selection #1201

Merged: 16 commits into JuliaGPU:master from roger/generalize-sparse-algo, May 9, 2022

Conversation

@Roger-luo Roger-luo (Contributor) commented Oct 14, 2021

This is a simple change to expose the algorithm parameter at the mv/mm level.

edit: I'm not sure whether we should check that the selected algorithm matches the input, e.g. some algorithms only work on CSR with column- or row-major layouts.
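For illustration, a minimal sketch of what picking an algorithm could look like once the parameter is exposed. This is an assumption, not the merged code: the positional arguments mirror the existing generic `mv!` wrapper, the `algo` selector is shown as a keyword but might end up as a trailing positional argument, and the enum name comes from the underlying cusparseSpMV generic API.

```julia
using CUDA, CUDA.CUSPARSE, SparseArrays

# sketch only: build a CSR matrix on the GPU and request a specific SpMV algorithm
A = CuSparseMatrixCSR(sprand(Float32, 128, 64, 0.1))
x = CUDA.rand(Float32, 64)
y = CUDA.zeros(Float32, 128)

# y = 1*A*x + 0*y, explicitly asking for the CSR-specific SpMV algorithm 2
# (hypothetical call shape; the merged signature may differ)
CUSPARSE.mv!('N', one(Float32), A, x, zero(Float32), y, 'O',
             algo=CUSPARSE.CUSPARSE_SPMV_CSR_ALG2)
```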

@maleadt maleadt added the "needs tests" (Tests are requested.) and "cuda libraries" (Stuff about CUDA library wrappers.) labels on Oct 15, 2021
@Roger-luo (Contributor, Author)

I thought the current tests already cover this, though? Or should we be testing the other algorithms?

@maleadt maleadt (Member) commented Oct 15, 2021

> I thought the current tests already cover this, though?

How? Didn't you introduce the algo keyword here?
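For reference, a minimal sketch of a test that would exercise the new selector: run the same product once per algorithm and compare against the CPU result. It assumes the cusparseSpMVAlg_t enum values are exposed under CUDA.CUSPARSE and that `mv!` accepts `algo` as shown, both of which may differ from the merged code.

```julia
using CUDA, CUDA.CUSPARSE, SparseArrays, Test

A  = sprand(Float32, 64, 64, 0.1)
x  = rand(Float32, 64)
dA = CuSparseMatrixCSR(A)
dx = CuArray(x)

# same product with each CSR SpMV algorithm, checked against the CPU result
for alg in (CUSPARSE.CUSPARSE_SPMV_ALG_DEFAULT,
            CUSPARSE.CUSPARSE_SPMV_CSR_ALG1,
            CUSPARSE.CUSPARSE_SPMV_CSR_ALG2)
    dy = CUDA.zeros(Float32, 64)
    CUSPARSE.mv!('N', one(Float32), dA, dx, zero(Float32), dy, 'O', algo=alg)
    @test Array(dy) ≈ A * x
end
```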

(Resolved review threads on test/cusparse/generic.jl and lib/cusparse/generic.jl; comments now outdated.)
@Roger-luo (Contributor, Author)

Hmm, I'm a bit confused about why the test is failing on CUDA 11.1. I've filtered the tests to not run below CUDA 11.2, but somehow they still get executed on those versions. What would be the right way of doing this?

@maleadt maleadt (Member) commented Oct 25, 2021

CUDA.version() returns the driver version. You need to check against CUSPARSE.version() instead, which means using the library version: look up which CUSPARSE release corresponds to CUDA toolkit 11.2 -- it's in the release notes, or you can check yourself by setting JULIA_CUDA_VERSION=11.2.
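A sketch of gating the tests on the library version rather than the driver version; the v"11.4" cutoff below is an assumption about which CUSPARSE release was bundled with toolkit 11.2 and should be verified against the release notes.

```julia
using Test, CUDA.CUSPARSE

# gate on the CUSPARSE library version, not on CUDA.version() (the driver)
if CUSPARSE.version() >= v"11.4"   # assumed CUSPARSE release shipped with CUDA toolkit 11.2
    @testset "mv!/mm! algorithm selection" begin
        # ... tests that pass an explicit algo ...
    end
end
```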

(Resolved review thread on test/cusparse/generic.jl; comment now outdated.)
@maleadt maleadt (Member) commented Oct 26, 2021

Please rebase on master so that the Windows tests are no longer required to pass.

@codecov codecov bot commented May 4, 2022

Codecov Report

Merging #1201 (3bad02d) into master (07e8bed) will decrease coverage by 0.74%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master    #1201      +/-   ##
==========================================
- Coverage   77.36%   76.62%   -0.75%     
==========================================
  Files         120      120              
  Lines        9274     9273       -1     
==========================================
- Hits         7175     7105      -70     
- Misses       2099     2168      +69     
| Impacted Files | Coverage Δ |
|---|---|
| lib/cusparse/generic.jl | 93.06% <ø> (ø) |
| lib/cutensor/error.jl | 27.27% <0.00%> (-64.40%) ⬇️ |
| lib/cudnn/CUDNN.jl | 37.50% <0.00%> (-35.94%) ⬇️ |
| lib/cublas/CUBLAS.jl | 50.00% <0.00%> (-25.44%) ⬇️ |
| src/utilities.jl | 68.91% <0.00%> (-4.06%) ⬇️ |
| lib/cudadrv/CUDAdrv.jl | 51.66% <0.00%> (-3.34%) ⬇️ |
| lib/cudadrv/module/linker.jl | 68.75% <0.00%> (-3.13%) ⬇️ |
| lib/cudadrv/memory.jl | 78.59% <0.00%> (-1.01%) ⬇️ |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@maleadt maleadt removed the "needs tests" (Tests are requested.) label on May 9, 2022
@maleadt maleadt (Member) commented May 9, 2022

LGTM, thanks!

@maleadt maleadt merged commit 2987086 into JuliaGPU:master May 9, 2022
@Roger-luo Roger-luo deleted the roger/generalize-sparse-algo branch May 9, 2022 20:21
simonbyrne pushed a commit to simonbyrne/CUDA.jl that referenced this pull request Nov 13, 2023
Labels: cuda libraries (Stuff about CUDA library wrappers.)

2 participants