
Add stream support to CUDA and HIP #1236

Merged: 23 commits merged from stream_support into develop on Mar 10, 2023

Conversation

@upsj upsj (Member) commented Dec 11, 2022

TODO:

  • test this behavior, see cuDF tests for how to do it
  • add stream to sparselib/blas handles
  • add RAII stream handle types for CUDA and HIP (see the sketch below)

Closes #1206
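
For illustration, a minimal sketch of what an RAII stream handle for CUDA could look like; names and interface are illustrative, not Ginkgo's actual implementation:

```cpp
// Illustrative RAII wrapper around a CUDA stream; Ginkgo's actual handle type
// may differ in name and interface.
#include <cuda_runtime.h>

#include <stdexcept>
#include <utility>

class cuda_stream_handle {
public:
    cuda_stream_handle()
    {
        if (cudaStreamCreate(&stream_) != cudaSuccess) {
            throw std::runtime_error("cudaStreamCreate failed");
        }
    }

    ~cuda_stream_handle()
    {
        // Destructors must not throw, so the return value is ignored here.
        if (stream_) {
            cudaStreamDestroy(stream_);
        }
    }

    // The handle owns a unique device resource, so copying is disabled ...
    cuda_stream_handle(const cuda_stream_handle&) = delete;
    cuda_stream_handle& operator=(const cuda_stream_handle&) = delete;

    // ... but ownership can be transferred.
    cuda_stream_handle(cuda_stream_handle&& other) noexcept
        : stream_{std::exchange(other.stream_, nullptr)}
    {}

    cuda_stream_handle& operator=(cuda_stream_handle&& other) noexcept
    {
        std::swap(stream_, other.stream_);
        return *this;
    }

    // Raw handle for passing to kernel launches and library calls.
    cudaStream_t get() const { return stream_; }

private:
    cudaStream_t stream_{};
};
```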

@upsj upsj added the 1:ST:WIP (work in progress, not ready for review) label Dec 11, 2022
@upsj upsj self-assigned this Dec 11, 2022
@ginkgo-bot ginkgo-bot added the mod:core, mod:cuda, mod:hip, reg:testing, type:factorization, type:matrix-format, type:multigrid, type:preconditioner, type:solver and type:stopping-criteria labels Dec 11, 2022
@upsj upsj added the 1:ST:ready-for-review label and removed the 1:ST:WIP label Dec 13, 2022
cuda/base/curand_bindings.hpp: review thread (outdated, resolved)
@upsj upsj requested a review from a team December 15, 2022 10:44
@MarcelKoch MarcelKoch self-requested a review January 18, 2023 15:48
@MarcelKoch MarcelKoch (Member) left a comment

What happens with a user-provided stream if `device_reset = true` in a CUDA/HIP executor? From the CUDA docs, it seems like using the stream again afterwards is undefined behavior.
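
For illustration, the problematic sequence in plain CUDA terms (a stand-alone sketch, not Ginkgo code):

```cpp
#include <cuda_runtime.h>

int main()
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);      // user-provided stream

    // ... hand the stream to an executor and launch work on it ...

    cudaDeviceReset();              // what device_reset = true triggers on teardown:
                                    // all device state, including the stream, is destroyed

    // Any further use of the old handle is undefined behavior:
    cudaStreamSynchronize(stream);  // invalid handle
    return 0;
}
```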

@upsj upsj (Member, Author) commented Jan 20, 2023

Yes. To be honest, I think the device_reset setting exceeds the scope of a single executor and should be removed in the long term. I like to think of devices like the MPI environment: they should only be initialized and finalized globally. Streams and devices should usually have a wider scope than executors, unless they are explicitly created internally.
Similarly, if you have multiple executors on a device and one with device_reset enabled gets destroyed, all the other executors' memory gets freed as well, crashing the whole program.

@MarcelKoch MarcelKoch (Member)

That sounds like a reasonable approach. Perhaps we could deprecate the constructor with device_reset in another PR.
Right now, maybe we could issue a warning if a user sets device_reset together with a custom stream, or simply not offer a constructor that takes both a device_reset and a stream parameter.
Sidenote: our CUDA/HIP executors count the number of active executors on a specific device and only perform the reset if there are no other active executors. So that case should be handled 'correctly', but it might ignore device_reset even if it was set by a user.

@upsj upsj (Member, Author) commented Jan 20, 2023

Ah, thanks for the reminder. That's actually slightly worse IMO: this way, the order of destruction determines whether DeviceReset gets called.

@MarcelKoch MarcelKoch (Member) left a comment

Reviewing this is a bit hard because there are so many small changes. I've done some regex searches, and it seems that there are no kernel launches without the stream parameter anymore (unless we are not using the <<<...>>> syntax somewhere, but I don't think so). The proposed testing should catch any cases we missed.
Other than that, the interface changes on the executor side look good. Perhaps we could add a constructor so that we have one constructor with the device_reset parameter but without the stream, and vice versa.
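
A hypothetical sketch of such an overload scheme (illustrative declarations only, not Ginkgo's actual interface):

```cpp
// Illustrative only: two factory overloads that make device_reset and a
// user-provided stream mutually exclusive at the interface level.
#include <memory>

struct CUstream_st;  // opaque CUDA stream type; cudaStream_t is CUstream_st*

class Executor;

class CudaExecutor {
public:
    // Overload with device_reset: the executor works on the default stream
    // and may reset the device on destruction.
    static std::shared_ptr<CudaExecutor> create(
        int device_id, std::shared_ptr<Executor> master, bool device_reset);

    // Overload with a user-provided stream: the caller owns the stream, so
    // the executor never resets the device behind the caller's back.
    static std::shared_ptr<CudaExecutor> create(
        int device_id, std::shared_ptr<Executor> master, CUstream_st* stream);
};
```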

@upsj upsj requested a review from yhmtsai February 6, 2023 13:25
@upsj upsj force-pushed the stream_support branch 2 times, most recently from a6cd063 to 0614150 Compare February 11, 2023 21:57
@upsj upsj (Member, Author) commented Feb 11, 2023

format!

@upsj upsj added the 1:ST:ready-to-merge label and removed the 1:ST:ready-for-review label Mar 8, 2023
@pratikvn pratikvn (Member) left a comment

I think there are some changes from other PRs here.

@sonarcloud sonarcloud bot commented Mar 9, 2023

SonarCloud Quality Gate: failed

- Bugs: 0 (rating A)
- Vulnerabilities: 0 (rating A)
- Security Hotspots: 0 (rating A)
- Code Smells: 26 (rating A)
- Coverage: 57.1%
- Duplication: 20.8%

@codecov codecov bot commented Mar 9, 2023

Codecov Report

Patch coverage: 83.07% and project coverage change: -0.43% ⚠️

Comparison is base (5ac0478) 91.52% compared to head (24a0953) 91.10%.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1236      +/-   ##
===========================================
- Coverage    91.52%   91.10%   -0.43%     
===========================================
  Files          567      567              
  Lines        48204    48268      +64     
===========================================
- Hits         44121    43973     -148     
- Misses        4083     4295     +212     
| Impacted Files | Coverage Δ |
|---|---|
| include/ginkgo/core/log/profiler_hook.hpp | 100.00% <ø> (ø) |
| core/device_hooks/cuda_hooks.cpp | 38.63% <20.00%> (-3.87%) ⬇️ |
| core/device_hooks/hip_hooks.cpp | 40.47% <20.00%> (-4.27%) ⬇️ |
| core/solver/multigrid.cpp | 88.28% <50.00%> (+0.04%) ⬆️ |
| test/utils/mpi/executor.hpp | 90.00% <85.71%> (-10.00%) ⬇️ |
| core/log/profiler_hook.cpp | 73.91% <100.00%> (+2.87%) ⬆️ |
| core/solver/ir.cpp | 87.09% <100.00%> (ø) |
| core/test/log/profiler_hook.cpp | 95.74% <100.00%> (+0.62%) ⬆️ |
| include/ginkgo/core/base/executor.hpp | 75.59% <100.00%> (ø) |
| test/utils/executor.hpp | 100.00% <100.00%> (ø) |

... and 5 files with indirect coverage changes


@upsj upsj merged commit c8d4be5 into develop Mar 10, 2023
@upsj upsj deleted the stream_support branch March 10, 2023 04:25
tcojean added a commit that referenced this pull request Jun 16, 2023
Release 1.6.0 of Ginkgo.

The Ginkgo team is proud to announce the new Ginkgo minor release 1.6.0. This release brings new features such as:
- Several building blocks for GPU-resident sparse direct solvers like symbolic
  and numerical LU and Cholesky factorization, ...,
- A distributed Schwarz preconditioner,
- New FGMRES and GCR solvers,
- Distributed benchmarks for the SpMV operation, solvers, ...
- Support for non-default streams in the CUDA and HIP backends,
- Mixed precision support for the CSR SpMV,
- A new profiling logger which integrates with NVTX, ROCTX, TAU and VTune to
  provide internal Ginkgo knowledge to most HPC profilers!

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel OneAPI 2021.3+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version Support Changes
+ ROCm 4.0+ -> 4.5+ after [#1303](#1303)
+ Removed Cygwin pipeline and support [#1283](#1283)

### Interface Changes
+ Due to internal changes, `ConcreteExecutor::run` will now always throw if the corresponding module for the `ConcreteExecutor` is not built [#1234](#1234)
+ The constructor of `experimental::distributed::Vector` was changed to only accept local vectors as `std::unique_ptr` [#1284](#1284)
+ The default parameters for the `solver::MultiGrid` were improved. In particular, the smoother defaults to one iteration of `Ir` with `Jacobi` preconditioner, and the coarse grid solver uses the new direct solver with LU factorization. [#1291](#1291) [#1327](#1327)
+ The `iteration_complete` event gained a more expressive overload with additional parameters, the old overloads were deprecated. [#1288](#1288) [#1327](#1327)

### Deprecations
+ Deprecated less expressive `iteration_complete` event. Users are advised to now implement the function `void iteration_complete(const LinOp* solver, const LinOp* b, const LinOp* x, const size_type& it, const LinOp* r, const LinOp* tau, const LinOp* implicit_tau_sq, const array<stopping_status>* status, bool stopped)` [#1288](#1288)
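
For illustration, a custom logger implementing the new event could look roughly like this; the `on_` prefix and the `const` qualification of the hook are assumptions about the `Logger` interface, so check the released headers for the exact override:

```cpp
#include <iostream>

#include <ginkgo/ginkgo.hpp>

// Sketch of a logger using the new, more expressive event; the parameter
// types follow the signature quoted above.
struct iteration_printer : gko::log::Logger {
    void on_iteration_complete(const gko::LinOp* solver, const gko::LinOp* b,
                               const gko::LinOp* x, const gko::size_type& it,
                               const gko::LinOp* r, const gko::LinOp* tau,
                               const gko::LinOp* implicit_tau_sq,
                               const gko::array<gko::stopping_status>* status,
                               bool stopped) const override
    {
        // Only the iteration count and the stopped flag are used here.
        std::cout << "iteration " << it << (stopped ? " (stopped)\n" : "\n");
    }
};
```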

### Added Features
+ A distributed Schwarz preconditioner. [#1248](#1248)
+ A GCR solver [#1239](#1239)
+ Flexible Gmres solver [#1244](#1244)
+ Enable Gmres solver for distributed matrices and vectors [#1201](#1201)
+ An example that uses Kokkos to assemble the system matrix [#1216](#1216)
+ A symbolic LU factorization allowing the `gko::experimental::factorization::Lu` and `gko::experimental::solver::Direct` classes to be used for matrices with non-symmetric sparsity pattern [#1210](#1210)
+ A numerical Cholesky factorization [#1215](#1215)
+ Symbolic factorizations in host-side operations are now wrapped in a host-side `Operation` to make their execution visible to loggers. This means that profiling loggers and benchmarks are no longer missing a separate entry for their runtime [#1232](#1232)
+ Symbolic factorization benchmark [#1302](#1302)
+ The `ProfilerHook` logger allows annotating the Ginkgo execution (apply, operations, ...) for profiling frameworks like NVTX, ROCTX and TAU. [#1055](#1055)
+ `ProfilerHook::created_(nested_)summary` allows the generation of a lightweight runtime profile over all Ginkgo functions written to a user-defined stream [#1270](#1270) for both host and device timing functionality [#1313](#1313)
+ It is now possible to enable host buffers for MPI communications at runtime even if the compile option `GINKGO_FORCE_GPU_AWARE_MPI` is set. [#1228](#1228)
+ A stencil matrices generator (5-pt, 7-pt, 9-pt, and 27-pt) for benchmarks [#1204](#1204)
+ Distributed benchmarks (multi-vector blas, SpMV, solver) [#1204](#1204)
+ Benchmarks for CSR sorting and lookup [#1219](#1219)
+ A timer for MPI benchmarks that reports the longest time [#1217](#1217)
+ A `timer_method=min|max|average|median` flag for benchmark timing summary [#1294](#1294)
+ Support for non-default streams in CUDA and HIP executors [#1236](#1236) (see the usage sketch after this list)
+ METIS integration for nested dissection reordering [#1296](#1296)
+ SuiteSparse AMD integration for fillin-reducing reordering [#1328](#1328)
+ Csr mixed-precision SpMV support [#1319](#1319)
+ A `with_loggers` function for all `Factory` parameters [#1337](#1337)
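
A rough usage sketch for the non-default stream support listed above; the exact `CudaExecutor::create` overload shown here is an assumption, so check the 1.6.0 headers for the real signature:

```cpp
#include <cuda_runtime.h>

#include <ginkgo/ginkgo.hpp>

int main()
{
    // The stream is created and owned by the application, not by Ginkgo.
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    auto ref = gko::ReferenceExecutor::create();
    // Assumed overload taking the raw stream handle; the released interface
    // may order or name the parameters differently.
    auto exec = gko::CudaExecutor::create(0, ref, stream);

    // ... create matrices and solvers on exec; their kernels run on `stream` ...

    exec.reset();               // destroy the executor before its stream
    cudaStreamDestroy(stream);  // the caller cleans up the stream it owns
    return 0;
}
```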

### Improvements
+ Improve naming of kernel operations for loggers [#1277](#1277)
+ Annotate solver iterations in `ProfilerHook` [#1290](#1290)
+ Allow using the profiler hooks and inline input strings in benchmarks [#1342](#1342)
+ Allow passing smart pointers in place of raw pointers to most matrix functions. This means that things like `vec->compute_norm2(x.get())` or `vec->compute_norm2(lend(x))` can be simplified to `vec->compute_norm2(x)` [#1279](#1279) [#1261](#1261)
+ Catch overflows in prefix sum operations, which makes Ginkgo's operations much less likely to crash. This also improves the performance of the prefix sum kernel [#1303](#1303)
+ Make the installed GinkgoConfig.cmake file relocatable and follow more best practices [#1325](#1325)

### Fixes
+ Fix OpenMPI version check [#1200](#1200)
+ Fix the MPI C++ type binding by using the C binding [#1306](#1306)
+ Fix runtime failures for one-sided MPI wrapper functions observed on some OpenMPI versions [#1249](#1249)
+ Disable thread pinning with GPU executors due to poor performance [#1230](#1230)
+ Fix hwloc version detection [#1266](#1266)
+ Fix PAPI detection in non-implicit include directories [#1268](#1268)
+ Fix PAPI support for newer PAPI versions: [#1321](#1321)
+ Fix pkg-config file generation for library paths outside prefix [#1271](#1271)
+ Fix various build failures with ROCm 5.4, CUDA 12, and OneAPI 6 [#1214](#1214), [#1235](#1235), [#1251](#1251)
+ Fix incorrect read for skew-symmetric MatrixMarket files with explicit diagonal entries [#1272](#1272)
+ Fix handling of missing diagonal entries in symbolic factorizations [#1263](#1263)
+ Fix segmentation fault in benchmark matrix construction [#1299](#1299)
+ Fix the stencil matrix creation for benchmarking [#1305](#1305)
+ Fix the additional residual check in IR [#1307](#1307)
+ Fix the cuSPARSE CSR SpMM issue for single strided vectors with CUDA >= 11.6 [#1322](#1322) [#1331](#1331)
+ Fix Isai generation for large sparsity powers [#1327](#1327)
+ Fix Ginkgo compilation and test with NVHPC >= 22.7 [#1331](#1331)
+ Fix Ginkgo compilation of 32 bit binaries with MSVC [#1349](#1349)