olcf/Concurrent_Kernels

To fully exploit the computational power of the GPU, a large amount of data parallelism must generally be expressed. If your problem does not possess sufficient data parallelism, a second option is to combine data parallelism with task parallelism on the GPU through the use of concurrent kernels. To facilitate task parallelism, the NVIDIA Kepler K20X features Hyper-Q, a set of 32 hardware-managed work queues. When using CUDA streams, each stream is automatically mapped onto Hyper-Q, allowing up to 32 streams to execute concurrently. The NVIDIA Multi-Process Service (MPS) allows multiple processes, such as intra-node MPI ranks, to be mapped onto Hyper-Q. This tutorial demonstrates how to take advantage of GPU concurrency on Titan through the use of Hyper-Q. The full source can be viewed or downloaded from the OLCF GitHub. Please direct any questions or comments to help@nccs.gov.
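As a minimal sketch of the stream-based approach (the kernel, array sizes, and stream count below are illustrative, not taken from the tutorial sources), independent kernels launched into separate CUDA streams become eligible to run concurrently, with Hyper-Q removing false serialization between the streams:

```cuda
#include <cstdio>

// A deliberately small kernel: a single launch exposes little parallelism,
// so several launches issued into different streams can share the GPU.
__global__ void scale_sqrt(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = sqrtf((float)i);
}

int main(void)
{
    const int num_streams = 8;      // up to 32 streams map onto Hyper-Q work queues
    const int n = 1 << 16;

    cudaStream_t streams[num_streams];
    float *d_out[num_streams];

    for (int s = 0; s < num_streams; s++) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&d_out[s], n * sizeof(float));
    }

    // Kernels launched into different streams may execute concurrently.
    for (int s = 0; s < num_streams; s++)
        scale_sqrt<<<(n + 255) / 256, 256, 0, streams[s]>>>(d_out[s], n);

    cudaDeviceSynchronize();

    for (int s = 0; s < num_streams; s++) {
        cudaStreamDestroy(streams[s]);
        cudaFree(d_out[s]);
    }
    return 0;
}
```

For the K20X, this kind of example would be compiled with something like `nvcc -arch=sm_35 streams.cu`; concurrency can then be confirmed by profiling and inspecting the kernel timeline.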

About

These samples provide compilable source code for the OLCF Concurrent Kernels tutorial
