
Among cudaEventRecord and nvtxRangePush, whose overhead is smaller? #3

Closed
gaoteng-git opened this issue Jun 30, 2020 · 3 comments

Comments

@gaoteng-git

gaoteng-git commented Jun 30, 2020

Hello!
When I want to time a CUDA kernel, one way is to use cudaEventRecord before and after the kernel launch; the other is to use nvtxRangePush before the kernel and nvtxRangePop after it. Which one is better? Which one adds less overhead?
In the NVTX docs, I also found the words "The library introduces close to zero overhead if no tool is attached to the application. The overhead when a tool is attached is specific to the tool". Does "tool" here mean nvprof or nsys?
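For concreteness, the two approaches look roughly like this. This is an untested sketch; the kernel name is made up, and note that the two do different things: events measure GPU-side elapsed time directly, while NVTX only names a range that a profiler (if attached) attributes time to. Compile with `nvcc` and link `-lnvToolsExt`.

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <nvToolsExt.h>  // NVTX v2 header; NVTX v3 uses <nvtx3/nvToolsExt.h>

__global__ void my_kernel() {}  // hypothetical kernel under test

int main() {
  // Approach 1: CUDA events — the elapsed time is available in the program itself.
  cudaEvent_t start, stop;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);
  cudaEventRecord(start);
  my_kernel<<<1, 1>>>();
  cudaEventRecord(stop);
  cudaEventSynchronize(stop);
  float ms = 0.0f;
  cudaEventElapsedTime(&ms, start, stop);
  printf("kernel took %f ms\n", ms);
  cudaEventDestroy(start);
  cudaEventDestroy(stop);

  // Approach 2: NVTX range — no timing by itself; a tool such as nsys or
  // nvprof attributes time to the named range when it is attached.
  nvtxRangePushA("my_kernel launch");
  my_kernel<<<1, 1>>>();
  cudaDeviceSynchronize();
  nvtxRangePop();
  return 0;
}
```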

@harrism

harrism commented Aug 26, 2020

You may find these CUDA event benchmarks useful: https://github.com/harrism/cuda_event_benchmark. In my tests on V100 / Ubuntu 18.04 / CUDA 10.2 / AMD Ryzen 7 3700, a default event record (timing enabled) has a throughput of about 400K records per second. With timing disabled it is 10x faster, but if you intend to use events for timing, you need the lower-throughput default.
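The "timing disabled" case above is just a creation flag; a minimal sketch of the difference (note that an event created with `cudaEventDisableTiming` cannot be passed to `cudaEventElapsedTime` — it is only usable for ordering and synchronization):

```cuda
#include <cuda_runtime.h>

int main() {
  // Default event: timing enabled, usable with cudaEventElapsedTime.
  cudaEvent_t timed;
  cudaEventCreate(&timed);

  // Timing disabled: much faster to record, but only good for
  // stream ordering and synchronization (e.g. cudaStreamWaitEvent).
  cudaEvent_t untimed;
  cudaEventCreateWithFlags(&untimed, cudaEventDisableTiming);

  cudaEventDestroy(timed);
  cudaEventDestroy(untimed);
  return 0;
}
```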

The event approach works well with tools like Google Benchmark; however, you may need to take extra steps to flush the L2 cache between kernels if you need to benchmark performance assuming a cold cache. You can see how we do this in RAPIDS libcudf benchmarks with this class, and an example benchmark that uses it:
https://github.com/rapidsai/cudf/blob/f78f80e94c74c08fface696cfd7e03881b9b0380/cpp/benchmarks/transpose/transpose_benchmark.cu#L46-L49
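The basic idea behind such a flush is to overwrite a device buffer at least as large as the L2 cache, so that no benchmark data survives in it. A hypothetical sketch (the function name is made up; this is not the libcudf implementation, just the general technique):

```cuda
#include <cuda_runtime.h>

// Hypothetical helper: evict benchmark data from L2 between iterations
// by clearing a scratch buffer the size of the device's L2 cache.
void flush_l2_cache(cudaStream_t stream) {
  int dev = 0;
  cudaGetDevice(&dev);
  int l2_size = 0;
  cudaDeviceGetAttribute(&l2_size, cudaDevAttrL2CacheSize, dev);
  if (l2_size > 0) {
    void* scratch = nullptr;
    cudaMalloc(&scratch, l2_size);
    cudaMemsetAsync(scratch, 0, l2_size, stream);  // touches every L2 line
    cudaStreamSynchronize(stream);
    cudaFree(scratch);
  }
}
```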

Note that using CUDA events for timing may be inaccurate if there are concurrent kernels running (I'm not certain about this).

I do think that the overhead of NVTX is nearly zero when nsys or other tools are not attached. We use it and I haven't noticed a penalty. Typically we wrap the calls in utility functions that we have the option of disabling with a preprocessor definition.
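A hypothetical version of such a wrapper (the macro and define names here are made up, not the ones we actually use): when the disabling define is set, the annotations compile away to nothing.

```cuda
#include <nvToolsExt.h>  // NVTX v2 header; NVTX v3 uses <nvtx3/nvToolsExt.h>

// Define MYLIB_DISABLE_NVTX at build time to compile the ranges away.
#ifndef MYLIB_DISABLE_NVTX
#define MYLIB_RANGE_PUSH(name) nvtxRangePushA(name)
#define MYLIB_RANGE_POP() nvtxRangePop()
#else
#define MYLIB_RANGE_PUSH(name) ((void)0)
#define MYLIB_RANGE_POP() ((void)0)
#endif

void my_function() {
  MYLIB_RANGE_PUSH("my_function");
  // ... work to be attributed to this range by an attached profiler ...
  MYLIB_RANGE_POP();
}
```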

@gaoteng-git
Author


I'm amazed that you showed up! Your developer blog posts from earlier years are really helpful and I learned a lot from them. Thank you very much!
When concurrent kernels are running, the elapsed time that CUDA events record for the same kernel code may vary a lot between runs, because the other concurrent kernels compete for the same GPU compute resources. Is my understanding right?
Thanks a lot again for your help!

@harrism

harrism commented Aug 28, 2020

No worries, happy to help. I think your understanding is correct.
