Which has less overhead: cudaEventRecord or nvtxRangePush? #3
Comments
You may find these CUDA event benchmarks useful: https://github.com/harrism/cuda_event_benchmark. In my tests on V100 / Ubuntu 18.04 / CUDA 10.2 / AMD Ryzen 7 3700, a default event record (timing enabled) has a throughput of about 400K records per second. With timing disabled it is 10x faster, but if you intend to use events for timing, go with the lower-throughput default.

The event approach works well with tools like Google Benchmark, though you may need to take extra steps to flush the L2 cache between kernels if you need to benchmark performance assuming a cold cache. You can see how we do this in RAPIDS libcudf benchmarks with this class, and an example benchmark that uses it. Note that using CUDA events for timing may be inaccurate if there are concurrent kernels running. (?)

I do think that the overhead of NVTX is nearly zero when nsys or other tools are not attached. We use it and I haven't noticed a penalty. Typically we wrap the calls in utility functions that we have the option of disabling with a preprocessor definition.
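The wrapping described above can be sketched roughly as follows. This is a minimal illustration, not the libcudf implementation: the `RANGE_PUSH`/`RANGE_POP` macros and the `DISABLE_NVTX` definition are hypothetical names, and the kernel is a placeholder. It also shows `cudaEventCreateWithFlags` with `cudaEventDisableTiming`, the cheaper event variant mentioned in the benchmarks, which is useful only when the event is for ordering rather than timing.

```cuda
#include <cuda_runtime.h>
#include <nvtx3/nvToolsExt.h>

// Hypothetical utility macros: NVTX ranges compile away entirely when
// DISABLE_NVTX is defined, so instrumented code carries zero overhead.
#ifdef DISABLE_NVTX
#define RANGE_PUSH(name)
#define RANGE_POP()
#else
#define RANGE_PUSH(name) nvtxRangePushA(name)
#define RANGE_POP()      nvtxRangePop()
#endif

// Placeholder kernel for illustration.
__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    int n = 1 << 20;
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));

    // When an event is used only for synchronization/ordering (not timing),
    // cudaEventDisableTiming makes recording it substantially cheaper.
    cudaEvent_t marker;
    cudaEventCreateWithFlags(&marker, cudaEventDisableTiming);

    RANGE_PUSH("scale kernel");
    scale<<<(n + 255) / 256, 256>>>(d, n);
    RANGE_POP();

    cudaEventRecord(marker);
    cudaEventSynchronize(marker);

    cudaEventDestroy(marker);
    cudaFree(d);
    return 0;
}
```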
I'm shocked that you showed up! Your developer blogs from the early years were really helpful and I learned a lot from them. Thank you very much!
No worries, happy to help. I think your understanding is correct.
Hello!
When I want to time a CUDA kernel, one way is to call cudaEventRecord before and after the kernel launch; the other is to call nvtxRangePush before and nvtxRangePop after the kernel. Which one is better? Which one has less additional overhead?
In the NVTX docs, I also found this statement: "The library introduces close to zero overhead if no tool is attached to the application. The overhead when a tool is attached is specific to the tool". Does "tool" here mean nvprof or nsys?
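For reference, the two approaches from the question look roughly like this. This is a minimal sketch assuming the CUDA toolkit with NVTX3 headers; the kernel and range names are illustrative. Note the key difference: with events, the application itself computes the elapsed GPU time, while NVTX ranges only annotate the timeline, and an attached tool such as nsys does the measuring.

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <nvtx3/nvToolsExt.h>

// Placeholder kernel for illustration.
__global__ void add_one(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    int n = 1 << 20;
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));

    // Approach 1: CUDA events -- the program measures GPU time itself.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    add_one<<<(n + 255) / 256, 256>>>(d, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel took %f ms\n", ms);

    // Approach 2: NVTX range -- near-zero cost unless a profiler is
    // attached; the tool (e.g. nsys) reports the timing, not the program.
    nvtxRangePushA("add_one kernel");
    add_one<<<(n + 255) / 256, 256>>>(d, n);
    nvtxRangePop();
    cudaDeviceSynchronize();

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d);
    return 0;
}
```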