
NVTX Push and Pop Context Manager? #121663

Closed

wkaisertexas opened this issue Mar 11, 2024 · 3 comments
Labels

module: cuda (Related to torch.cuda, and CUDA support in general)
topic: docs (topic category)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

wkaisertexas commented Mar 11, 2024

🚀 The feature, motivation and pitch

Could the two functions:

  • torch.cuda.nvtx.range_push("msg")
  • torch.cuda.nvtx.range_pop()

be combined into an easy-to-use context manager?

"""
from chatgpt
"""
import torch.cuda.nvtx
from contextlib import contextmanager

@contextmanager
def nvtx_context(message):
    try:
        torch.cuda.nvtx.range_push(message)
        yield
    finally:
        torch.cuda.nvtx.range_pop()

# Example usage:
with nvtx_context("my_message"):
    # Code block where NVTX range is active
    pass

Doing this would be dead simple and would make using range_push and range_pop less error-prone.
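
For context, a minimal sketch of the failure mode that manual push/pop invites (do_work is a hypothetical stand-in for user code):

import torch.cuda.nvtx

torch.cuda.nvtx.range_push("step")
do_work()  # hypothetical; if this raises, range_pop() below never runs and the range leaks
torch.cuda.nvtx.range_pop()

A context manager guarantees the pop runs even when the body raises.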

Alternatives

No response

Additional context

No response

cc @ptrblck

Aidyn-A (Collaborator) commented Mar 12, 2024

Hi @wkaisertexas,
Thanks for submitting the feature request. Fortunately, there is already a context manager that does exactly what you described:

@contextmanager
def range(msg, *args, **kwargs):
    """
    Context manager / decorator that pushes an NVTX range at the beginning
    of its scope, and pops it at the end. If extra arguments are given,
    they are passed as arguments to msg.format().

    Args:
        msg (str): message to associate with the range
    """
    range_push(msg.format(*args, **kwargs))
    try:
        yield
    finally:
        range_pop()

I am afraid, however, that it is not documented.
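
For illustration, a minimal usage sketch (the loop and train_step are hypothetical stand-ins):

import torch.cuda.nvtx

for i in range(3):
    # Extra arguments are interpolated via msg.format(), so each
    # iteration gets a distinct range name in the profiler timeline.
    with torch.cuda.nvtx.range("iteration {}", i):
        train_step()  # hypothetical work executed inside the range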

drisspg added the module: cuda, topic: docs, and triaged labels on Mar 13, 2024
pytorchmergebot pushed a commit that referenced this issue Mar 15, 2024
The context manager `torch.cuda.nvtx.range` has been around for about 4 years (see #42925). Unfortunately, it was never documented and, as a consequence, users are simply unaware of it (see #121663).

Pull Request resolved: #121699
Approved by: https://github.com/janeyx99
Aidyn-A (Collaborator) commented Mar 18, 2024

@wkaisertexas, as the torch.cuda.nvtx.range context manager is now documented at https://pytorch.org/docs/main/generated/torch.cuda.nvtx.range.html, I am closing this issue. Please feel free to comment and re-open this issue if necessary.

Aidyn-A closed this as completed on Mar 18, 2024
wkaisertexas (Author) commented

Yeah, I saw that here:

#121699
