
Add nvtx.range() context manager #42925

Closed · wants to merge 1 commit

Conversation

chrish42
Contributor

This is a small quality-of-life improvement to the NVTX Python bindings that we're using internally and that should be useful to other folks using NVTX annotations via PyTorch. (It's also my first potential PyTorch contribution.)

Instead of needing careful try/finally handling to make sure every range_push() is matched by a range_pop():

nvtx.range_push("Some event")
try:
    # Code here...
finally:
    nvtx.range_pop()

you can simply do:

with nvtx.range("Some event"):
    # Code here...

or even use it as a decorator:

class MyModel(nn.Module):

    # Other methods here...

    @nvtx.range("MyModel.forward()")
    def forward(self, *input):
        # Forward pass code here...

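For reference, here's a minimal sketch of how such a combined context manager/decorator can be built on top of contextlib.ContextDecorator. The range_push/range_pop functions below are hypothetical stand-ins that just record calls, so the sketch runs without CUDA; the real bindings live in torch.cuda.nvtx, and the merged implementation may differ.

```python
from contextlib import ContextDecorator

events = []  # records push/pop calls so the sketch runs without CUDA


def range_push(msg):
    # stand-in for torch.cuda.nvtx.range_push
    events.append(("push", msg))


def range_pop():
    # stand-in for torch.cuda.nvtx.range_pop
    events.append(("pop",))


class range(ContextDecorator):
    """Pairs range_push() with range_pop(); usable as `with` or decorator."""

    def __init__(self, msg, *args, **kwargs):
        # eager formatting; making this lazy is one of the open questions
        self.msg = msg.format(*args, **kwargs) if (args or kwargs) else msg

    def __enter__(self):
        range_push(self.msg)
        return self

    def __exit__(self, *exc):
        range_pop()
        return False  # don't swallow exceptions from the wrapped code


with range("Some event"):
    pass


@range("forward pass {}", 1)
def forward():
    return "done"


forward()
print(events)
```

Because ContextDecorator drives `__enter__`/`__exit__` around each call of a decorated function, the same class covers both usage styles shown above.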
A couple of small open questions:

  1. I also added the ability to call msg.format() inside range(). The intention is that, when nothing is listening to NVTX events, we could skip the string formatting entirely to lower the overhead in that case. If you like that idea, I can add the actual "skip string formatting if nobody is listening to events" logic; we can also leave it as is, or I can remove it if you folks don't like it. (In the first two cases, should we add the same treatment to range_push() and mark() too?) Just let me know which one it is, and I'll update the pull request.

  2. I don't think there are many places for bugs to hide in that function, but I can certainly add a quick test, if you folks want.
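To make question 1 concrete, here is a sketch of the "skip formatting when nobody is listening" idea. The `nvtx_enabled` flag is hypothetical; real bindings would query the profiler state instead of a module-level boolean.

```python
nvtx_enabled = False  # hypothetical flag; real code would ask the profiler
pushed = []           # records formatted messages that would be pushed


def lazy_range_push(msg, *args, **kwargs):
    if not nvtx_enabled:
        return  # nobody is listening: skip str.format() altogether
    pushed.append(msg.format(*args, **kwargs))  # range_push() would go here


class Expensive:
    # an object whose formatting is costly; here it raises, to prove
    # that formatting never happens while NVTX is off
    def __format__(self, spec):
        raise RuntimeError("formatted while NVTX was off")


lazy_range_push("step {}", Expensive())
print(pushed)  # → []
```

The point is that callers pass the format string and arguments separately, so the (potentially expensive) formatting cost is only paid when an NVTX listener is actually attached.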


dr-ci bot commented Aug 12, 2020

💊 CI failures summary and remediations

As of commit 352a19e (more details on the Dr. CI page):



🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

Since your merge base is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

@zhangguanheng66 added the labels "module: cuda" (Related to torch.cuda, and CUDA support in general) and "triaged" (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) on Aug 12, 2020.

chrish42 commented Sep 8, 2020

Hi. I see that the code review was approved. (Thanks!) I'm kinda new to contributing to PyTorch: is there anything else needed on my part to get this merged? I can't merge it myself ("base branch restricts merging to authorized users"), at least. Just making sure this change doesn't get lost...

@chrish42

@peterjc123 @zhangguanheng66 This is my first PyTorch contribution, so I'm not sure what a normal delay between pull request approval and merging looks like. Is one of you folks able to merge this? Just making sure this hasn't slipped through the cracks... Thanks!

facebook-github-bot left a comment:

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot

@ezyang merged this pull request in 511f89e.

@chrish42 chrish42 deleted the add-nvtx-range-ctxtmanager branch January 26, 2022 20:18
pytorchmergebot pushed a commit that referenced this pull request Mar 15, 2024
The context manager `torch.cuda.nvtx.range` has been around for about 4 years (see #42925). Unfortunately, it was never documented and as a consequence users are just unaware of it (see #121663).

Pull Request resolved: #121699
Approved by: https://github.com/janeyx99