
[Benchmark] Fix RB benchmarks #1760

Merged: 5 commits merged into main on Dec 22, 2023

Conversation

@vmoens (Contributor) commented Dec 22, 2023

cc @harnvo

I changed the names so that we can have a fresh look at the benchmarks on the benchmark tool.
I will try to remove the old traces that are wrong (need to look at how I can modify the xml files on gh-pages)

pytorch-bot (bot) commented Dec 22, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/1760

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (7 Unrelated Failures)

As of commit 9e2ee73 with merge base 6d217c6:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Dec 22, 2023
@vmoens linked an issue on Dec 22, 2023 that may be closed by this pull request
@vmoens added the bug and Benchmarks labels on Dec 22, 2023
@vmoens marked this pull request as ready for review on December 22, 2023 at 07:58
@harnvo commented Dec 22, 2023

Hi, just a suggestion. The partial fix proposed in my issue was only for easy illustration and might not be appropriate to merge as-is.

Somehow pytest no longer recognizes the class name after partial(TensorDictPrioritizedReplayBuffer, alpha=1, beta=0.9). (In practice you would see pb6, pb7, ... or similar as the class name.) This adds confusion when benchmarking, especially if you later add more tests with partial.

This is because partial does not preserve the __name__ of the wrapped class, and I would suggest solving this issue by doing the following:

_TensorDictPrioritizedReplayBuffer = partial(TensorDictPrioritizedReplayBuffer, alpha=1, beta=0.9)
# preserve the name of the class even after partial
_TensorDictPrioritizedReplayBuffer.__name__ = TensorDictPrioritizedReplayBuffer.__name__

and then, within parametrize, replace partial(TensorDictPrioritizedReplayBuffer, alpha=1, beta=0.9) with _TensorDictPrioritizedReplayBuffer.
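
For reference, here is a minimal sketch of how the renamed partial plugs into pytest.mark.parametrize so that the generated test IDs keep the readable class name. The test name test_rb_sample, its body, and the exact import paths below are illustrative assumptions, not the benchmark file's actual contents:

from functools import partial

import pytest
from torchrl.data import (
    TensorDictPrioritizedReplayBuffer,
    TensorDictReplayBuffer,
)

# Bake the prioritization hyperparameters into a partial, then restore
# __name__ so pytest derives a readable test ID from it rather than an
# opaque positional one such as "rb_type6".
_TensorDictPrioritizedReplayBuffer = partial(
    TensorDictPrioritizedReplayBuffer, alpha=1, beta=0.9
)
_TensorDictPrioritizedReplayBuffer.__name__ = (
    TensorDictPrioritizedReplayBuffer.__name__
)


@pytest.mark.parametrize(
    "rb_type", [TensorDictReplayBuffer, _TensorDictPrioritizedReplayBuffer]
)
def test_rb_sample(benchmark, rb_type):
    ...  # construct a buffer from rb_type and benchmark its sampling path

This works because pytest builds parametrize IDs from an object's __name__ when it is a string, and functools.partial objects accept attribute assignment.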

@vmoens (Contributor, Author) commented Dec 22, 2023

@harnvo I edited following your suggestion; it would mean a lot if you could submit your review!

@@ -17,6 +18,12 @@
)
from torchrl.data.replay_buffers import RandomSampler, SamplerWithoutReplacement

_TensorDictPrioritizedReplayBuffer = partial(

Review comment:

typo: functools.partial

@harnvo left a comment:
lgtm

populated=True,
size=size,
)()
benchmark(sample, rb)


def infinite_iter(obj):

Review comment:

It seems there is an infinite loop here, and test_rb_iterate cannot finish the 4th test (possibly due to the circular sampling in SamplerWithoutReplacement).

The old iterate works fine, and I think you should change this back.
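
For illustration, a helper of roughly the following shape never raises StopIteration, which would explain the hang; this is a hypothetical sketch, not necessarily the actual infinite_iter in the PR:

def infinite_iter(obj):
    # Restarting iteration over the source whenever it is exhausted means
    # StopIteration never propagates, so a consumer that waits for
    # exhaustion loops forever.
    while True:
        yield from obj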

@vmoens (Contributor, Author) replied:

Yep, I wanted to avoid the cost of entering __iter__ but it's still better than an infinite loop!

Review reply:

In this case, the following would work:

def iterate(rb_iter):
    try:
        next(rb_iter)
    except StopIteration:
        pass

and inside test_rb_iterate:

benchmark(iterate, iter(rb))

@vmoens (Contributor, Author) replied:

The problem is that I was calling iter on an iter, which doesn't solve the problem that the first iter was empty. I will push a fix in a sec.
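
One possible shape for such a fix, shown here as a hypothetical sketch rather than the change that was actually pushed, is to build a fresh iterator inside the benchmarked callable so that no benchmark round ever reuses an exhausted one:

def iterate(rb):
    # Create a fresh iterator on every call and consume a single batch;
    # an already-exhausted or empty buffer then yields nothing instead of
    # hanging on a spent iterator.
    for _ in rb:
        break

The call site would then become benchmark(iterate, rb) instead of benchmark(iterate, iter(rb)).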

@harnvo left a comment:

Works on my PC.

@vmoens (Contributor, Author) commented Dec 22, 2023

@harnvo it should run now!

@vmoens merged commit 64434df into main on Dec 22, 2023 (56 of 63 checks passed)
@vmoens deleted the fix-bench branch on December 22, 2023 at 13:09
Labels: Benchmarks (rl/benchmark changes), bug (Something isn't working), CLA Signed

Successfully merging this pull request may close the following issue: [BUG] Wrong Benchmark on Replaybuffer.