
[RLlib] Fixed bug in restoring a gpu trained algorithm #35024

Merged

Conversation

kouroshHakha
Contributor

Why are these changes needed?

Apparently, when restoring torch optimizer states, the entries in param_groups should not be converted to tensors and moved to CUDA devices. (I don't think the state values even need to be moved to a particular device; they just have to be converted to tensors.) The bug was that when someone trained an algorithm on a GPU and then restored it for further training, again on a CUDA device, it would fail with an error complaining that the Adam optimizer's beta parameters should not be on CUDA and should be scalars.
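
For reference, the failing scenario looks roughly like this (a sketch only; the PPO config and environment below are illustrative, not taken from the linked issue, and a CUDA device is required):

```python
from ray.rllib.algorithms.algorithm import Algorithm
from ray.rllib.algorithms.ppo import PPOConfig

# Train on a GPU and write a checkpoint.
config = PPOConfig().environment("CartPole-v1").resources(num_gpus=1)
algo = config.build()
algo.train()
checkpoint = algo.save()
algo.stop()

# Restore on a CUDA device and keep training.
restored = Algorithm.from_checkpoint(checkpoint)
restored.train()  # before this fix: RuntimeError about scalars expected on CPU, got cuda:0
```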

This PR fixes that by separating the restoration of param_groups from that of the state keys in the state_dict of torch optimizers. It also adds unit tests to make sure this edge case stays covered.
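
A minimal sketch of the idea (not the actual RLlib code; the helper name below is made up): only the per-parameter state values are converted to tensors, while param_groups are passed through untouched.

```python
import torch
import torch.nn as nn


def restore_optimizer_state(optimizer, saved_state_dict):
    # Convert only the per-parameter "state" values (exp_avg, exp_avg_sq, ...)
    # to tensors; leave anything that is already a tensor alone.
    state = {
        param_id: {
            k: v if isinstance(v, torch.Tensor) else torch.as_tensor(v)
            for k, v in param_state.items()
        }
        for param_id, param_state in saved_state_dict["state"].items()
    }
    # param_groups (lr, betas, eps, ...) stay plain Python scalars; turning
    # them into CUDA tensors is what caused the reported RuntimeError.
    optimizer.load_state_dict(
        {"state": state, "param_groups": saved_state_dict["param_groups"]}
    )


# Round-trip smoke test.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(4, 2).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
model(torch.randn(8, 4, device=device)).sum().backward()
opt.step()
restore_optimizer_state(opt, opt.state_dict())
```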

Related issue number

Closes #34159

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: Kourosh Hakhamaneshi <kourosh@anyscale.com>
@avnishn
Member

avnishn commented May 7, 2023

I'll probably need to make this change myself. Adding a TODO.

@kouroshHakha kouroshHakha merged commit 67706f9 into ray-project:master May 8, 2023
architkulkarni pushed a commit to architkulkarni/ray that referenced this pull request May 16, 2023
Successfully merging this pull request may close these issues.

[RLLib] RuntimeError: Expected scalars to be on CPU, got cuda:0 instead