This repository has been archived by the owner on Nov 22, 2022. It is now read-only.

remove move_state_dict_to_gpu, which is causing cuda oom #1367

Closed
wants to merge 1 commit

Conversation

@liaimi commented May 26, 2020

Summary: I keep getting CUDA OOM in the load_best_model stage. move_state_dict_to_gpu and model.cuda() are not both needed; calling both doubles GPU memory usage.

Differential Revision: D21725316


fbshipit-source-id: ac11d5374c8d943025810550ef627a8cd349b48c
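To illustrate the issue, here is a hedged sketch (not PyText's actual code; load_best_model is used here as a hypothetical signature matching the stage named in the summary). Moving the loaded state dict to GPU and then calling model.cuda() places two full copies of the weights on the device; loading the checkpoint onto CPU first and moving the model once avoids the doubling:

```python
import torch
import torch.nn as nn

def load_best_model(model: nn.Module, checkpoint_path: str) -> nn.Module:
    """Load checkpoint weights without duplicating them on the GPU.

    The OOM pattern this avoids (copy #1 from moving the state dict,
    copy #2 from model.cuda()):

        state_dict = move_state_dict_to_gpu(torch.load(path))  # copy #1
        model.load_state_dict(state_dict)
        model.cuda()                                           # copy #2
    """
    # Keep the checkpoint on CPU so it never occupies GPU memory.
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(state_dict)
    # A single .cuda() call is enough to place the parameters on the GPU.
    if torch.cuda.is_available():
        model = model.cuda()
    return model
```

The same logic runs on CPU-only machines, since the .cuda() move is guarded by torch.cuda.is_available().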
@facebook-github-bot facebook-github-bot added CLA Signed Do not delete this pull request or issue due to inactivity. fb-exported labels May 26, 2020
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D21725316

@facebook-github-bot (Contributor)

This pull request has been merged in b0a9d80.
