This repository has been archived by the owner on Nov 22, 2022. It is now read-only.
remove move_state_dict_to_gpu, which is causing cuda oom (#1367)
Summary: Pull Request resolved: #1367

I keep getting CUDA OOM in the load_best_model stage. move_state_dict_to_gpu and model.cuda() are not both needed; doing both looks like it doubles GPU memory usage, so this drops the move_state_dict_to_gpu call.

Reviewed By: anchit

Differential Revision: D21725316

fbshipit-source-id: 70b5761a25afb19da7f44a3fead37b36d0e122da
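A minimal sketch of the redundancy this commit removes. The actual load_best_model code is not shown here, so the checkpoint handling below is illustrative: `load_state_dict` copies values into the model's existing parameters, so pre-moving the checkpoint tensors to the GPU (the removed step) just creates a second on-device copy of every tensor that lingers until the state dict is garbage collected, while a single `model.to(device)` afterward already performs the one necessary host-to-device transfer.

```python
import torch
from torch import nn

# Falls back to CPU so the sketch runs anywhere; on a GPU box this is cuda.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2)
# Stand-in for a checkpoint loaded from disk (illustrative, kept on CPU).
checkpoint = {k: v.clone() for k, v in model.state_dict().items()}

# Redundant pre-fix pattern: materializes a full GPU copy of the checkpoint
# *and* a GPU copy of the model parameters at the same time:
#   gpu_state = {k: v.to(device) for k, v in checkpoint.items()}
#   model.load_state_dict(gpu_state)
#   model.to(device)

# Post-fix pattern: copy values in on CPU, then move the model once.
model.load_state_dict(checkpoint)
model.to(device)
```

Loading with `torch.load(path, map_location="cpu")` follows the same principle: keep the checkpoint on the host and let the model's own `.to(device)` do the single transfer.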