Labels: enhancement, module: autograd, module: optimizer, small, triaged
🚀 Feature

Add `with torch.no_grad():` to `torch.optim.swa_utils.update_bn`.
Motivation
When evaluating my model I get out-of-memory errors, because `update_bn` builds the autograd graph (saving activations for a backward pass) while running forward passes through the SWA model. It took me a long time to trace these out-of-memory errors back to adding `update_bn`. The SWA model is mostly (only?) intended for evaluation, not for training, so very few users will ever need to backprop through an SWA model. There is no reason for `update_bn` to allocate memory for a backpropagation that it never performs.
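
Until this is fixed, a minimal workaround with the same effect is to wrap the call in the context manager from user code; a sketch, where `loader` and `swa_model` are placeholders for your own data loader and SWA model:

```python
import torch

# Disables autograd graph construction for every forward pass inside the block;
# BatchNorm running statistics are buffers, so they still update as intended.
with torch.no_grad():
    torch.optim.swa_utils.update_bn(loader, swa_model)
```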
Pitch
Add `with torch.no_grad():` on the line before `model(input)` in `torch.optim.swa_utils.update_bn`, and indent `model(input)` so the forward pass runs inside the context manager.
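
A minimal sketch of the proposed change; the surrounding loop is paraphrased from `torch.optim.swa_utils.update_bn` and may differ slightly across PyTorch versions:

```python
import torch

# Inside update_bn, forward passes are run only to refresh BatchNorm
# running statistics, so no gradients are ever needed:
for input in loader:
    if isinstance(input, (list, tuple)):
        input = input[0]
    if device is not None:
        input = input.to(device)

    with torch.no_grad():  # proposed: skip saving activations for backward
        model(input)
```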
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @vincentqb