Add torch.no_grad() to update_bn #52055

@MortenHannemose

Description

🚀 Feature

Add a `torch.no_grad()` context to `torch.optim.swa_utils.update_bn`.

Motivation

When evaluating my model, I get out-of-memory errors because `update_bn` allocates memory for backpropagation through the SWA model. It took me a long time to trace the out-of-memory errors back to adding `update_bn`.
The SWA model is mostly (if not only) intended for evaluation rather than training, so very few users will ever need to backpropagate through it. There is no reason for `update_bn` to allocate memory for a backward pass that it never performs.
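For context, the flow that triggers this looks roughly like the sketch below; the model and loader are minimal placeholders standing in for a real training setup, not code from the report.

```python
import torch
from torch import nn
from torch.optim.swa_utils import AveragedModel, update_bn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and data; in practice these come from the user's training setup.
model = nn.Sequential(nn.Linear(10, 10), nn.BatchNorm1d(10), nn.ReLU())
swa_model = AveragedModel(model)  # running average of the weights
loader = DataLoader(TensorDataset(torch.randn(64, 10)), batch_size=16)

# ... training loop that periodically calls swa_model.update_parameters(model) ...

# Recompute BatchNorm running statistics for the averaged weights.
# The model(input) calls inside update_bn are what currently allocate
# autograd state and cause the out-of-memory errors described above.
update_bn(loader, swa_model)
swa_model.eval()
```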

Pitch

Add a `with torch.no_grad():` line immediately before the `model(input)` call in `torch.optim.swa_utils.update_bn` and indent `model(input)` under it, as sketched below.
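A minimal sketch of the change, assuming `update_bn`'s existing structure of looping over the loader and calling `model(input)`; the helper name below is illustrative, not part of the actual function:

```python
import torch

def _forward_passes_without_grad(loader, model, device=None):
    # Sketch of update_bn's data loop with the proposed guard; the surrounding
    # BatchNorm momentum handling in update_bn would stay unchanged.
    for input in loader:
        if isinstance(input, (list, tuple)):
            input = input[0]
        if device is not None:
            input = input.to(device)
        with torch.no_grad():   # proposed addition: no autograd graph is recorded
            model(input)        # forward pass only refreshes BatchNorm running stats
```

An equivalent option would be wrapping the whole loop (or decorating `update_bn`) with `torch.no_grad()`, since the function never calls `backward()`.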

cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @vincentqb

Labels

- `enhancement`: Not as big of a feature, but technically not a bug. Should be easy to fix
- `module: autograd`: Related to torch.autograd, and the autograd engine in general
- `module: optimizer`: Related to torch.optim
- `small`: We think this is a small issue to fix. Consider knocking off high priority small issues
- `triaged`: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module