
"All keys matched successfully" missing when loading state dict on optimizers #39625

Open
bobiblazeski opened this issue Jun 6, 2020 · 2 comments
Labels
module: optimizer (Related to torch.optim), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@bobiblazeski

bobiblazeski commented Jun 6, 2020

🐛 Bug

To Reproduce

Steps to reproduce the behavior:

  1. Create a model
  2. Save the model state dict
  3. Load the model state dict
  4. The <All keys matched successfully> message appears
  5. Create an optimizer
  6. Save the optimizer state dict
  7. Load the optimizer state dict
  8. No message appears

import torch 
import torchvision.models as models

alexnet = models.alexnet()
torch.save(alexnet.state_dict(), './alexnet.pth')
alexnet.load_state_dict(torch.load('./alexnet.pth'))
# <All keys matched successfully>
adam = torch.optim.Adam(alexnet.parameters())
torch.save(adam.state_dict(), './adam.pth')
adam.load_state_dict(torch.load('./adam.pth'))
# (no output -- Optimizer.load_state_dict returns None)

Expected behavior

The <All keys matched successfully> message should also be printed when the optimizer state is loaded.

It would be nice if load_state_dict behaved consistently for both models and optimizers.
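
For reference, a minimal sketch of the current asymmetry (assuming PyTorch 1.5 behaviour): nn.Module.load_state_dict returns a result object, while torch.optim.Optimizer.load_state_dict returns None, so an interactive prompt has nothing to echo:

import torch
import torchvision.models as models

alexnet = models.alexnet()
adam = torch.optim.Adam(alexnet.parameters())

# Module.load_state_dict returns a named tuple describing the load result.
module_result = alexnet.load_state_dict(alexnet.state_dict())
print(type(module_result))  # e.g. <class 'torch.nn.modules.module._IncompatibleKeys'>

# Optimizer.load_state_dict returns None, so nothing is echoed in the REPL.
optim_result = adam.load_state_dict(adam.state_dict())
print(optim_result)  # None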

Environment

Collecting environment information...
PyTorch version: 1.5.0+cu101
Is debug build: No
CUDA used to build PyTorch: 10.1

OS: Ubuntu 20.04 LTS
GCC version: (Ubuntu 8.4.0-3ubuntu2) 8.4.0
CMake version: version 3.17.2

Python version: 3.8
Is CUDA available: Yes
CUDA runtime version: 10.1.105
GPU models and configuration: GPU 0: GeForce RTX 2070
Nvidia driver version: 440.64
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4

Versions of relevant libraries:
[pip3] numpy==1.18.3
[pip3] torch==1.5.0+cu101
[pip3] torchvision==0.6.0+cu101
[conda] Could not collect

Additional context

cc @vincentqb

@VitalyFedyunin added the module: optimizer and triaged labels on Jun 6, 2020
@VitalyFedyunin
Contributor

Up to the optimizer maintainers to consider whether we want this, as the original way of printing the 'message' via __repr__ is questionable:

def __repr__(self):
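
For context, that message comes from a custom __repr__ on the named tuple returned by Module.load_state_dict; roughly (paraphrasing torch/nn/modules/module.py, not the exact source):

from collections import namedtuple

class _IncompatibleKeys(namedtuple('IncompatibleKeys', ['missing_keys', 'unexpected_keys'])):
    def __repr__(self):
        # Only show the friendly message when nothing went wrong.
        if not self.missing_keys and not self.unexpected_keys:
            return '<All keys matched successfully>'
        return super(_IncompatibleKeys, self).__repr__()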

@ssnl
Collaborator

ssnl commented Jun 6, 2020

The original does not print the message. Rather, it returns an object that represents the loading result and exposes the relevant attributes. It only looks like it prints because interactive environments echo the returned value.
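
A minimal illustration of that point (assuming PyTorch 1.5): in a script nothing is printed; the message only shows up when an interactive prompt echoes the returned object's repr, or when you inspect the result yourself:

import torch
import torchvision.models as models

alexnet = models.alexnet()
result = alexnet.load_state_dict(alexnet.state_dict())  # prints nothing in a script

# The returned object carries the actual information:
print(result.missing_keys)     # []
print(result.unexpected_keys)  # []
print(result)                  # <All keys matched successfully> (what the REPL would echo)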
