Regarding loading state_dict in finetuning #5

Open
omayrkhan opened this issue Jun 30, 2021 · 0 comments


Firstly, thank you for making the code public — it makes it a lot easier for people who want to apply this method to other domains.

Secondly, I'd like to ask whether the following is intentional or a copy-paste error:

        netG_A.load_state_dict(self.netG_A.state_dict()) 
        netG_B.load_state_dict(self.netG_A.state_dict())
        netD_A.load_state_dict(self.netD_A.state_dict()) 
        netD_B.load_state_dict(self.netD_A.state_dict()) 

MT-GAN-PyTorch/models/mt_gan_model.py, lines 270-273

Shouldn't netG_B and netD_B be loaded with the state_dicts of self.netG_B and self.netD_B respectively? If I am not mistaken, state_dict doesn't only contain the architecture information but also the network weights. And if it is intentional, I find it confusing how models performing opposite operations can share the same weights.
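To illustrate the concern, here is a minimal sketch (using small placeholder nn.Linear modules rather than the actual generator/discriminator architectures) showing that load_state_dict copies the network weights, so after the lines quoted above netG_B would end up with netG_A's parameters:

```python
import torch
import torch.nn as nn

# Two stand-in "generators" with independently initialized weights.
netG_A = nn.Linear(4, 4)
netG_B = nn.Linear(4, 4)

# Loading A's state_dict into B overwrites B's weights with A's values,
# so the two modules become parameter-for-parameter identical.
netG_B.load_state_dict(netG_A.state_dict())

assert all(
    torch.equal(p_a, p_b)
    for p_a, p_b in zip(netG_A.parameters(), netG_B.parameters())
)
```

If the two directions of the mapping are meant to have distinct weights, the fix would presumably be to load from self.netG_B and self.netD_B instead.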
