
multi-GPU issues around --no_vgg_loss #28

Closed
elfprince13 opened this issue Apr 18, 2018 · 1 comment
elfprince13 commented Apr 18, 2018

This code:

loss_G_GAN_Feat = 0
if not self.opt.no_ganFeat_loss:
    feat_weights = 4.0 / (self.opt.n_layers_D + 1)
    D_weights = 1.0 / self.opt.num_D
    for i in range(self.opt.num_D):
        for j in range(len(pred_fake[i]) - 1):
            loss_G_GAN_Feat += D_weights * feat_weights * \
                self.criterionFeat(pred_fake[i][j], pred_real[i][j].detach()) * self.opt.lambda_feat
# VGG feature matching loss
loss_G_VGG = 0
if not self.opt.no_vgg_loss:
    loss_G_VGG = self.criterionVGG(fake_image, real_image) * self.opt.lambda_feat

causes trouble when using --no_vgg_loss with multiple GPUs (and I suspect --no_ganFeat_loss would cause the same trouble), because the Python value 0 is not compatible with the scatter/gather APIs used by torch.nn.DataParallel. I suspect it needs to be a Variable containing a 0-dimensional tensor, but I haven't quite figured out how to make it work.
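A minimal CPU-only illustration of the mismatch (a hypothetical stand-in, not code from this repo): DataParallel's gather step concatenates whatever each GPU replica returned using tensor operations, which reject a plain Python 0, while zero-valued tensors concatenate fine.

```python
import torch

# Stand-in for gathering per-replica outputs: torch.cat mimics what
# DataParallel's gather does to the values each replica returned.
replica_outputs = [0, 0]  # what each replica returns when the loss is skipped
try:
    torch.cat(replica_outputs)
except TypeError:
    print("Python ints cannot be gathered")

# Returning 1-element tensors instead keeps the gather step happy:
fixed_outputs = [torch.zeros(1), torch.zeros(1)]
gathered = torch.cat(fixed_outputs)
print(gathered)  # tensor([0., 0.])
```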


WayneCho commented Mar 27, 2021

I think loss_G_VGG = torch.zeros(1).cuda() may work.
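A sketch of that suggestion applied to the initializations in the snippet above (hypothetical helper name; using `device=` rather than `.cuda()` so it also runs on CPU):

```python
import torch

def init_losses(device):
    # Initialize both optional losses as 1-element zero tensors on the
    # target device, instead of the Python int 0, so every value the
    # model returns is something DataParallel can scatter/gather.
    loss_G_GAN_Feat = torch.zeros(1, device=device)
    loss_G_VGG = torch.zeros(1, device=device)
    return loss_G_GAN_Feat, loss_G_VGG

feat_loss, vgg_loss = init_losses(torch.device("cpu"))
print(feat_loss, vgg_loss)  # tensor([0.]) tensor([0.])
```

Accumulating into these with `+=` then works the same as with the original 0, since tensor arithmetic broadcasts over the scalar loss terms.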
