
Some parameters don't receive gradients. #6

Closed
guyuchao opened this issue Jun 16, 2022 · 2 comments

@guyuchao

Hello, when I run the training command on COCO, I encounter the following error:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss.

Then I found that the parameters in "module.content_codec" and "module.transformer.condition_emb" don't receive gradients. Should we set find_unused_parameters=True in DDP?
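
For reference, a minimal sketch of enabling that flag, assuming a torchrun launch (the model here is just a placeholder, not this repository's training code):

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Placeholder model standing in for the actual model.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
model = torch.nn.Linear(512, 512).cuda(local_rank)

# find_unused_parameters=True lets the DDP reducer skip parameters that do not
# take part in producing the loss in an iteration, at the cost of an extra
# traversal of the autograd graph each step.
model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)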

@tzco
Collaborator

tzco commented Jun 30, 2022

I think it is the parameter "empty_text_embed" in diffusion_transformer.py that didn't receive gradients. "empty_text_embed" is for learnable classifier-free guidance, and sorry, we forgot to check this case. You can add the if to line 148 of diffusion_transformer.py and try again:

# Only register the classifier-free guidance embedding when it is learnable.
if learnable_cf:
    self.empty_text_embed = torch.nn.Parameter(torch.randn(size=(77, 512), requires_grad=True, dtype=torch.float64))
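
With this guard, empty_text_embed is only registered as a parameter when learnable classifier-free guidance is enabled, so DDP no longer expects a gradient for it and find_unused_parameters=True should not be needed.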

@guyuchao
Author

Thanks for your reply.
