How is self.learnable_vector learnable?
#26
Comments
Besides, I calculated the mean (0.0327) and std (0.9966) of
We appreciate your interest in our work. I apologize for accidentally introducing this bug during code cleanup; it has now been fixed. We have also retrained the network and tracked the value and gradient of the learnable vector during training. As it turns out, the gradient of the learnable vector is quite small.
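The tracking described above can be sketched as follows. This is a minimal illustration, not the repo's actual training loop: the vector shape, learning rate, and loss are placeholder assumptions, and in the real model the vector would enter the diffusion loss via classifier-free-guidance dropout.

```python
import torch

# Hypothetical learnable unconditional embedding (shape is an assumption).
learnable_vector = torch.nn.Parameter(torch.randn(1, 1, 768))
opt = torch.optim.AdamW([learnable_vector], lr=1e-5)

grad_norms = []
for step in range(3):
    # Placeholder loss standing in for the diffusion objective.
    loss = (learnable_vector ** 2).mean()
    opt.zero_grad()
    loss.backward()
    # Record the gradient norm of the vector at this step.
    grad_norms.append(learnable_vector.grad.norm().item())
    opt.step()
```

Logging `grad_norms` (or the vector's value) per step is enough to see whether the parameter is actually receiving updates.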
Thanks!
I also have the same question about the changes in the parameters of to_q and to_k in the cross-attention module. I iterated the network many times, but the parameters of these two projections did not change. My guess is that this is caused by only a one-dimensional class vector being selected as cond. https://github.com/Fantasy-Studio/Paint-by-Example/issues/
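One way to verify whether projections such as to_q/to_k actually change is to snapshot their weights and compare after a few optimization steps. A minimal sketch of that check, with illustrative module shapes and a placeholder loss rather than the repo's actual cross-attention:

```python
import torch

# Stand-in for a cross-attention key projection (shapes are assumptions).
to_k = torch.nn.Linear(768, 320, bias=False)
opt = torch.optim.AdamW(to_k.parameters(), lr=1e-4)

# Snapshot the weights before training.
before = to_k.weight.detach().clone()

cond = torch.randn(4, 1, 768)  # one conditioning token per sample
for _ in range(3):
    loss = to_k(cond).pow(2).mean()  # placeholder loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# True if any weight moved beyond floating-point tolerance.
changed = not torch.allclose(before, to_k.weight.detach())
```

If `changed` is False in the real training run, the projection is either frozen, excluded from the optimizer's parameter list, or receiving (near-)zero gradients.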
Nice work!
I'm wondering how you optimize the unconditional vector in
Paint-by-Example/ldm/models/diffusion/ddpm.py
Line 1437 in c435d8a
I see that the unconditional input `self.learnable_vector` is meant to be optimized via `self.opt.params = self.params_with_white` in line 927, but how does that work? I haven't found any documentation for the parameter `params` in the class `torch.optim.AdamW`.
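For context on the question above: `params` is the first positional argument of every `torch.optim.Optimizer` subclass, including `AdamW` — an iterable of parameters (or parameter groups) the optimizer will update on `step()`. Appending an extra `nn.Parameter` to that iterable is what makes it trainable. A minimal sketch (the names below are illustrative, not the repo's):

```python
import torch

model = torch.nn.Linear(4, 4)
# Extra parameter outside the model, analogous to self.learnable_vector.
learnable_vector = torch.nn.Parameter(torch.randn(1, 4))

# Build the parameter list by hand, analogous to params_with_white.
params = list(model.parameters())
params.append(learnable_vector)
opt = torch.optim.AdamW(params, lr=1e-2)

before = learnable_vector.detach().clone()
loss = model(learnable_vector).pow(2).mean()  # placeholder loss
opt.zero_grad()
loss.backward()
opt.step()

# The appended parameter was updated because it was in `params`.
updated = not torch.allclose(before, learnable_vector.detach())
```

So no special mechanism is needed: any parameter included in the iterable passed to `AdamW` receives gradient updates, and any parameter left out stays fixed.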