Feature Mismatch for Replication #30
Comments
Hi, your understanding is correct. I use the two layers' features.
As I showed before, by the way, the size of the constructed latent input (./CFW_trainingdata/latents/xxx.npy) is (1,4,64,64) before training CFW. Is that right?
I found my problem. The size of the constructed latent input (./CFW_trainingdata/latents/xxx.npy) should indeed be (1,4,64,64).
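For anyone checking their own data, here is a minimal sketch (file layout assumed from the paths mentioned above) that verifies every saved latent has the expected (1,4,64,64) shape before CFW training:

```python
import glob

import numpy as np

# Scan the constructed latents and flag any file whose shape deviates
# from the expected (1, 4, 64, 64) before starting CFW training.
for path in glob.glob("./CFW_trainingdata/latents/*.npy"):
    latent = np.load(path)
    if latent.shape != (1, 4, 64, 64):
        print(f"unexpected shape {latent.shape} in {path}")
```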
Hi @kravrolens, I'm also working on the replication right now. From your issue, I gather you have already succeeded at the first fine-tuning stage. I'm personally stuck there: my fine-tuned model does not show very good generation ability. I tested the provided StableSR model by setting dec_w to 0.0 and checked its results, and they look much better than mine (as shown in issue #36). Did you have a similar problem? Thank you so much!
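For context on why dec_w = 0.0 isolates generation ability, here is a rough sketch of the fusion, assuming the residual form F_m = F_d + C(F_e, F_d) · w described in the StableSR paper; the module and channel count are hypothetical, not the repository's exact code:

```python
import torch
import torch.nn as nn

class CFWFuse(nn.Module):
    """Sketch of a CFW-style fusion: blend a learned correction computed
    from encoder and decoder features back into the decoder feature."""

    def __init__(self, channels: int = 256):
        super().__init__()
        # Stand-in for the learned transform C over the concatenated features.
        self.transform = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, enc_fea: torch.Tensor, dec_fea: torch.Tensor,
                dec_w: float) -> torch.Tensor:
        correction = self.transform(torch.cat([enc_fea, dec_fea], dim=1))
        # With dec_w = 0.0 the decoder feature passes through untouched,
        # i.e. pure generation from the diffusion prior.
        return dec_fea + dec_w * correction
```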
Hi, I have some questions about the network structure of CFW.
As shown in Figure 2 of your paper and in your code, my understanding is that you concatenate two layers' features from the encoder and the decoder: you choose `enc_fea[2]` and `enc_fea[1]` from the `enc_feat` (intermediate features). However, given the `dec_fea` sizes, it seems that `enc_feat` and `dec_fea` can't be concatenated. Thanks for your help in advance!
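For readers hitting the same mismatch, a minimal illustration (the shapes below are hypothetical, chosen only to show the failure mode) of why channel-wise concatenation breaks when spatial sizes differ:

```python
import torch

# torch.cat along dim=1 (channels) requires the batch and spatial
# dimensions of both tensors to agree exactly.
enc_fea = torch.randn(1, 256, 64, 64)  # an encoder feature map
dec_fea = torch.randn(1, 256, 32, 32)  # a decoder feature map at another scale

try:
    fused = torch.cat([enc_fea, dec_fea], dim=1)
except RuntimeError as err:
    # "Sizes of tensors must match except in dimension 1 ..."
    print(err)
```

So the features can only be concatenated at matching resolutions, which is why picking the right encoder indices for each decoder scale matters.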