
Questions about CausalVAE #45

Closed
akdadewrwr opened this issue Jan 27, 2022 · 6 comments
@akdadewrwr

In the CausalVAE code, is there a particular reason for using 4 different decoders?

Also, for the pendulum and flow datasets, the latent space dimension should equal the label dimension, which is 4. Why is the latent space dimension in the code set to 16?

I checked the Appendix of the paper, which says the multivariate Gaussian is extended to a matrix Gaussian. Why is the model designed this way, and what are the advantages of setting up the VAE like this?
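To illustrate my current understanding (this is my assumption, not taken from the code: 4 concepts, matching the 4 labels, each with its own 4-dimensional latent sub-vector, which would explain z_dim = 16):

```python
import numpy as np

# Hypothetical layout: 4 concepts, each with a 4-dim latent sub-vector,
# so the flat latent dimension is 4 * 4 = 16.
n_concepts, dim_per_concept = 4, 4
z_flat = np.random.randn(n_concepts * dim_per_concept)  # shape (16,)

# Viewing z as a matrix: row i is the latent code of concept i.
# A matrix Gaussian over this (4, 4) matrix would then take the place
# of a plain multivariate Gaussian over the (16,) vector.
z_matrix = z_flat.reshape(n_concepts, dim_per_concept)  # shape (4, 4)

print(z_flat.shape, z_matrix.shape)  # (16,) (4, 4)
```

Is this reading of the matrix-Gaussian extension correct?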

Many thanks for your response

@akdadewrwr
Author

akdadewrwr commented Feb 21, 2022

Also, could the authors share the code for the CelebA experiments?
If the code is shared on request, please send it to alexbeaton724@gmail.com
Thanks a lot!

@shaido987
Collaborator

Hello, I have been in contact with one of the authors informing/reminding them of your questions here. There should hopefully be a reply soon.

@26789564

26789564 commented Mar 5, 2022

Besides, Could the authors share the code for CelebA experiments? If the code is shared by request, please share it to alexbeaton724@gmail.com Thanks a lot!
For the CelebA experiment, only the network structure differs from the original code; everything else is the same. Only the dataset needs to be changed.

@akdadewrwr
Author

Thanks for your reply, but I am a bit confused about the network structure.
In the supplementary material, the model structure is:
[image: model structure from the supplementary material]

Since the input is resized to (3x128x128), the output shape of the encoder is (3x4x4). Should I flatten it first to shape (48,)? However, according to the supplementary material, the latent space size for the CelebA model should be:
[image: latent space size from the supplementary material]. Should I add an fc layer after the last encoder layer?
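To be concrete about where (48,) comes from, here is the shape arithmetic I am assuming (kernel 4, stride 2, padding 1 for each conv layer is my guess, not taken from the released code):

```python
# Hypothetical shape trace for the encoder. Each conv layer with
# kernel 4, stride 2, padding 1 halves the spatial size.
def conv_out(size, kernel=4, stride=2, padding=1):
    return (size + 2 * padding - kernel) // stride + 1

size = 128
for _ in range(5):  # 5 conv layers: 128 -> 64 -> 32 -> 16 -> 8 -> 4
    size = conv_out(size)
print(size)  # 4

channels = 3  # encoder output is (3, 4, 4)
flat = channels * size * size
print(flat)  # 48: an fc layer would then map 48 -> latent dim
```

So my question is whether a fully connected layer from the flattened 48 features to the latent size is the intended design.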

Also, the ConvEncoder and ConvDecoder in mask.py do not seem correct either.

Providing just the code for the network structure would be very helpful, since the CelebA dataset modification is easy to implement.
Thanks a lot

@shaido987
Collaborator

Hello, sorry for the late reply. The first author of CausalVAE has already completed her internship with us. For a faster and more detailed reply, you can email this and any further questions to Mengyue Yang (mengyue.yang.20@ucl.ac.uk). You could include a link to this issue as context.

@alceubissoto

Hi @akdadewrwr, did you manage to get a working code for the celeba experiments from the first author? I'm also interested :)
