Generator loss is different from the original article in Alpha_WGAN_ADNI_train.ipynb notebook. #11

ShadowTwin41 opened this issue Jan 16, 2021 · 4 comments


@ShadowTwin41

In the article https://arxiv.org/pdf/1908.02498.pdf, the generator loss is calculated using only the d_loss and the l1_loss; the c_loss is used only in the lossCodeDiscriminator calculation.
Please let me know if what I said is correct.
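To make the comparison concrete, here is a minimal sketch of the two formulations being discussed. The variable names follow the thread (d_loss, c_loss, l1_loss, lossCodeDiscriminator); the signs and the lambda weighting are illustrative assumptions, not the exact expressions from the notebook or the paper.

```python
# Illustrative sketch only: signs and weights are assumptions, not the
# exact code from Alpha_WGAN_ADNI_train.ipynb or the paper.
# d_loss  : critic score on generated images (image discriminator)
# c_loss  : critic score on encoded latent codes (code discriminator)
# l1_loss : L1 reconstruction loss between real and reconstructed images
lambda_l1 = 10  # placeholder weighting

# As in this repository's notebook: c_loss also enters the generator/encoder loss
loss1_repo = -d_loss - c_loss + lambda_l1 * l1_loss

# As described in the paper: only d_loss and l1_loss drive the generator/encoder,
# and c_loss appears only in lossCodeDiscriminator
loss1_paper = -d_loss + lambda_l1 * l1_loss
```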

@elevenjiang1

So, have you tested which is better?
I am now using this work to generate 3D voxel data, but I cannot get a good result and I don't know where the error is...

@ShadowTwin41

ShadowTwin41 commented Nov 9, 2021

I have changed the loss functions of the generator and the discriminator. I recommend you check whether there is mode collapse (when the discriminator or the generator wins) and look at other work on 3D generation. Have you tried using spectral normalisation? That can be a big improvement.
Anyway, I used the c_loss as in this implementation.
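In case it is useful, here is a minimal PyTorch sketch of adding spectral normalisation to a 3D discriminator. The channel counts and kernel settings are placeholders for illustration, not the values used in this repository.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Wrap each discriminator layer with spectral_norm to constrain its
# Lipschitz constant and help stabilise GAN training.
# Channel counts and kernel settings below are placeholders.
class SNDiscriminator(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv3d(in_channels, 64, kernel_size=4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv3d(64, 128, kernel_size=4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv3d(128, 1, kernel_size=4, stride=1, padding=0)),
        )

    def forward(self, x):
        return self.net(x)
```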

@elevenjiang1

Thanks for your reply.
I changed the loss function and removed c_loss from loss1, but the results are still very bad!
I trained on the ModelNet40_normalized data, only the chair class, on a 3090 GPU for a whole morning, and the result is still very bad. I found that the model has to be in eval() mode for different noise inputs to produce different shape outputs (but that did not help for the 3D GAN). Is 3D object generation more difficult than MRI data generation, or is there some trick I don't know?

Here are my results:
[image: 899-res]

Before this, I used a 3D GAN and also found mode collapse in the output (different noise inputs produced the same voxels). If I confirm that this is the problem, what can I do to solve it? Train the generator more times and the discriminator less?

Thank you again~

@ShadowTwin41

The complexity can also come from the resolution of the images: the higher the resolution, the harder it is to keep the training stable. I also think you need more than one morning of training; this architecture requires a lot of computing power and time. For example, in one of my works I trained it for 6 days and the results improved considerably.
To solve the mode collapse problem, you need to find out which model is winning and reduce its "power", for example by changing its learning rate or the number of updates per iteration.
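As a rough illustration of what reducing a model's "power" can look like in code, here is a sketch of an unbalanced update schedule. The optimizers, ratios, and learning rates are made-up placeholders, and discriminator_step/generator_step are hypothetical helpers standing in for your existing loss computation.

```python
import torch

# If the discriminator is winning: give it a smaller learning rate and/or
# fewer updates per iteration than the generator. All values are placeholders.
g_optim = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_optim = torch.optim.Adam(discriminator.parameters(), lr=5e-5)  # weaker discriminator

D_UPDATES_PER_ITER = 1
G_UPDATES_PER_ITER = 2  # update the generator more often

for real_batch in dataloader:
    for _ in range(D_UPDATES_PER_ITER):
        d_optim.zero_grad()
        d_loss = discriminator_step(real_batch)  # hypothetical helper returning the D loss
        d_loss.backward()
        d_optim.step()

    for _ in range(G_UPDATES_PER_ITER):
        g_optim.zero_grad()
        g_loss = generator_step(real_batch)      # hypothetical helper returning the G loss
        g_loss.backward()
        g_optim.step()
```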
