Question about loss decline #7
The d_loss will eventually diminish to around 0.001. Our strategy was therefore to re-train the model after 100 epochs by loading the coarse_generator's and fine_generator's weights. We did not load the discriminator's weights when retraining. Unlike classification and segmentation architectures, GAN-based generators won't show a pattern of steadily diminishing loss (from high to low). You can read more about how to train a GAN in the GAN hacks notes by Soumith Chintala (creator of PyTorch and DCGAN).

A good way to check whether your generators are producing good segmentation output is to visualize the local_plot.png and global_plot.png files generated after each epoch in our code. These show the vessel segmentation output for random images after each epoch. If the output images look good, the generators are learning to translate between the two modalities for that specific set of weights.

For further validation, we loop over all the saved generator weights and print the associated metrics (sensitivity, specificity, AUC, SSIM, etc.) to see which weight pair (both coarse and fine) gives the best segmentation output on the test images. Hope this helps, thanks!
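The weight-sweep validation described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual evaluation script: the checkpoint filenames, model objects, and test arrays are hypothetical placeholders, while the sensitivity/specificity formulas are the standard confusion-matrix definitions.

```python
import numpy as np

def sensitivity(y_true, y_pred):
    # True positive rate: TP / (TP + FN)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn)

def specificity(y_true, y_pred):
    # True negative rate: TN / (TN + FP)
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tn / (tn + fp)

# Hypothetical sweep over saved (coarse, fine) generator checkpoints:
# for epoch in range(0, 200, 10):
#     coarse_generator.load_weights(f"coarse_generator_{epoch}.h5")
#     fine_generator.load_weights(f"fine_generator_{epoch}.h5")
#     pred = (fine_generator.predict(x_test) > 0.5).astype(int)
#     print(epoch, sensitivity(y_test, pred), specificity(y_test, pred))
```

Picking the checkpoint pair with the best test metrics sidesteps relying on the adversarial losses, which oscillate rather than decrease monotonically.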
Thanks a lot!
May I ask what value range your losses eventually stabilize in, including d1, d2, etc.?
When I train on the DRIVE dataset, the g_local loss has stayed steady at about 3 from epoch 18 through epoch 200.
I wonder what the reason for that is.