rushhan/WGAN_GP_with_Feedback-Cross-Attention


WGAN_GP_with_FEEDBACK

Added Gradient penalty and feedback to the Generator from Discriminator

Modifications to the original WGAN implementation:

  1. Added feedback from the discriminator to the generator in the form of cross-attention.
  2. Added the gradient penalty from the WGAN-GP implementation. (Note that simply adding the gradient penalty to enforce the Lipschitz constraint does not drastically improve results in the original WGAN implementation; you can see this by removing batch normalization from the WGAN implementation with the gradient penalty added.)
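For reference, the gradient penalty term from the WGAN-GP paper can be sketched as below. This is a generic PyTorch implementation of the standard technique, not the exact code in this repository; the function and argument names are illustrative.

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """Standard WGAN-GP penalty: penalize the critic's gradient norm
    at points interpolated between real and fake samples."""
    batch_size = real.size(0)
    # Random interpolation coefficient per sample (assumes NCHW image tensors)
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # Gradient of critic scores with respect to the interpolated inputs
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0]
    grads = grads.view(batch_size, -1)
    # Penalize deviation of the gradient norm from 1 (Lipschitz constraint)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```

In training, this term is scaled by a coefficient (commonly 10) and added to the critic loss.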

Execute

python main_grad.py --dataset lsun --dataroot /workspace/lsun --cuda --fdback --noBN --save_dir samples_gp_noBN_fd

Notes

Requires the LSUN data. Follow the instructions in the original implementation link.

Sources

The WGAN model is based on the original paper link. The gradient-penalty loss is added from the WGAN_GP implementation link.

The cross-attention/feedback mechanism is my own implementation.
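One plausible shape for such a feedback mechanism is a cross-attention layer in which generator features act as queries and discriminator features act as keys and values. The sketch below is a hypothetical illustration under those assumptions (the class name, dimensions, and residual wiring are mine, not necessarily the repository's):

```python
import torch
import torch.nn as nn

class FeedbackCrossAttention(nn.Module):
    """Hypothetical sketch: generator features attend over discriminator
    features, so the discriminator's view of the data feeds back into
    the generator. Shapes: gen (B, N, gen_dim), disc (B, M, disc_dim)."""

    def __init__(self, gen_dim, disc_dim):
        super().__init__()
        self.q = nn.Linear(gen_dim, gen_dim)   # queries from the generator
        self.k = nn.Linear(disc_dim, gen_dim)  # keys from the discriminator
        self.v = nn.Linear(disc_dim, gen_dim)  # values from the discriminator
        self.scale = gen_dim ** -0.5

    def forward(self, gen_feats, disc_feats):
        q = self.q(gen_feats)
        k = self.k(disc_feats)
        v = self.v(disc_feats)
        # Scaled dot-product attention: (B, N, M) weights over disc features
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Residual connection: feed the attended signal back into the generator
        return gen_feats + attn @ v
```

The residual add keeps the generator functional even when the feedback signal is weak, which is a common design choice for attention-based conditioning.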

To-Do

Rerun and recheck the results
