Is the adversarial loss actually helping? #17
Comments
@jsyoon0823 do you know why this happens? Your input would be appreciated.
Hello, the default hyper-parameters should be re-tuned for each dataset.
@jsyoon0823 thank you for the suggestions. What would be a good choice of hyperparameters for the Spam dataset? I am mostly struggling to get decent performance without the supervised loss. For example, when I set
Without the supervised loss, you need to control the GAN training more carefully. In this case, you should do early stopping (or best-model saving) using the supervised loss as the criterion. Even though you do not use the supervised loss directly for training the model, you can still use it for early stopping. That should let you reach the reported performance.
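The suggestion above (monitor the supervised loss and keep the best checkpoint, even when that loss is not part of the training objective) can be sketched as follows. This is a minimal illustration, not code from gain.py; the function name, signature, and `patience` parameter are all hypothetical.

```python
def train_with_early_stopping(supervised_losses, patience=3):
    """Return (best_step, best_loss) over a sequence of supervised-loss
    values measured each training step, stopping once the loss has not
    improved for `patience` consecutive steps.

    In practice you would save model weights at each new best step and
    restore them after the loop; here we only track the step index.
    """
    best_step, best_loss, waited = 0, float("inf"), 0
    for step, loss in enumerate(supervised_losses):
        if loss < best_loss:
            # New best: remember it and reset the patience counter.
            best_loss, best_step, waited = loss, step, 0
        else:
            waited += 1
            if waited >= patience:
                break  # stop training; restore the checkpoint from best_step
    return best_step, best_loss
```

The key point is that the monitored quantity (the supervised loss) never enters the gradient computation; it only decides when to stop and which checkpoint to keep.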
Hi Yoon,
I was wondering how the results in Table 1 were obtained. I have been experimenting with the code, and it is not clear to me that the adversarial loss is actually helping (as reported in the results section, specifically Table 1).
For example, when I run the code on the SPAM dataset (default implementation and hyperparameters), the RMSE is ~0.053. However, when I zero out the adversarial loss by modifying the following line in `gain.py`:

```python
G_loss_temp = -tf.reduce_mean((1-M) * tf.log(D_prob + 1e-8)) * 0.
```
the RMSE is also ~0.053. Am I missing something? I am observing similar behavior on another dataset. I am considering using GAIN for a project and would greatly appreciate an explanation of how the results in Table 1 were obtained.
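For reference, the ablation described above can be expressed more cleanly by weighting the adversarial term instead of editing the loss line in place. The sketch below mirrors the structure of the quoted TensorFlow expression using NumPy; the function name, the `adv_weight` knob, and the `mse_sup` argument are illustrative assumptions, not part of the original gain.py.

```python
import numpy as np

def generator_loss(d_prob, mask, mse_sup, adv_weight=1.0, eps=1e-8):
    """Combine the generator's adversarial and supervised terms.

    d_prob   : discriminator output probabilities for each entry
    mask     : observed-entry mask M (1 = observed, 0 = missing)
    mse_sup  : precomputed supervised reconstruction loss (a scalar)

    Setting adv_weight=0.0 reproduces the ablation from the issue
    (adversarial term switched off) without touching the source line.
    """
    # Same form as the quoted line: -mean((1-M) * log(D_prob + eps))
    adv = -np.mean((1 - mask) * np.log(d_prob + eps))
    return adv_weight * adv + mse_sup
```

Running the full experiment once with `adv_weight=1.0` and once with `adv_weight=0.0` makes the comparison explicit and avoids accidentally leaving the `* 0.` edit in the code.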