Here is a suspected bug:
What is the top-level directory of the model you are using: adversarial_text
In models/research/adversarial_text/layers.py, classification_loss (line 220) computes the loss with sigmoid_cross_entropy on the logits, but predictions (line 256) compares the raw logits against 0.5.
Shouldn't the logits be compared with 0, or the sigmoid of the logits compared with 0.5?
Otherwise, a threshold of 0.5 on the raw logits seems arbitrary, and I don't see why it should work well across datasets.
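To illustrate what I mean (this is just a minimal NumPy sketch, not the repo's code): thresholding sigmoid(logits) at 0.5 is equivalent to thresholding the raw logits at 0, whereas thresholding the raw logits at 0.5 gives a shifted decision boundary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

logits = np.array([-1.0, -0.2, 0.0, 0.3, 2.0])

# Equivalent decision rules: sigmoid(logit) > 0.5  <=>  logit > 0
preds_from_probs = sigmoid(logits) > 0.5
preds_from_logits = logits > 0
assert np.array_equal(preds_from_probs, preds_from_logits)

# Thresholding the raw logits at 0.5 instead corresponds to
# sigmoid(0.5) ~= 0.62, i.e. a different decision boundary.
preds_at_half = logits > 0.5
print(preds_from_probs)  # [False False False  True  True]
print(preds_at_half)     # [False False False False  True]
```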
Is there something I'm misunderstanding about the setup of the experiments?