Implementation of Adversarial Ranking for Language Generation [arXiv 1705.11001]



Requirements:

  • TensorFlow r1.6.0
  • Python 3.x
  • CUDA 9.0 (for GPU)


Applies Generative Adversarial Nets to the generation of sequences of discrete tokens, replacing the discriminator with a ranker to drive the optimization.
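The core idea of the ranker can be sketched as follows. In the paper, a sentence's relevance to a human-written reference is its cosine similarity in an embedding space, and its rank score is a softmax over a comparison set (the function names and the temperature default below are illustrative, not this repository's API):

```python
import numpy as np

def rank_score(sentence_emb, reference_emb, comparison_embs, gamma=1.0):
    """Rank score of a candidate sentence against a comparison set,
    as sketched in the RankGAN paper: relevance = cosine similarity
    to a reference; score = softmax over the comparison set, with
    gamma acting as a temperature. Illustrative, not the repo's API."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Relevance of the candidate and of every comparison sentence.
    alpha_candidate = cosine(sentence_emb, reference_emb)
    alpha_comparison = [cosine(c, reference_emb) for c in comparison_embs]

    # Softmax over the candidate together with the comparison set.
    logits = gamma * np.array([alpha_candidate] + alpha_comparison)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[0]  # probability mass assigned to the candidate
```

A higher score means the ranker judges the sentence to be ranked more highly (more human-like) relative to the comparison set, and that score is what replaces the discriminator's binary real/fake probability.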

The previous research paper SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient has been accepted at the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17).

The research paper Adversarial Ranking for Language Generation has been accepted at the 31st Conference on Neural Information Processing Systems (NIPS 2017).

We provide example code to reproduce the synthetic-data experiments with the oracle evaluation mechanism. To run the experiment with default parameters:

$ python

You can change all the parameters in
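The oracle evaluation used in the synthetic experiment can be sketched as follows: a fixed, randomly initialized "oracle" model is treated as the ground-truth data distribution, and the generator is scored by the oracle's average negative log-likelihood of the generator's samples. Below the oracle is a toy first-order Markov model rather than the LSTM oracle the experiment uses; the function name is illustrative:

```python
import numpy as np

def oracle_nll(sequences, trans_probs):
    """Average per-token negative log-likelihood of integer token
    sequences under a toy Markov-chain oracle, where
    trans_probs[i, j] = oracle probability that token j follows token i.
    Lower is better: the generator's samples look more like oracle data."""
    total, count = 0.0, 0
    for seq in sequences:
        for prev, cur in zip(seq, seq[1:]):
            total += -np.log(trans_probs[prev, cur])
            count += 1
    return total / count
```

Because the oracle is known exactly, this gives a clean quantitative metric for generation quality that real text corpora cannot provide.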

The experiment has two stages. In the first stage, we use the positive data provided by the oracle model and maximum likelihood estimation to perform supervised learning of the generator. In the second stage, we use adversarial training to improve the generator.
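The two stages can be illustrated with a deliberately tiny stand-in (all names and numbers below are illustrative, not this repository's API): the "generator" is just a categorical distribution over V tokens, stage 1 fits it by maximum likelihood on oracle samples, and stage 2 refines it with a REINFORCE-style policy-gradient step in which a fixed per-token reward stands in for the ranker's score:

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5
oracle = np.array([0.5, 0.2, 0.1, 0.1, 0.1])  # "true" token distribution

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Stage 1: supervised pretraining by MLE. For a categorical model the
# maximum-likelihood fit is simply the empirical token frequencies.
data = rng.choice(V, size=10_000, p=oracle)
gen_mle = np.bincount(data, minlength=V) / len(data)

# Stage 2: adversarial-style refinement. reward[t] stands in for the
# ranker's judgment of token t; the score-function (REINFORCE) gradient
# (onehot - probs) * reward pushes mass toward high-reward tokens.
reward = np.linspace(0.1, 1.0, V)  # toy reward, highest for the last token
logits = np.log(gen_mle)
for _ in range(100):
    probs = softmax(logits)
    samples = rng.choice(V, size=512, p=probs)
    grad = np.zeros(V)
    for s in samples:
        grad += (np.eye(V)[s] - probs) * reward[s]
    logits += 0.05 * grad / len(samples)
gen_adv = softmax(logits)
```

In the real experiment the generator is an LSTM trained per-timestep with policy gradients and Monte Carlo rollouts, but the control flow is the same: MLE pretraining first, then reward-driven updates.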

Note: this code is based on the previous work by ofirnachum and SeqGAN.
