tassossapalidis/latextgan

Unsupervised Text Generation Using Generative Adversarial Networks

Stanford CS230: Deep Learning

Authors: Anna Shors, Andrew Freeman, Anastasios Sapalidis

Overview

Please read our Project Proposal, Milestone Report, or Final Report for details on this project. The abstract is included here:

Despite their success in other fields, Generative Adversarial Networks have gained little traction in the field of text generation due to the need to choose words to generate at each timestep. This discrete "picking" function poses a challenge, as it prevents gradients from being propagated from the discriminator to the generator. In this paper, we explore an alternative method of using GANs for text generation in which the generator works to directly output sentence encodings that can be decoded using a pretrained decoder. While our generated sentences lack the fluency of the language model baseline, we show that this method has the potential to generate creative, realistic sentences and would benefit from further exploration in future work.

File Usage

Preprocess Data

To be used as input to the steps below, the data must be a text file with one sentence per line. data_cleaning.py can be used as a template script for converting raw text data to this format.
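As an illustration of the one-sentence-per-line format, here is a minimal sketch of a sentence splitter. The splitting heuristic is naive and hypothetical; it is not the repository's actual data_cleaning.py logic:

```python
import re

def to_sentence_lines(raw_text):
    """Split raw text into one sentence per line.

    Hypothetical sketch: splits on sentence-ending punctuation followed
    by whitespace and normalizes internal whitespace. Real cleaning
    (see data_cleaning.py) will likely need more care.
    """
    sentences = re.split(r"(?<=[.!?])\s+", raw_text.strip())
    return "\n".join(" ".join(s.split()) for s in sentences if s)
```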

Training the Autoencoder

  1. Configure the autoencoder training parameters and architecture using autoencoder_parameters.json.
  2. Run autoencoder.py, using the training data and validation data text files as command-line arguments.
    • This will produce model checkpoints stored in the directory specified in step 1.
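The parameter file is plain JSON and the data files arrive as positional command-line arguments. A hedged sketch of how a script might read both; the "checkpoint_dir" key below is purely illustrative, and the real schema lives in autoencoder_parameters.json:

```python
import json
import sys

def load_config(path):
    """Read a JSON parameter file.

    The keys are whatever the repository's autoencoder_parameters.json
    defines; "checkpoint_dir" is only an illustrative example.
    """
    with open(path) as f:
        return json.load(f)

def get_cli_files(argv=None):
    """Return (train_path, val_path) from two positional command-line
    arguments, in the style autoencoder.py expects."""
    argv = sys.argv[1:] if argv is None else argv
    if len(argv) != 2:
        raise SystemExit("usage: autoencoder.py TRAIN_FILE VAL_FILE")
    return argv[0], argv[1]
```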

Testing the Autoencoder

At the top of test_sentences.py, specify the number of randomly selected validation-set sentences to test and whether sentence-level results should be displayed, then run the file with the training data and validation data text files as command-line arguments.

  • If you selected "True" for the second argument, the script outputs the original sentence, the autoencoder's reconstruction, and BLEU scores for each of the sentences specified by the first argument.
  • If you selected "False" for the second argument, the script outputs only the average BLEU scores across all selected sentences.
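The BLEU scores reported here compare each original sentence with its autoencoder reconstruction. A minimal pure-Python sketch of a smoothed sentence-level BLEU follows; test_sentences.py may rely on a library implementation (e.g. nltk) with different smoothing, so exact scores can differ:

```python
import math
from collections import Counter

def sentence_bleu(reference, hypothesis, max_n=4):
    """Smoothed sentence-level BLEU between two plain strings.

    Pure-Python sketch with add-one smoothing, not the repository's
    exact scoring code.
    """
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    log_precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        # Clip each n-gram count by its count in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        # Add-one smoothing so one missing n-gram order doesn't zero the score.
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    brevity = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return brevity * math.exp(sum(log_precisions) / max_n)
```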

Training the GAN

  1. Configure the GAN training parameters and architecture using gan_parameters.json.
  2. Run train_gan.py, using the training data text file as a command-line argument.
    • Details on the structures of the GAN components can be found in gan.py.
    • This will produce model checkpoints stored in the directory specified in step 1.
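The key idea from the abstract is that the generator outputs continuous sentence encodings, which the pretrained decoder turns into text, while the discriminator scores encodings as real or generated. The following tiny NumPy MLP sketch illustrates that data flow only; the real architectures are defined in gan.py:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden_dim, out_dim):
    """Initialize a tiny two-layer MLP (illustrative only)."""
    return {
        "w1": 0.1 * rng.standard_normal((in_dim, hidden_dim)),
        "b1": np.zeros(hidden_dim),
        "w2": 0.1 * rng.standard_normal((hidden_dim, out_dim)),
        "b2": np.zeros(out_dim),
    }

def generator(noise, params):
    """Map noise vectors to fake sentence encodings in the decoder's
    latent space."""
    h = np.maximum(0.0, noise @ params["w1"] + params["b1"])  # ReLU
    return h @ params["w2"] + params["b2"]

def discriminator(encodings, params):
    """Return the probability that each encoding came from a real sentence."""
    h = np.maximum(0.0, encodings @ params["w1"] + params["b1"])
    logits = h @ params["w2"] + params["b2"]
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid
```

Because the generator emits continuous vectors rather than discrete tokens, discriminator gradients can flow back to it directly, which is what the approach is designed to enable.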

Testing the GAN

After a number of sample sentences have been generated by the GAN, BLEU scores can be calculated using get_corpus_bleu.py. Specify at the top of the file whether sentence-level results should be displayed, then run it with two command-line arguments: the training data text file and a "test sentences" text file in the same one-sentence-per-line format, containing the sentences generated by the GAN.
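Corpus-level BLEU pools n-gram matches over all generated sentences, clipping each n-gram count by its maximum count across the reference corpus. A pure-Python sketch follows; get_corpus_bleu.py may use a library implementation (e.g. nltk) with different smoothing, so scores need not match it exactly:

```python
import math
from collections import Counter

def _ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(references, hypotheses, max_n=4):
    """Corpus-level BLEU of generated sentences against a reference corpus
    (pure-Python sketch with add-one smoothing)."""
    ref_tokens = [r.split() for r in references]
    matches = [0] * max_n
    totals = [0] * max_n
    hyp_len = ref_len = 0
    for hyp in hypotheses:
        h = hyp.split()
        hyp_len += len(h)
        # Brevity penalty uses the reference closest in length.
        ref_len += len(min(ref_tokens, key=lambda r: abs(len(r) - len(h))))
        for n in range(1, max_n + 1):
            hyp_ng = _ngrams(h, n)
            # Clip by the maximum count of each n-gram over all references.
            clip = Counter()
            for r in ref_tokens:
                for g, c in _ngrams(r, n).items():
                    clip[g] = max(clip[g], c)
            matches[n - 1] += sum(min(c, clip[g]) for g, c in hyp_ng.items())
            totals[n - 1] += sum(hyp_ng.values())
    log_precisions = [math.log((m + 1) / (t + 1)) for m, t in zip(matches, totals)]
    brevity = min(1.0, math.exp(1 - ref_len / max(hyp_len, 1)))
    return brevity * math.exp(sum(log_precisions) / max_n)
```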

About

CS230 latextgan implementation
