
A PyTorch implementation of a Seq2Seq model for neural dialog generation


zhongpeixiang/seq2seq-pytorch


Dataset statistics

1. Cornell dialog dataset:

Vocab size: 29769

2. OpenSubtitles2016:

Vocab size: 25,000, including an unknown token
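A capped vocabulary with an unknown token can be built by keeping only the most frequent words and mapping everything else to `<unk>`. The sketch below is illustrative, assuming whitespace-tokenized text; the function names and the `<unk>` symbol are not taken from this repository.

```python
from collections import Counter

def build_vocab(sentences, max_size=25000, unk_token="<unk>"):
    # Count word frequencies across the whole corpus.
    counts = Counter(w for s in sentences for w in s.split())
    # Keep the most frequent words, reserving one slot for the unknown token.
    words = [unk_token] + [w for w, _ in counts.most_common(max_size - 1)]
    return {w: i for i, w in enumerate(words)}

def encode(sentence, word2idx, unk_token="<unk>"):
    # Out-of-vocabulary words all map to the unknown-token index.
    unk = word2idx[unk_token]
    return [word2idx.get(w, unk) for w in sentence.split()]
```

With a 25,000-word cap, every word outside the top 24,999 shares the single `<unk>` index.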

Experiments

1. Speed analysis:

1.1 Text preprocessing: creating pairs took around 1/5 of the time; indexing words took around 4/5.

1.2 Training: decoding took around 5/6 of the time; optimization took around 1/6.
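Per-phase splits like the ones above can be measured with a small wall-clock timing helper. This is a minimal sketch, not code from the repository; the phase names in the comment are hypothetical.

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(name):
    # Accumulate wall-clock time per named phase so splits can be compared.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

# Hypothetical usage inside the preprocessing loop:
# with timed("create_pairs"):
#     pairs = make_pairs(lines)
# with timed("index_words"):
#     indexed = [encode(p) for p in pairs]
```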

2. Accuracy analysis:

To do:

  1. RNN training optimization
  2. Multi-GPU
  3. Professor-forcing
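The professor-forcing item above targets the gap between teacher-forced training (feeding the gold token at each decoder step) and free-running inference (feeding the decoder's own prediction). A minimal GRU decoder loop showing both regimes is sketched below; the class, sizes, and function names are illustrative assumptions, not this repository's code.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    # Minimal GRU decoder sketch; sizes are illustrative, not from the repo.
    def __init__(self, vocab_size=100, hidden_size=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, token, hidden):
        emb = self.embed(token).unsqueeze(1)       # (batch, 1, hidden)
        output, hidden = self.gru(emb, hidden)     # one decoder step
        return self.out(output.squeeze(1)), hidden # (batch, vocab)

def decode(decoder, hidden, targets, teacher_forcing=True):
    # Teacher forcing feeds the gold token at each step;
    # otherwise the decoder consumes its own argmax prediction.
    batch, length = targets.shape
    token = targets[:, 0]  # start token
    logits = []
    for t in range(1, length):
        step_logits, hidden = decoder(token, hidden)
        logits.append(step_logits)
        token = targets[:, t] if teacher_forcing else step_logits.argmax(dim=-1)
    return torch.stack(logits, dim=1)  # (batch, length - 1, vocab)
```

Professor forcing adds a discriminator that pushes the hidden-state dynamics of the free-running mode to match the teacher-forced mode; the loop above only shows the two decoding regimes it compares.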
