
Add vae_text example #133

Merged
13 commits merged into asyml:master on Aug 12, 2019

Conversation

@TomNong (Collaborator) commented Jul 30, 2019

  • Port vae_text from texar-TF.

  • Add external distribution MultivariateNormalDiag (see the sketch
    after this list).

  • Add preprocessing for data batch.

  • Modify None checking condition for initial_state in
    RNNDecoderBase.

  • Modify max_pos for config_trans_yahoo.py.

  • Modify the connectors' MLP function.

  • Refactor vae_text training & generation decoder.

  • Refactor vae_text decoder embeddings.

  • Refactor to import texar.torch.

  • Polish code.
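
For context, here is a minimal, self-contained sketch (hypothetical code, not the PR's actual implementation) of the latent-code plumbing these changes set up: an MLP connector maps the encoder state to the parameters of a diagonal-Gaussian posterior (the MultivariateNormalDiag distribution mentioned above), and a reparameterized sample from it seeds the decoder's initial state. Class and variable names below are illustrative only.

```python
import torch
import torch.nn as nn
from torch import distributions as tdist

class MLPConnector(nn.Module):
    """Maps encoder features to a diagonal-Gaussian latent posterior."""

    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        self.linear = nn.Linear(input_dim, 2 * latent_dim)

    def forward(self, enc_state: torch.Tensor) -> tdist.Distribution:
        mean, logvar = self.linear(enc_state).chunk(2, dim=-1)
        # Independent(Normal, 1) sums log-probs over the last dimension,
        # i.e. a multivariate normal with diagonal covariance.
        return tdist.Independent(
            tdist.Normal(mean, torch.exp(0.5 * logvar)), 1)

connector = MLPConnector(input_dim=256, latent_dim=32)
enc_state = torch.randn(4, 256)             # batch of 4 encoder states
posterior = connector(enc_state)
z = posterior.rsample()                     # reparameterized, differentiable
prior = tdist.Independent(
    tdist.Normal(torch.zeros_like(z), torch.ones_like(z)), 1)
kl = tdist.kl_divergence(posterior, prior)  # per-example KL term of the ELBO
print(z.shape, kl.shape)                    # (4, 32) and (4,)
```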

Shibiao Nong added 3 commits July 30, 2019 14:29
@TomNong requested a review from @huzecong on July 30, 2019 22:17
Review comments (outdated, resolved) were left on:

  • examples/vae_text/README.md (2)
  • examples/vae_text/config_lstm_ptb.py (1)
  • examples/vae_text/vae_train.py (5)
  • texar/torch/modules/connectors/connectors.py (2)
@huzecong (Collaborator) left a comment

Generally, this looks good to me now. However, the performance on the Yahoo dataset is significantly worse than the TF version's. Do you have any hypotheses as to why that is?

@TomNong (Collaborator, Author) commented Aug 5, 2019

@huzecong thanks. One possible reason, I think, is that the checkpoint needs to store more information than the current version does, since we use more than just the learning_rate to decide when to stop training.

@ZhitingHu (Member) commented

Then why can't we store more info in order to recover the results? What's the difficulty here?

@TomNong (Collaborator, Author) commented Aug 5, 2019

@ZhitingHu yeah... that state is actually used to handle the case where the training process is paused. I think we can add a parameter for resuming training from checkpoints; I'll make that change.
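
For illustration, a minimal sketch of the richer checkpoint discussed here (function and key names are hypothetical, not the example's actual code): besides the model weights, it persists the optimizer state and the training-loop state needed to resume cleanly.

```python
import torch

def save_checkpoint(path, model, optimizer, epoch, kl_weight, best_nll):
    torch.save({
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
        "epoch": epoch,
        "kl_weight": kl_weight,   # position in the KL annealing schedule
        "best_nll": best_nll,     # drives the stopping criterion
    }, path)

def load_checkpoint(path, model, optimizer):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["epoch"], ckpt["kl_weight"], ckpt["best_nll"]
```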

@ZhitingHu (Member) commented

Did you mean that the current code can reproduce the TF results if training is not paused in the middle?

@TomNong (Collaborator, Author) commented Aug 5, 2019

@ZhitingHu Yes; "Yahoo-Lstm" actually once reached a PPL of 68.95 in the middle of training, which is better than the current 75.21.

@ZhitingHu (Member) commented

Then report the results from a run without interrupted training.

@TomNong (Collaborator, Author) commented Aug 5, 2019

Got it.

```diff
@@ -42,7 +42,7 @@ Here `--model` specifies the saved model checkpoint, which is saved in `./models

 |Dataset |Metrics | VAE-LSTM |VAE-Transformer |
 |---------------|-------------|----------------|------------------------|
-|Yahoo | Test PPL<br>Test NLL | 75.21<br>336.41 |67.81<br>328.34|
+|Yahoo | Test PPL<br>Test NLL | 69.42<br>338.65 |67.81<br>328.34|
```
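
(As context for these numbers: test PPL and test NLL are two views of the same quantity, with perplexity being the exponentiated per-token negative log-likelihood. A quick sanity check, using the standard definition rather than the example's code:)

```python
import math

def perplexity(nll_per_sentence: float, tokens_per_sentence: float) -> float:
    # Perplexity = exp(NLL per token).
    return math.exp(nll_per_sentence / tokens_per_sentence)

# Assuming an average Yahoo document length of roughly 78 tokens (an
# assumption for illustration), the old row is self-consistent:
# exp(336.41 / 77.9) ≈ 75.1, close to the reported PPL of 75.21.
print(perplexity(336.41, 77.9))
```
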
A collaborator commented on this diff:

What about the Transformer results? Are those still running?

@TomNong (Collaborator, Author) commented Aug 8, 2019

This PR is not yet ready for review.

@TomNong (Collaborator, Author) commented Aug 12, 2019

This PR is ready for review now.

@huzecong (Collaborator) commented

Results now look reasonable to me. Let's merge this.
After we merge this, can you take a look at the Executor module and think about how we can migrate this example to using Executors?
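
(For reference, a rough sketch of what an Executor-based setup might look like. The constructor arguments below are assumptions based on the general design of texar.torch.run, not a verified migration of vae_train.py; model, train_data, and valid_data are placeholders for the objects the example constructs.)

```python
import torch
from texar.torch.run import Executor, cond, metric

def build_executor(model, train_data, valid_data, num_epochs: int = 20):
    # Hypothetical wiring: the Executor would take over logging, validation,
    # and checkpointing from the hand-written loop in vae_train.py.
    return Executor(
        model=model,
        train_data=train_data,
        valid_data=valid_data,
        optimizer={"type": torch.optim.Adam},
        log_every=cond.iteration(100),
        validate_every=cond.epoch(1),
        valid_metrics=[("loss", metric.Average())],
        save_every=cond.validation(better=True),
        checkpoint_dir="./models",
        stop_training_on=cond.epoch(num_epochs),
    )

# build_executor(model, train_data, valid_data).train()
```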

@huzecong merged commit 0261638 into asyml:master on Aug 12, 2019