
Add dynamic_decode and Fix decoder issue #208

Merged
3 commits merged into asyml:master from the decoder-issue branch on Sep 21, 2019

Conversation

gpengzhi (Collaborator) commented:

Resolve #199

@gpengzhi gpengzhi force-pushed the decoder-issue branch 2 times, most recently from 54427fc to 51b37dc on September 13, 2019 at 14:28
ZhitingHu (Member) left a comment:

beam_search_decode, which relies on BeamSearchDecoder, is not compatible with the new dynamic_decode. Can you update it to use beam_search in texar.utils, and add a standard beam-search mode to beam_search (making sure it gives the same results as BeamSearchDecoder)? Then beam_search_decode should be deleted.

This PR will have a huge impact. Let's make sure it's bug-free by reproducing the example results in at least transformer/, seq2seq_atten, and text_style_transfer.
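For context, a rough sketch of what a call to the standalone beam-search utility could look like. This is not code from this PR; the module path texar.tf.utils.beam_search and the tensor2tensor-style signature below are assumptions and should be checked against the repository:

```python
import tensorflow as tf
from texar.tf.utils.beam_search import beam_search  # assumed module path

batch_size, vocab_size, beam_width, max_len = 2, 10, 4, 8

def symbols_to_logits_fn(ids):
    # Toy scoring function: uniform logits over the vocabulary for every
    # beam entry. A real decoder would run its network on the latest ids.
    batch_beam = tf.shape(ids)[0]
    return tf.zeros(tf.stack([batch_beam, vocab_size]))

initial_ids = tf.zeros([batch_size], dtype=tf.int32)  # e.g. BOS token ids

# Assumed tensor2tensor-style signature; verify against texar/tf/utils/beam_search.py.
decoded_ids, scores = beam_search(
    symbols_to_logits_fn,
    initial_ids,
    beam_size=beam_width,
    decode_length=max_len,
    vocab_size=vocab_size,
    alpha=0.6,   # length-penalty strength
    eos_id=1)
# decoded_ids: [batch_size, beam_width, <= decode_length + 1]
```

Reproducing the transformer/, seq2seq_atten, and text_style_transfer examples should then confirm that this path matches BeamSearchDecoder's outputs.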

ZhitingHu (Member) left a comment:

Please merge once the comments are addressed.

texar/tf/utils/dynamic_decode.py: outdated review thread (resolved)
texar/tf/utils/dynamic_decode.py: outdated review thread (resolved)
@gpengzhi gpengzhi merged commit b71ad08 into asyml:master Sep 21, 2019
@gpengzhi gpengzhi deleted the decoder-issue branch October 31, 2019 17:49
@@ -41,6 +42,8 @@ def _merge_beam_dim(tensor):
     Returns:
         Reshaped tensor of shape [A*B, ...]
     """
+    if not isinstance(tensor, tf.Tensor):
+        return tensor
ZhitingHu (Member) commented:

Why return tensor directly here instead of converting it into a tf.Tensor and continuing with the subsequent operations?

gpengzhi (Collaborator, Author) commented:

When we implement beam search decoding in AttentionRNNDecoder using this function, the state passed to it is an AttentionWrapperState, which contains an int-typed attribute time. This check ensures that only the appropriate (tensor) attributes in state are reshaped. We have a similar check in texar-pytorch.
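To illustrate the point (a minimal sketch, not code from this PR; merge_beam_dim below is a simplified stand-in for the actual helper, and the toy dict stands in for AttentionWrapperState): mapping the reshape helper over a nested decoder state with nest.map_structure visits every leaf, including non-Tensor fields such as the integer time step.

```python
import tensorflow as tf

def merge_beam_dim(tensor):
    # Simplified stand-in for _merge_beam_dim: reshape [A, B, ...] -> [A*B, ...].
    # Non-Tensor leaves (e.g. the int `time` field of AttentionWrapperState)
    # are returned unchanged.
    if not isinstance(tensor, tf.Tensor):
        return tensor
    shape = tf.shape(tensor)
    return tf.reshape(tensor, tf.concat([[shape[0] * shape[1]], shape[2:]], axis=0))

# A toy nested state mixing Tensors with a plain int, similar in spirit to
# AttentionWrapperState(time=..., cell_state=..., attention=...).
state = {
    "time": 0,                            # plain int, not a tf.Tensor
    "cell_state": tf.zeros([2, 4, 16]),   # [batch, beam, units]
    "attention": tf.zeros([2, 4, 32]),
}

# map_structure applies the helper to every leaf of the nested state (TF 1.x).
merged = tf.contrib.framework.nest.map_structure(merge_beam_dim, state)
# merged["cell_state"] has shape [8, 16]; merged["time"] is still the int 0.
```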

ZhitingHu (Member) commented:

Thanks for explaining. Can you add brief comments to make the code more readable? E.g., "if tensor is not tf.Tensor, then tensor is xxx, return directly".
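One possible version of the requested comment (just a sketch of how the note could read; the actual wording committed in the PR may differ):

```python
if not isinstance(tensor, tf.Tensor):
    # `tensor` is a non-Tensor leaf of the decoder state (e.g. the int `time`
    # attribute of AttentionWrapperState), so there is nothing to reshape;
    # return it unchanged.
    return tensor
```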

Successfully merging this pull request may close these issues:

Helper issue in seq2seq_exposure_bias