
Creating an instance of BasicDecoder fails with AttentionMechanism without memory #511

Closed · Fixed by #513
kazemnejad opened this issue on Sep 15, 2019 · 2 comments
Labels: bug, seq2seq

Comments

@kazemnejad
Contributor

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab
  • TensorFlow version and how it was installed (source or binary): 2.0.0-dev20190914
  • TensorFlow-Addons version and how it was installed (source or binary): 0.6.0-dev
  • Python version: 3.6
  • Is GPU used? (yes/no): Yes

Describe the bug

When creating a BasicDecoder with an AttentionWrapper cell whose AttentionMechanism was constructed without a memory, an error is raised.

Code to reproduce the issue

import tensorflow as tf
import tensorflow_addons as tfa

units = 32
vocab_size = 1000

# The attention mechanism is created without a memory.
attention_mechanism = tfa.seq2seq.LuongAttention(units)
cell = tf.keras.layers.LSTMCell(units)
attention_wrapper = tfa.seq2seq.AttentionWrapper(
    cell, attention_mechanism)

vocab_proj_layer = tf.keras.layers.Dense(vocab_size)
decoder_sampler = tfa.seq2seq.sampler.TrainingSampler()

# Raises the ValueError shown below.
decoder = tfa.seq2seq.BasicDecoder(
    cell=attention_wrapper,
    sampler=decoder_sampler,
    output_layer=vocab_proj_layer)

Other info / logs

ValueError: The AttentionMechanism instances passed to this AttentionWrapper should be initialized with a memory first, either by passing it to the AttentionMechanism constructor or calling attention_mechanism.setup_memory()

Full trace

The cause of this error is probably the rnn_cell_impl.assert_like_rnncell("cell", cell) check in BasicDecoder's constructor. That assertion ends up evaluating AttentionWrapper.output_size or AttentionWrapper.state_size, which is where the error is raised.
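
For reference, the error does not occur when the memory is set before building the decoder, either in the constructor or via setup_memory(). A minimal sketch reusing the objects from the snippet above; the encoder_outputs tensor is a dummy added here purely for illustration:

# Dummy memory tensor, added only for illustration: [batch, time, depth].
encoder_outputs = tf.random.uniform([8, 10, units])

# Pass the memory at construction time...
attention_mechanism = tfa.seq2seq.LuongAttention(units, memory=encoder_outputs)
# ...or set it after construction:
# attention_mechanism.setup_memory(encoder_outputs)

attention_wrapper = tfa.seq2seq.AttentionWrapper(cell, attention_mechanism)
decoder = tfa.seq2seq.BasicDecoder(
    cell=attention_wrapper,
    sampler=decoder_sampler,
    output_layer=vocab_proj_layer)  # no ValueError this time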

This issue is probably related to #461

I encountered this issue when I was working on #335.

@guillaumekln added the bug and seq2seq labels on Sep 15, 2019
@guillaumekln
Contributor

Thanks for the report!

It appears assert_like_rnncell reimplements its own hasattr in terms of getattr: tensorflow/tensorflow@d70e8ee. This was presumably done to support cells overriding __getattribute__, but it's unclear how common that use case is.

As it is a private TensorFlow API, I would suggest implementing an alternative that works for us. What do you think?
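
For illustration, one option would be to look the attributes up on the class, so that properties such as AttentionWrapper.output_size are not evaluated during the check. A rough sketch of such a helper (hypothetical, not necessarily what #513 ended up implementing):

def _assert_like_rnncell(cell_name, cell):
    # Hypothetical addons-side check: use hasattr on the class, so that
    # instance properties (output_size, state_size) are never evaluated.
    conditions = {
        "output_size": hasattr(type(cell), "output_size"),
        "state_size": hasattr(type(cell), "state_size"),
        "get_initial_state/zero_state": (
            hasattr(type(cell), "get_initial_state")
            or hasattr(type(cell), "zero_state")),
        "callable": callable(cell),
    }
    missing = [name for name, ok in conditions.items() if not ok]
    if missing:
        raise TypeError(
            "The argument %r (%s) is not an RNNCell: missing %s."
            % (cell_name, cell, ", ".join(missing)))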

@kazemnejad
Contributor · Author

Yes, I think that would work fine. However, it would restrict the types of RNN cells we can accept, so we should probably document this somewhere in the code to inform future users.
In addition, maybe we should warn users not to use .output_size/.state_size before the memory is initialized, and double-check our own code to make sure we wait until the memory is set.
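
As a purely hypothetical illustration of that kind of guard (not actual tfa code), a size property could fail with an explicit message until the memory is set:

class _MemoryGuardedSizes:
    # Hypothetical example only: refuse to report sizes until setup_memory()
    # has been called, instead of failing deep inside an unrelated check.
    def __init__(self):
        self._memory = None

    def setup_memory(self, memory):
        self._memory = memory

    @property
    def output_size(self):
        if self._memory is None:
            raise ValueError(
                "output_size is undefined until setup_memory() has been called.")
        return self._memory.shape[-1]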
