This repository has been archived by the owner on Jul 7, 2023. It is now read-only.

Commit

Deprecate AttentionLM in favor of Transformer
PiperOrigin-RevId: 208913126
Ryan Sepassi authored and Copybara-Service committed Aug 16, 2018
1 parent 6c8bec5 commit a705063
Showing 1 changed file with 6 additions and 0 deletions: tensor2tensor/models/research/attention_lm.py
@@ -14,6 +14,8 @@
 # limitations under the License.
 """Self-attention based language model.
 
+DEPRECATED. Use Transformer which supports running the decoder only.
+
 Like transformer.py, but no encoder
 decoder: [Self-Attention, Feed-forward] x n
@@ -34,6 +36,10 @@
 import tensorflow as tf
 
 
+@tf.contrib.framework.deprecated(
+    "2018-09-15",
+    "Use Transformer, which supports decoder-only mode when "
+    "Transformer.has_input=False.")
 @registry.register_model
 class AttentionLM(t2t_model.T2TModel):
   """Attention net. See file docstring."""
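The `tf.contrib.framework.deprecated(date, instructions)` decorator added in this commit logs a warning whenever the wrapped model is used, steering callers toward Transformer. As a rough sketch of that pattern (a simplified, hypothetical `deprecated` helper, not the TensorFlow implementation), it looks like:

```python
import functools
import warnings


def deprecated(date, instructions):
    """Return a decorator that warns when the wrapped callable is invoked.

    `date` and `instructions` mirror the arguments of
    tf.contrib.framework.deprecated; this is a minimal sketch, not the
    TensorFlow implementation (which logs via tf.logging instead).
    """
    def decorator(func_or_class):
        @functools.wraps(func_or_class)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated and will be removed after %s. %s"
                % (func_or_class.__name__, date, instructions),
                DeprecationWarning,
                stacklevel=2)
            return func_or_class(*args, **kwargs)
        return wrapper
    return decorator


# Hypothetical stand-in for the registered model class in the diff.
@deprecated("2018-09-15", "Use Transformer instead.")
class AttentionLM(object):
    pass
```

Note that wrapping a class this way replaces it with a factory function, which is fine for construction-time warnings but breaks `isinstance` checks; the real TensorFlow decorator is more careful about this.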
