
Description
I want to create a sentence embedding by summing the context-aware word embeddings and normalizing the sum. From which point (layer) in the transformer should I extract the context-aware word embeddings, and how does one go about doing that in tensor2tensor?
I have trained a transformer_base model to decode an email response based on an input email.
...
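For reference, here is a minimal sketch of the pooling step I have in mind, assuming I can already export the per-token encoder output as a `[seq_len, hidden]` array (the function and argument names below are hypothetical, not tensor2tensor APIs):

```python
import numpy as np

def sentence_embedding(token_embeddings, pad_mask):
    """Sum context-aware token vectors and L2-normalize the result.

    token_embeddings: [seq_len, hidden] array of per-token vectors taken
        from the encoder output (however it gets exported).
    pad_mask: [seq_len] array with 1.0 for real tokens, 0.0 for padding.
    """
    # Zero out padding positions before summing over the sequence axis.
    summed = (token_embeddings * pad_mask[:, None]).sum(axis=0)
    # Normalize to unit length (guard against an all-padding input).
    norm = np.linalg.norm(summed)
    return summed / norm if norm > 0 else summed
```

The open question is which tensor in the tensor2tensor transformer corresponds to `token_embeddings` here (e.g. the output of the final encoder layer vs. an earlier one), and how to pull it out at inference time.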