two bugs #5

Hello, I am very interested in this work, but I found two bugs when running the code.

Comments
Hi, thanks for pointing that out. The in_attn parameter was used for an experiment that changed the encoder architecture. The attempt didn't succeed, so we didn't use it for the paper. I've deleted the parameter and the alternate way of calling self.transformer_encoder. Please pull the new version, thanks!
Thanks for your reply, but the call to self.transformer_encoder(pos_emb, attn_mask) in workspace/transformer/models.py still has a bug. The line

h, layer_outputs = self.transformer_encoder(pos_emb, attn_mask)  # y: b x s x d_model

fails because self.transformer_encoder() returns only one output, not two. Also, which version of fast_transformers are you using?
The additional output also came from my modified fast-transformer, so I've deleted it as well; please pull the newest version again. Thanks for letting me know, and apologies for the inconvenience!
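For anyone hitting the same unpacking error before pulling the fix, here is a minimal sketch of the corrected single-output call, assuming the stock pytorch-fast-transformers encoder rather than the author's modified fork. The builder hyperparameters and tensor shapes below are illustrative, not the paper's actual settings:

```python
import torch
from fast_transformers.builders import TransformerEncoderBuilder
from fast_transformers.masking import TriangularCausalMask

# Build a small causal linear-attention encoder (illustrative hyperparameters).
# d_model = n_heads * query_dimensions = 8 * 64 = 512.
encoder = TransformerEncoderBuilder.from_kwargs(
    n_layers=2,
    n_heads=8,
    query_dimensions=64,
    value_dimensions=64,
    feed_forward_dimensions=2048,
    attention_type="causal-linear",
).get()

pos_emb = torch.randn(4, 128, 512)     # b x s x d_model
attn_mask = TriangularCausalMask(128)  # causal mask over the sequence length

# The stock library returns ONE tensor, so unpack a single value:
h = encoder(pos_emb, attn_mask=attn_mask)  # b x s x d_model
```

With the unmodified library there are no per-layer outputs to unpack, so assigning to a single variable is the safe pattern.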