Missing normalization based on phrase length? #7

Open
akshaylive opened this issue Nov 28, 2022 · 1 comment

Comments

@akshaylive

According to the paper (section 2.2), constituent (non-terminal) representations are taken to be the average of the token representations within the phrase.

The code, however, does a batch matrix multiplication and therefore computes the sum of the hidden token representations. This may affect both the magnitude and the direction of the phrase-level representation after the activation is applied.
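
To make the difference concrete, here is a minimal PyTorch sketch; the shapes, variable names, and mask construction are hypothetical and not taken from this repository's code:

```python
import torch

torch.manual_seed(0)

batch, seq_len, dim, n_phrases = 2, 5, 8, 3

# Hypothetical token representations and a 0/1 phrase-membership mask.
hidden = torch.randn(batch, seq_len, dim)
mask = (torch.rand(batch, n_phrases, seq_len) > 0.5).float()

# A bare batch matmul yields the SUM of the token vectors in each phrase.
summed = torch.bmm(mask, hidden)                       # (batch, n_phrases, dim)

# The average described in the paper divides by the phrase length.
lengths = mask.sum(dim=-1, keepdim=True).clamp(min=1)  # guard against empty phrases
averaged = summed / lengths

# After a nonlinearity such as tanh, the two differ in direction, not just scale.
print(torch.allclose(torch.tanh(summed), torch.tanh(averaged)))  # False in general
```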

Am I missing something?

@akshaylive
Author

I just realized that this question has been asked before here. It would be good to add a disclaimer about this to the README.
