
include zeroth-level embeddings in extract_features #6

Closed
jonathanbratt opened this issue Sep 7, 2019 · 2 comments
jonathanbratt (Owner)

For completeness, it would be good to return the bare token embeddings (before any transformer layers) along with the per-layer outputs.

@jonathanbratt jonathanbratt self-assigned this Sep 7, 2019
@ghost ghost closed this as completed Sep 9, 2019
leungi commented Oct 1, 2019

If I understand this post correctly: in most BERT-related articles, when 12 layers are mentioned for the uncased model, does that correspond to layer_output_1 through layer_output_12 in the output of RBERT::extract_features()?

jonathanbratt (Owner, Author)

That is correct. The layer_output_0 element in the RBERT output corresponds to the vectors fed into the first transformer layer, i.e. the token embeddings before any transformer layers have been applied.
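For context, the layer indexing above can be sketched as follows. This is a hedged sketch, not a definitive usage example: the argument names (`examples`, `ckpt_dir`, `layer_indexes`), the `$output` element, the `layer_index` column, and the checkpoint path are assumptions based on recollection of the RBERT README and may not match the current API exactly.

```r
library(RBERT)

# Placeholder path to a downloaded pre-trained BERT checkpoint (assumption).
BERT_PRETRAINED_DIR <- "path/to/uncased_L-12_H-768_A-12"

feats <- extract_features(
  examples = c("Some example sentence."),
  ckpt_dir = BERT_PRETRAINED_DIR,
  layer_indexes = 0:12  # 0 = zeroth-level embeddings, 1:12 = transformer layers
)

# Embeddings are tagged by layer: layer_index 0 holds the bare token
# embeddings before any transformer layer; 1 through 12 hold the outputs
# of the 12 transformer layers of the uncased base model.
embeddings <- feats$output
zeroth_level <- embeddings[embeddings$layer_index == 0, ]
```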
