TaBERT #5530
Comments
Their README says the implementation for this project is still WIP, but that note dates from the initial release.
Looking forward to this new model.
Hello all! I looked at the list of models on the Transformers site and TaBERT is still not listed. Does anyone know when it is going to be ready?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
🌟 New model addition
Model description
TaBERT is a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. It is pre-trained on a massive corpus of 26M web tables and their associated natural language context, and can be used as a drop-in replacement for a semantic parser's original encoder to compute representations for utterances and table schemas (columns).
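For reference, a minimal sketch of how the encoder can be used standalone, based on the usage documented in the linked repo (assumes the `table_bert` package is installed and a pretrained checkpoint has been downloaded; the checkpoint path below is a placeholder and exact names may differ):

```python
from table_bert import TableBertModel, Table, Column

# Load a pretrained TaBERT checkpoint (placeholder path).
model = TableBertModel.from_pretrained('model/tabert_base_k3/model.bin')

# Describe a table by its schema (column name, type, sample value) and rows.
table = Table(
    id='List of countries by GDP (PPP)',
    header=[
        Column('Nation', 'text', sample_value='United States'),
        Column('Gross Domestic Product', 'real', sample_value='21,439,453'),
    ],
    data=[
        ['United States', '21,439,453'],
        ['China', '27,308,857'],
    ],
).tokenize(model.tokenizer)

context = 'show me countries ranked by GDP'

# Jointly encode the utterance and the table; inputs are batched.
context_encoding, column_encoding, info_dict = model.encode(
    contexts=[model.tokenizer.tokenize(context)],
    tables=[table],
)
```

The returned context and column encodings are what a semantic parser would consume in place of its original utterance/schema encoder.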
Open source status
https://github.com/facebookresearch/TaBERT