Prerequisites
Please answer the following questions for yourself before submitting an issue.
1. The entire URL of the file you are using
https://github.com/tensorflow/models/tree/master/research/seq_flow_lite
2. Describe the feature you request
Would it be possible to document or provide the process for pre-training ByteQRNN on multilingual data? Additionally, are there any plans to release model checkpoints?
I read the blog post here, which mentions that pre-trained and fine-tuned ByteQRNN models were evaluated against BERT on the civil_comments dataset, and that the models were pre-trained on multilingual data. I would be interested in reproducing this, and also in using the pre-trained checkpoints for transfer learning on my own multilingual data (classification + NER).
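For context on why this matters for multilingual work: as I understand it, ByteQRNN consumes raw UTF-8 bytes rather than a learned subword vocabulary, which is what makes it language-agnostic. A minimal sketch of the kind of input preparation I assume is involved (the function name, padding value, and max length below are my own assumptions for illustration, not seq_flow_lite's actual API):

```python
def text_to_byte_ids(text: str, max_len: int = 64) -> list[int]:
    """Encode text as padded UTF-8 byte IDs.

    Hypothetical preprocessing sketch; the real seq_flow_lite
    pipeline may use a different padding scheme and sequence length.
    """
    byte_ids = list(text.encode("utf-8"))[:max_len]
    # Pad with 0 so every example has the same fixed length.
    return byte_ids + [0] * (max_len - len(byte_ids))

# Works uniformly across languages -- no tokenizer vocabulary needed.
print(text_to_byte_ids("hola", 8))  # [104, 111, 108, 97, 0, 0, 0, 0]
```

The appeal for my use case is exactly this: one input pipeline for every language, so pre-trained checkpoints would transfer directly.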
3. Additional context
N/A
4. Are you willing to contribute it? (Yes or No)
Yes