I am currently trying to reproduce the results reported in Table 3 of the research paper. So far, I have tried using "bert-base-multilingual-cased/uncased" as the pre-trained model for the French dataset (9a). However, the results on the test set do not match those reported in the paper. I am not sure whether I am using the correct pre-trained model and Python packages; a few package versions pinned in requirements.txt seem outdated and cause the training/inference scripts to break.
Could you please provide some guidance on how I should go about this?
Thanks in advance.
Yes, I have fine-tuned the BERT base uncased model from Hugging Face. The results obtained on the French dataset (training size = 16) are as follows:
Test f1 = 0.43418500716755204
Test acc = 0.5491803278688525
These values are quite different from the results reported in the paper.
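For comparison purposes, it may help to confirm we are computing the metrics the same way. A minimal sketch using scikit-learn, assuming macro-averaged F1 (the paper may use a different averaging, e.g. weighted); the label arrays here are dummy placeholders, not the actual French (9a) test set:

```python
from sklearn.metrics import accuracy_score, f1_score

# Dummy gold labels and predictions standing in for the real test set.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)            # fraction of exact matches
f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1

print(f"Test acc = {acc}")
print(f"Test f1 = {f1}")
```

If the paper reports weighted or micro F1 instead, swapping `average="macro"` for `"weighted"` or `"micro"` can shift the score noticeably, which alone could explain part of the gap.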