Hello,
Thank you for the great work!
I am trying to reproduce the BERTScore numbers in Table 2, block 2 of your paper, where you evaluate different metrics against a single reference.
I assume that you are using the first reference in the list of references that you provide in the processed files.
So my code looks something like this:
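A minimal sketch of the extraction I mean, assuming the processed files are JSON lists of examples (the `candidate` and `references` key names are my placeholders, not necessarily the ones in your release):

```python
import json

def write_first_refs(processed_path, refs_path, cands_path):
    """Dump each example's candidate and *first* reference to plain-text
    files, one sentence per line (file layout is an assumption)."""
    with open(processed_path, encoding="utf-8") as f:
        examples = json.load(f)
    with open(refs_path, "w", encoding="utf-8") as rf, \
         open(cands_path, "w", encoding="utf-8") as cf:
        for ex in examples:
            # "candidate" / "references" are placeholder keys
            rf.write(ex["references"][0].strip() + "\n")
            cf.write(ex["candidate"].strip() + "\n")
```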
Then I'm running bertscore after saving the references and the candidates in files using:
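Concretely, the command is along these lines (the file names are mine, and I varied the model and `--idf` flags across runs):

```shell
# one of the configurations I tried; refs.txt / cands.txt hold one sentence per line
bert-score -r refs.txt -c cands.txt --lang en -m bert-base-uncased --idf
```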
I ran it with and without `--idf`, and with both `bert-base-uncased` and `roberta-large`. In all cases I obtain values different from those in the JSON file, which I load as follows:
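The loading itself is just this (the path is a placeholder for the released score file, whose exact structure I'm assuming is plain JSON):

```python
import json

def load_published_scores(path):
    """Read the released BERTScore results from a JSON file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```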
Could you please tell me the exact options you used to compute BERTScore?