What is the range of the scores computed by both methods? I saw in another post that the evaluation score from SummaC-ZS is in [-1, 1] or [0, 1] depending on the `use_con` argument. What about the range of SummaC-Conv? What do the score values mean? Are they the probability of the summary being derived from the reference document?
I think the output of SummaC-Conv is the probability from a binary classification model, ranging from 0 to 1.
It roughly tells us the chance that the summary is consistent with the provided document.
I believe the authors suggest using balanced accuracy (https://scikit-learn.org/1.5/modules/generated/sklearn.metrics.balanced_accuracy_score.html) over all data points to determine the overall accuracy of the model, instead of looking at a particular data point in isolation.
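To illustrate, here is a minimal sketch of how thresholded scores feed into balanced accuracy (the scores, labels, and the 0.5 threshold below are hypothetical, not from the SummaC paper). Balanced accuracy is just the mean of per-class recall, so a pure-Python version matches sklearn's `balanced_accuracy_score`:

```python
# Balanced accuracy = mean of per-class recall.
# This mirrors sklearn.metrics.balanced_accuracy_score without the dependency.
def balanced_accuracy(y_true, y_pred):
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]     # items of class c
        correct = sum(1 for i in idx if y_pred[i] == c)       # correctly predicted
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# Hypothetical SummaC-Conv scores in [0, 1] and gold consistency labels
# (1 = consistent, 0 = inconsistent); threshold choice is up to the user.
scores = [0.92, 0.15, 0.48, 0.77]
labels = [1, 0, 1, 1]
preds = [1 if s >= 0.5 else 0 for s in scores]

print(balanced_accuracy(labels, preds))  # 0.8333... (recall 2/3 for class 1, 1.0 for class 0)
```

Evaluating this way avoids a majority class (e.g. mostly consistent summaries) inflating plain accuracy.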