
How to train with Coverage? #56

Closed
XuemingQiu opened this issue Jan 14, 2021 · 1 comment

Comments

@XuemingQiu

When I finished 500k iterations of training the pointer-generator (PG) model with coverage = False, the ROUGE scores were higher than the ones you provided!

```
ROUGE-1:
rouge_1_f_score: 0.3682 with confidence interval (0.3660, 0.3704)
rouge_1_recall: 0.5194 with confidence interval (0.5166, 0.5225)
rouge_1_precision: 0.2983 with confidence interval (0.2961, 0.3004)

ROUGE-2:
rouge_2_f_score: 0.1588 with confidence interval (0.1568, 0.1608)
rouge_2_recall: 0.2248 with confidence interval (0.2220, 0.2277)
rouge_2_precision: 0.1286 with confidence interval (0.1268, 0.1304)

ROUGE-l:
rouge_l_f_score: 0.3390 with confidence interval (0.3369, 0.3412)
rouge_l_recall: 0.4786 with confidence interval (0.4758, 0.4814)
rouge_l_precision: 0.2746 with confidence interval (0.2724, 0.2766)
```

But when I continued training from the last PG checkpoint with coverage = True for another 25k iterations, why are the ROUGE scores so low?

```
ROUGE-1:
rouge_1_f_score: 0.3670 with confidence interval (0.3650, 0.3689)
rouge_1_recall: 0.5149 with confidence interval (0.5123, 0.5174)
rouge_1_precision: 0.2983 with confidence interval (0.2962, 0.3004)

ROUGE-2:
rouge_2_f_score: 0.1510 with confidence interval (0.1490, 0.1528)
rouge_2_recall: 0.2120 with confidence interval (0.2093, 0.2146)
rouge_2_precision: 0.1229 with confidence interval (0.1213, 0.1246)

ROUGE-l:
rouge_l_f_score: 0.3370 with confidence interval (0.3350, 0.3388)
rouge_l_recall: 0.4729 with confidence interval (0.4704, 0.4754)
rouge_l_precision: 0.2738 with confidence interval (0.2718, 0.2758)
```

Did I do something wrong when training with coverage?
I checked the outputs: the coverage model's output has no repetition while the PG model's does, so why is the ROUGE score still lower?
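For context, here is a minimal sketch of the coverage term from See et al. (2017) that is typically added on top of the PG loss when coverage is switched on; the names below (`step_loss_with_coverage`, `cov_loss_wt`, `attn_dist`) are illustrative and may not match this repo exactly:

```python
import torch

def step_loss_with_coverage(log_probs, target_ids, attn_dist, coverage, cov_loss_wt=1.0):
    """One decoder step: PG negative log-likelihood plus the coverage penalty.

    log_probs : (batch, ext_vocab) log-probabilities over the extended vocabulary
    target_ids: (batch,)           gold token ids for this step
    attn_dist : (batch, src_len)   attention distribution a_t
    coverage  : (batch, src_len)   sum of previous attention distributions, c_t
    """
    # Usual PG loss term: negative log-likelihood of the gold token.
    nll = -log_probs.gather(1, target_ids.unsqueeze(1)).squeeze(1)

    # Coverage loss (See et al., 2017): sum_i min(a_t_i, c_t_i).
    cov_loss = torch.sum(torch.min(attn_dist, coverage), dim=1)

    # Coverage vector carried into the next decoder step.
    next_coverage = coverage + attn_dist

    return nll + cov_loss_wt * cov_loss, next_coverage
```

For reference, the original paper runs this coverage phase for only about 3k additional iterations with a weight of 1.0.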

@Wenjun-Peng

Did you figure out what caused it? I trained the model for 500k iterations with coverage = False, then turned on coverage for 200k iterations, and got these results:

```
{'rouge1_fmeasure': tensor(0.3744),
'rouge1_precision': tensor(0.3811),
'rouge1_recall': tensor(0.3919),

'rouge2_fmeasure': tensor(0.1642),
'rouge2_precision': tensor(0.1683),
'rouge2_recall': tensor(0.1711),

'rougeL_fmeasure': tensor(0.2603),
'rougeL_precision': tensor(0.2646),
'rougeL_recall': tensor(0.2729),

'rougeLsum_fmeasure': tensor(0.2603),
'rougeLsum_precision': tensor(0.2646),
'rougeLsum_recall': tensor(0.2729)}
```

By the way, I used torchmetrics to measure the results instead of pyrouge, but I don't think that would cause inconsistent results.
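For reference, a minimal torchmetrics snippet (assuming a reasonably recent torchmetrics version) that produces keys like the ones above; the example strings are made up:

```python
from torchmetrics.text.rouge import ROUGEScore

# Decoded summaries and their references as plain strings.
preds = ["the cat sat on the mat ."]
targets = ["a cat was sitting on the mat ."]

rouge = ROUGEScore()  # defaults to rouge1, rouge2, rougeL and rougeLsum
scores = rouge(preds, targets)

# Returns a dict of tensors, e.g. {'rouge1_fmeasure': tensor(...), ...}.
# Note: rougeLsum is summary-level (sentences split on newlines), while
# rougeL is a single LCS over the whole text.
print(scores["rouge1_fmeasure"], scores["rougeL_fmeasure"])
```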
