Rouge score accuracy #2
It has been improved with #6. I compared the two scorings on multi-sentence files with 10,397 lines and 508,630 words, and I get:
Maybe the difference is caused by line 92 in 8255cac: splitting by '.' removes all '.' characters from both hyp and ref.
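To illustrate the point above, here is a minimal sketch (not the library's actual code) of why splitting on '.' is lossy: the periods themselves are discarded, so the rejoined text no longer matches what the official script tokenizes.

```python
# Hypothetical example text; splitting on '.' silently drops every period,
# so sentence-boundary information is lost when the pieces are rejoined.
text = "the cat sat. the dog ran."

sentences = text.split('.')  # ['the cat sat', ' the dog ran', '']
rejoined = ' '.join(s.strip() for s in sentences if s.strip())
print(rejoined)  # 'the cat sat the dog ran' -- the periods are gone
```

Any n-gram that originally spanned or included a '.' token now counts differently, which can shift the final scores.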
@shijx12 It's not the only reason, but you've got a good point: that code does not make sense. I'm editing it and evaluating the impact. Thanks for pointing this out.
Hi @pltrdy, could you run some evaluations to compare the differences between the Perl script and yours? How much do they differ? I would love to get rid of the Perl script! https://github.com/RxNLP/ROUGE-2.0 seems to have identical scores (besides a +1 smoothing they did not implement, because no indication of it was present in the official ROUGE script).
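As a rough sketch of the "+1" mentioned above: add-one smoothing keeps a score nonzero when there is no n-gram overlap. The helper below is hypothetical (the exact placement of the +1 in ROUGE-2.0 is an assumption, not taken from their code), shown here on ROUGE-2 recall only.

```python
from collections import Counter

def bigrams(tokens):
    # Consecutive token pairs, e.g. ['a','b','c'] -> [('a','b'), ('b','c')]
    return list(zip(tokens, tokens[1:]))

def rouge2_recall(hyp_tokens, ref_tokens, smooth=False):
    # Hypothetical helper: clipped bigram overlap divided by reference
    # bigram count, with optional add-one smoothing on both numerator
    # and denominator (one plausible reading of the "+1" above).
    hyp_counts = Counter(bigrams(hyp_tokens))
    ref_counts = Counter(bigrams(ref_tokens))
    overlap = sum(min(c, hyp_counts[g]) for g, c in ref_counts.items())
    total = sum(ref_counts.values())
    if smooth:
        return (overlap + 1) / (total + 1)
    return overlap / total if total else 0.0

print(rouge2_recall('the cat sat'.split(), 'the cat ran'.split()))
```

With smoothing enabled, every score shifts slightly upward, which by itself would explain a small constant offset between implementations.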
@Diego999 That's precisely what I did here: #2 (comment).
@pltrdy Yes, but that was in February; some modifications have been made since ;) especially regarding the remark in #2 (comment). Have you re-run the experiments since then?
It should be similar, if not exactly the same. I'm not sure how punctuation is handled in the official script. I've attempted some fixes, which seem to make things worse. Punctuation may simply be ignored, in which case the naïve implementation may be the right one.
Ok, thank you for your answer!
Is it documented somewhere that ROUGE-2.0 has identical scores?
@AlJohri Yes, see the last paragraph of their paper.
By the way, I solved this problem here: https://github.com/Diego999/py-rouge. Have a look at the README to understand when the results sometimes differ by ~4e-5.
That's great to hear, @Diego999! Are you planning on releasing this as an independent package or merging it back into pltrdy/rouge?
It’s already done :)
The results are known to differ noticeably from the official ROUGE scoring script. This has been discussed in google/seq2seq#89.