
Improving Update Summarization via Supervised ILP and Sentence Reranking, Li et al. NAACL’15 #31

AkihikoWatanabe opened this issue Dec 28, 2017 · 1 comment

Comments

@AkihikoWatanabe
Copy link
Owner

http://www.aclweb.org/anthology/N15-1145

・Formulates update summarization as an ILP. On top of the salience component in the term weighting of a standard MDS ILP, a novelty component is added. Bigrams are used as the terms; bigrams are known to work well for update summarization. The term weights are learned with an averaged perceptron.
・After the ILP yields candidate sentences, they are reranked with an SVR. The SVR is trained against ROUGE-2 as the loss function.
・The reranking features are the sentence-level features used during term weighting.
・Reranking improves the ROUGE-2 score, matching or exceeding the best TAC 2010 and 2011 systems. Adding the novelty features brings a further improvement.
・The novelty features are as follows:

Bigram Level
 -DF of the bigram in the old dataset
 -bigram novelty value (the bigram's DF in the new dataset, divided by the sum of its DF in the old dataset and the maximum DF)
 -bigram uniqueness value (0 for bigrams that appeared in the old dataset; otherwise the DF in the new dataset divided by the maximum DF)
Sentence Level
 -sentence similarity with the summary of the old dataset
 -interpolated n-gram novelty (interpolation of the n-gram novelty values)
 -interpolated n-gram uniqueness (interpolation of the n-gram uniqueness values)
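The bigram-level values above can be sketched as follows (a minimal illustration, not the paper's code; the function names, the toy documents, and the choice to take the maximum DF over the new dataset are all assumptions):

```python
from collections import Counter

def doc_freq(docs):
    """Document frequency of each bigram over a list of tokenized documents."""
    df = Counter()
    for doc in docs:
        df.update(set(zip(doc, doc[1:])))
    return df

def novelty_value(bg, df_new, df_old, max_df):
    # DF in the new dataset divided by (DF in the old dataset + maximum DF)
    return df_new.get(bg, 0) / (df_old.get(bg, 0) + max_df)

def uniqueness_value(bg, df_new, df_old, max_df):
    # 0 if the bigram occurred in the old dataset,
    # otherwise its DF in the new dataset divided by the maximum DF
    return 0.0 if df_old.get(bg, 0) > 0 else df_new.get(bg, 0) / max_df

def interpolated(values, lambdas):
    # linear interpolation of per-order n-gram values (sentence-level features)
    return sum(l * v for l, v in zip(lambdas, values))

# toy example
old_docs = [["the", "cat", "sat"], ["the", "cat", "ran"]]
new_docs = [["the", "dog", "sat"], ["the", "cat", "sat"]]
df_old, df_new = doc_freq(old_docs), doc_freq(new_docs)
max_df = max(df_new.values())  # assumption: maximum taken over the new dataset
```

Here a bigram like ("the", "dog") that never occurred in the old dataset gets a high novelty and uniqueness value, while ("the", "cat") is zeroed out by the uniqueness feature.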

・Looking at the TAC 2011 evaluation numbers, the method obtains a considerably higher ROUGE-2 score than Wan et al.'s method.
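The concept-coverage objective behind the ILP step (maximize the total weight of covered bigrams under a length budget) can be illustrated with a brute-force solver; a real system would use an actual ILP solver, and everything here is a toy sketch:

```python
from itertools import combinations

def best_summary(sentences, weights, budget):
    """Exhaustively solve the concept-coverage objective:
    pick a subset of tokenized sentences whose total token count fits the
    budget and whose covered bigrams have maximum total weight.
    (Brute force is exponential; shown only to make the objective concrete.)"""
    best, best_score = [], float("-inf")
    for r in range(len(sentences) + 1):
        for subset in combinations(range(len(sentences)), r):
            length = sum(len(sentences[i]) for i in subset)
            if length > budget:
                continue
            covered = set()
            for i in subset:
                covered |= set(zip(sentences[i], sentences[i][1:]))
            # each covered bigram counts once, however many sentences contain it
            score = sum(weights.get(bg, 0.0) for bg in covered)
            if score > best_score:
                best, best_score = list(subset), score
    return best, best_score

# toy example: weights stand in for the learned salience + novelty weights
sentences = [["a", "b", "c"], ["c", "d"], ["a", "b"]]
weights = {("a", "b"): 2.0, ("b", "c"): 1.0, ("c", "d"): 3.0}
chosen, score = best_summary(sentences, weights, budget=5)
```

Note that the third sentence adds no new bigrams beyond the first, so the solver prefers the first two, which cover all three weighted bigrams within the budget.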
