
We’re Afraid Language Models Aren’t Modeling Ambiguity, Liu+ (w/ Noah A. Smith), University of Washington, arXiv'23 #570

AkihikoWatanabe opened this issue Apr 28, 2023 · 1 comment
@AkihikoWatanabe
https://arxiv.org/abs/2304.14399

@AkihikoWatanabe

The first study to evaluate how well LLMs can recognize ambiguity.
It uses a benchmark of 1,645 samples annotated by linguists, covering diverse kinds of ambiguity.
GPT-4 answered 32% of them correctly.
A model fine-tuned on NLI data achieved a macro F1 of 72.5%.
As an application, the authors suggest flagging potentially misleading political claims.
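The summary above reports macro F1 for the fine-tuned model. As a reminder of what that metric measures, here is a minimal sketch of computing it by hand; the three NLI-style labels and the toy predictions are illustrative, not the paper's actual setup:

```python
def macro_f1(y_true, y_pred):
    """Macro F1: compute F1 per class, then average with equal class weight."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        f1s.append(f1)
    return sum(f1s) / len(f1s)

# Illustrative NLI-style labels (toy data, not from the paper)
gold = ["entailment", "neutral", "contradiction", "neutral"]
pred = ["entailment", "neutral", "neutral", "contradiction"]
print(macro_f1(gold, pred))  # 0.5
```

Unlike plain accuracy, macro F1 weights each class equally, so a model cannot score well by only handling the majority class.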
