
The Power of Scale for Parameter-Efficient Prompt Tuning, Lester+, Google Research, EMNLP‘21 #473

Open
AkihikoWatanabe opened this issue Aug 19, 2022 · 1 comment

Comments

@AkihikoWatanabe

https://arxiv.org/abs/2104.08691

@AkihikoWatanabe

AkihikoWatanabe commented Aug 19, 2022

Japanese explanation: https://qiita.com/kts_plea/items/79ffbef685d362a7b6ce

A method for adapting a large language model such as T5 in which the pretrained model's parameters are frozen and only a separate, small set of parameters that embed the prompt (soft prompt embeddings) is trained.

As the number of language model parameters grows, prompt tuning reaches performance on par with fine-tuning the entire model (Model Tuning).
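Below is a minimal PyTorch sketch of this idea (not the authors' released code): the pretrained model's weights are frozen, and only a small matrix of soft-prompt embeddings prepended to the input token embeddings receives gradient updates. The model name (`t5-small`), prompt length, random initialization, and learning rate are illustrative assumptions.

```python
# Minimal prompt-tuning sketch: freeze a pretrained T5 checkpoint and train
# only N "soft prompt" vectors that are prepended to the input embeddings.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = T5Tokenizer.from_pretrained("t5-small")

# Freeze every parameter of the pretrained language model.
for p in model.parameters():
    p.requires_grad = False

# The only trainable parameters: num_prompt_tokens vectors in the embedding space.
# (The paper initializes these from vocabulary embeddings; random init is a simplification.)
num_prompt_tokens = 20
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.5)

optimizer = torch.optim.Adam([soft_prompt], lr=0.3)  # a comparatively large LR, as only the prompt is trained

def step(input_text: str, target_text: str) -> float:
    enc = tokenizer(input_text, return_tensors="pt")
    labels = tokenizer(target_text, return_tensors="pt").input_ids

    # Look up the frozen token embeddings, then prepend the trainable prompt.
    token_embeds = model.get_input_embeddings()(enc.input_ids)   # (1, L, d)
    prompt = soft_prompt.unsqueeze(0)                            # (1, N, d)
    inputs_embeds = torch.cat([prompt, token_embeds], dim=1)     # (1, N + L, d)
    attention_mask = torch.cat(
        [torch.ones(1, num_prompt_tokens, dtype=enc.attention_mask.dtype),
         enc.attention_mask],
        dim=1,
    )

    # Gradients flow through the frozen model into soft_prompt only.
    loss = model(inputs_embeds=inputs_embeds,
                 attention_mask=attention_mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Because only `num_prompt_tokens * embed_dim` parameters are trained, each downstream task needs just a tiny prompt matrix rather than a full copy of the model.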
