We can find P-Tuning v2 in peft/README.md (line 29 at commit 8af8dbd):

> 2. Prefix Tuning: [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://aclanthology.org/2021.acl-long.353/), [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.org/pdf/2110.07602.pdf)

But how can I switch to P-Tuning v2?
Hello, those are implemented together. P-Tuning v2 introduced an optional reparameterization of the prompt tokens, which you can control via the `prefix_projection` argument of `PrefixTuningConfig`. Its other contribution was the ability to work without verbalizers, using a linear classification head for NLU tasks, something the Prefix-Tuning paper did not cover since it focused on NLG.
So they are both supported via the same `PrefixEncoder` PEFT method.
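To make the switch concrete, here is a minimal sketch of toggling `prefix_projection` on a `PrefixTuningConfig`. The base model, virtual-token count, and hidden size below are illustrative choices, not values from this thread:

```python
from transformers import AutoModelForSequenceClassification
from peft import PrefixTuningConfig, TaskType, get_peft_model

# Illustrative base model for an NLU (sequence classification) task.
base_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Both papers share PEFT's PrefixTuningConfig / PrefixEncoder. Per the reply
# above, `prefix_projection` toggles the MLP reparameterization of the prefix
# tokens, which P-Tuning v2 treats as optional; flip this flag to move
# between the two setups.
config = PrefixTuningConfig(
    task_type=TaskType.SEQ_CLS,   # NLU task with a linear classification head
    num_virtual_tokens=20,        # number of prefix tokens (illustrative)
    prefix_projection=False,      # False: optimize the prefix embeddings directly
    # prefix_projection=True,     # True: reparameterize the prefix through an MLP
    # encoder_hidden_size=512,    # hidden size of that MLP, used when projecting
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```

Training then proceeds as with any other PEFT model; only the prefix parameters (and the projection MLP, if enabled) are trainable.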