
About optionally diversifying prompt-selection #9

Closed
Dicer-Zz opened this issue Jun 1, 2022 · 2 comments

Dicer-Zz commented Jun 1, 2022

Thanks for the great idea and the results!

As the title says, I'd like to know how to use the optionally diversifying prompt selection. I don't see where the arguments for this method are used, nor do I see an implementation of it in ./models/prompt.py.

I would also like to ask how the frequency of each prompt is normalized into a penalty factor; I don't see a specific description of this in the paper.
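
For context, here is a rough sketch of what I imagine the frequency penalty would look like, based on my reading of the paper (at training time, scale the query-key distance by each prompt's normalized selection frequency). All the names here are mine and hypothetical, not from this repo:

```python
import jax.numpy as jnp

def select_prompts(query, prompt_keys, prompt_counts, top_k, train=True):
    """Pick top_k prompts, penalizing frequently selected ones at training time.

    query:         (D,) query feature from the frozen encoder
    prompt_keys:   (M, D) learnable keys of the prompt pool
    prompt_counts: (M,) how often each prompt has been selected so far
    """
    # Cosine distance between the query and each prompt key (smaller = better match).
    q = query / (jnp.linalg.norm(query) + 1e-8)
    k = prompt_keys / (jnp.linalg.norm(prompt_keys, axis=1, keepdims=True) + 1e-8)
    dist = 1.0 - k @ q                                   # shape (M,)

    if train:
        # Normalize selection counts into frequencies and scale the distance by them,
        # so heavily used prompts look "farther away" and unused ones are preferred.
        freq = prompt_counts / jnp.maximum(prompt_counts.sum(), 1.0)
        dist = dist * freq

    idx = jnp.argsort(dist)[:top_k]                      # selected prompt indices
    new_counts = prompt_counts.at[idx].add(1.0)          # update the frequency table
    return idx, new_counts
```

Is this roughly the intended behavior?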

@KingSpencer
Collaborator

Hi, thanks for your question. In this repo we actually use a simpler yet equally effective version. The optional argument that controls the diversified prompt selection is config.use_prompt_mask. If it is set to True, only disjoint sets of prompts are trained for each task at training time, so we effectively force the model to use different prompts.
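
Conceptually it works roughly like this (a minimal sketch of the idea; the actual code in this repo may differ in the details):

```python
import jax.numpy as jnp

def prompt_indices_for_task(task_id, top_k, pool_size):
    """Contiguous slice of the prompt pool reserved for this task at training time."""
    start = task_id * top_k
    end = start + top_k
    if end > pool_size:
        # Not enough prompts left to give this task its own disjoint set;
        # a real implementation would have to fall back to the usual top-k selection.
        return None
    return jnp.arange(start, end)

# e.g. pool_size=10, top_k=5: task 0 -> [0..4], task 1 -> [5..9], task 2 -> None
```

So instead of the frequency penalty from the paper, each task simply trains its own slice of the pool.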

@GengDavid

@KingSpencer
I have the same question as @Dicer-Zz. You mentioned the option use_prompt_mask to use "disjoint sets of prompts" for each task.
However, since there are only 10 prompts in the pool, top_k * num_tasks is larger than the number of prompts, so I don't think this option can have the same effect (i.e., diversifying the selection) as described in the paper.
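
For example, with what I assume are the default values (pool size 10, top_k = 5, and 10 tasks on Split CIFAR-100):

```python
pool_size, top_k, num_tasks = 10, 5, 10   # assumed defaults, not confirmed from the config
# 5 * 10 = 50 > 10, so the per-task prompt sets cannot all be disjoint.
print(top_k * num_tasks > pool_size)      # True
```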
