
control code is not used in PrefixTuning.get_prompt() #41

Open
XinyuGuan01 opened this issue Jun 17, 2022 · 3 comments

@XinyuGuan01

Hi, thanks for sharing the code.

I have tried the webnlg and data2text tasks on the 'cleaned' branch, but I found that the "control_code" argument is not used in any of the implementations of PrefixTuning.get_prompt(). Does this mean that different categories of the WebNLG dataset use the same soft prompt? I noticed that the master branch has get_prompt_p3, get_prompt_p1, and get_prompt_p4. Can I use them to reproduce the results of the paper?
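To make the question concrete, here is a minimal, hypothetical sketch (not the repository's actual code; class and argument names are illustrative) contrasting the behaviour described above, where `control_code` is accepted but ignored, with one possible way a control code could select a per-category prefix:

```python
class SharedPrefix:
    """Mirrors the reported behaviour: control_code is accepted but ignored."""

    def __init__(self, prefix):
        self.prefix = prefix  # one soft prompt shared by all categories

    def get_prompt(self, control_code=None, bsz=1):
        # control_code is unused, so every category gets the same prompt
        return [self.prefix] * bsz


class PerCategoryPrefix:
    """One hypothetical way the category information could be used instead."""

    def __init__(self, prefixes, default):
        self.prefixes = prefixes  # e.g. {"Airport": [...], "Food": [...]}
        self.default = default    # fallback for unseen categories

    def get_prompt(self, control_code=None, bsz=1):
        # Look up a category-specific prefix, falling back to a default
        prefix = self.prefixes.get(control_code, self.default)
        return [prefix] * bsz
```

In the shared variant, `get_prompt("Airport")` and `get_prompt("Food")` return identical prompts; in the per-category variant they differ whenever a category-specific prefix exists.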

Thanks.

@XiangLi1999
Owner

Thanks for the question. You are right! Different categories of WebNLG use the same soft prompt.

@XiangLi1999
Owner

I think the current branch has all the setups needed to reproduce the results, but feel free to check out the old branch if you are interested. I think those get_prompt_p1... variants were preliminary experiments.

@XinyuGuan01
Author

> Thanks for the question. You are right! Different categories of WebNLG use the same soft prompt.

Thank you for your answer. Does this mean that the category information is not used in the proposed method?
