Thank you for the excellent work!

I have a question regarding the generation of outputs in the `data/labeled` files. Specifically, I'm curious about the parameters and prompts you used during this process. I've noticed that my generated text (e.g., from ChatGPT) is much longer than the content in your files. Could you share the settings you used, such as temperature, max_tokens, and the prompts, when generating the biographies? Your assistance in this matter would be greatly appreciated.
Thank you in advance!
Hi @caiqizh, thank you for your interest in our work.
Here is the prompt we used for ChatGPT:
Here are the two hyperparameters:

- `temp=0.7` for both ChatGPT and InstructGPT
- `max_tokens=512` for InstructGPT and `max_tokens=1024` for ChatGPT
Using different `max_tokens` values should not affect the generations unless a generation exceeds `max_tokens`, which never happened in our case. Given this, I think it is possible that you are seeing much longer responses due to internal changes in ChatGPT (if it's not due to a difference in the prompt).
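For concreteness, here is a minimal sketch of how these settings map onto the pre-1.0 `openai` Python SDK, which was current at the time. The model names (`gpt-3.5-turbo` for ChatGPT and `text-davinci-003` for InstructGPT) are assumed here, not stated in this thread:

```python
import openai

openai.api_key = "YOUR_API_KEY"

prompt = "..."  # the biography prompt referenced above

# ChatGPT: temp=0.7, max_tokens=1024 (model name is an assumption)
chat_response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
    max_tokens=1024,
)
print(chat_response["choices"][0]["message"]["content"])

# InstructGPT: temp=0.7, max_tokens=512 (model name is an assumption)
instruct_response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0.7,
    max_tokens=512,
)
print(instruct_response["choices"][0]["text"])
```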
Let me know if you have any further questions. Thanks.