
GPT3.5 ToT Performance is a lot lower #24

Closed
IsThatYou opened this issue Jun 9, 2023 · 5 comments

@IsThatYou

Hi! I tried using GPT-3.5-turbo for the experiments on Game of 24 and got results close to the paper's, except for ToT. For both standard prompting and CoT, my numbers roughly match what's reported (IO: 36%, CoT: 42%). But for ToT, without changing the script, I can only get 4% as opposed to 45% in the paper. Have you seen similar behavior from GPT-3.5? What might be causing this?
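For reference, the run I'm describing is essentially the game24 command from the repo README with only the backend swapped; the flag names are assumed from `run.py` at the time, so double-check with `python run.py --help` if they have changed:

```python
import subprocess

# Reproduction sketch: the game24 ToT run from the repo README, with the
# backend swapped from the default gpt-4 to gpt-3.5-turbo. Flag names are
# assumed from run.py at the time; verify with `python run.py --help`.
subprocess.run(
    [
        "python", "run.py",
        "--task", "game24",
        "--backend", "gpt-3.5-turbo",  # the only change vs. the paper's setup
        "--task_start_index", "900",
        "--task_end_index", "1000",
        "--method_generate", "propose",
        "--method_evaluate", "value",
        "--method_select", "greedy",
        "--n_evaluate_sample", "3",
        "--n_select_sample", "5",
    ],
    check=True,
)
```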

A quick glance over what's generated suggests that GPT-3.5 is not as good at following the expected output format, but the huge discrepancy is still interesting. A rough way to quantify the format issue is sketched below.
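The expected step format (e.g. `4 + 9 = 13 (left: 10 13 13)`) is inferred from the repo's propose prompt, so treat the regex as an assumption:

```python
import re

# Expected shape of one Game of 24 "propose" step, inferred from the
# repo's propose prompt (an assumption; adjust if the prompt differs):
#   "4 + 9 = 13 (left: 10 13 13)"
STEP_RE = re.compile(r"^\d+\s*[-+*/]\s*\d+\s*=\s*\d+\s*\(left:(?:\s+\d+)+\s*\)$")

def format_error_rate(proposal_lines):
    """Fraction of non-empty proposal lines that break the expected format."""
    lines = [l.strip() for l in proposal_lines if l.strip()]
    if not lines:
        return 1.0
    bad = sum(1 for l in lines if not STEP_RE.match(l))
    return bad / len(lines)

# The second line drops "(left: ...)", which breaks the downstream
# value-evaluation step even though the arithmetic itself is fine.
sample = ["4 + 9 = 13 (left: 10 13 13)", "10 + 13 = 23"]
print(format_error_rate(sample))  # 0.5
```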

Thanks!

@GithungDang

Does this mean that using open-source models like Vicuna will give even worse results?

@GithungDang

Actually, I want to use ToT to improve the reasoning ability of open-source models so that they can approach the reasoning level of GPT-3.5, rather than just imitating its surface-level dialogue style.

@ysymyth (Member) commented Jun 26, 2023

Hi @IsThatYou, this is a great point. I tried GPT-3.5 and it indeed performs badly on Game of 24. Note, though, that IO: 36% and CoT: 42% are pass@100 numbers.
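To spell out what pass@100 means here (a sketch, not the repo's exact evaluation code): a puzzle counts as solved if any of 100 independent samples solves it, which is why those IO/CoT figures look much higher than a single-run number.

```python
def pass_at_k(per_puzzle_samples, k=100):
    """Success rate when each puzzle gets k independent tries.

    per_puzzle_samples: one list of booleans per puzzle, True where that
    sampled output solved it. A puzzle counts as solved if ANY of its
    first k samples is correct.
    """
    solved = sum(1 for samples in per_puzzle_samples if any(samples[:k]))
    return solved / len(per_puzzle_samples)

# Toy example: 4 puzzles x 100 samples with a 1% per-sample solve rate
# can still yield a decent pass@100, which is why single-shot numbers
# look much lower than the IO/CoT figures quoted above.
import random
random.seed(0)
runs = [[random.random() < 0.01 for _ in range(100)] for _ in range(4)]
print(pass_at_k(runs, k=100))
```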

We also tried ToT with GPT-3.5-turbo instead of GPT-4 on Creative Writing (scoring is still done by GPT-4). All methods perform worse, but ToT is still significantly better than the other methods.

| Creative Writing | GPT-4 (in paper) | GPT-3.5-turbo |
| ---------------- | ---------------- | ------------- |
| IO               | 6.19             | 4.47          |
| CoT              | 6.93             | 5.16          |
| ToT              | 7.56             | 6.62          |
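For reference, the GPT-4 judging loop is essentially the following sketch; the exact judge prompt and the (pre-1.0) openai SDK call shape are assumptions, not copied from the repo:

```python
import re
import openai  # pre-1.0 SDK, current when this thread was written

# Hypothetical judge prompt: the paper scores each passage's coherency on
# a 1-10 scale with GPT-4 and averages several samples, but this exact
# wording is an assumption.
JUDGE_PROMPT = (
    "Analyze the following passage, then at the last line conclude "
    '"the coherency score is s", where s is an integer from 1 to 10.\n\n{passage}'
)

def coherency_score(passage, n=5):
    """Average of n GPT-4 coherency judgments for one passage."""
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(passage=passage)}],
        n=n,  # draw several judgments and average to reduce variance
    )
    scores = []
    for choice in resp["choices"]:
        m = re.search(r"coherency score is (\d+)", choice["message"]["content"])
        if m:
            scores.append(int(m.group(1)))
    return sum(scores) / len(scores) if scores else None
```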

In general, I believe proposing and evaluating diverse thoughts is an "emergent capability" that is hard even for GPT-4, and significantly harder for smaller/weaker models. It would be important and interesting to study how to make smaller models better at ToT-style reasoning!

ysymyth closed this as completed Jun 26, 2023
@IsThatYou (Author)

Hi @ysymyth, thank you for the response! I looked closely at and compared some of the generations from gpt-3.5 and gpt-4, and found gpt-4 to be better at task understanding in general; gpt-3.5 degenerates more often. Anyway, this is pretty interesting. It would definitely be worthwhile to figure out how to make smaller models better at these tasks. :D

@ysymyth (Member) commented Jun 26, 2023

Yes, I agree, and perhaps better prompt engineering could help with the issue.
