Two of the most successful prompting techniques are Tree of Thought (ToT) and Chain of Thought (CoT).
This paper also suggests some other, easily implementable prompting techniques.
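As a rough sketch of what the two techniques mean in prompt terms (the helper names are made up for illustration, not any library's API): CoT just asks the model to reason step by step, while ToT fans the same task out into several independent reasoning branches whose answers can then be compared.

```go
package main

import "fmt"

// chainOfThought wraps a task in a simple zero-shot CoT instruction.
func chainOfThought(task string) string {
	return task + "\nLet's think step by step."
}

// treeOfThought sketches the branching idea: build k prompts for the same
// task, each asking for a distinct reasoning path, so the caller can run
// them separately and keep the best result.
func treeOfThought(task string, k int) []string {
	prompts := make([]string, k)
	for i := range prompts {
		prompts[i] = fmt.Sprintf(
			"Approach %d of %d:\n%s\nExplore one distinct reasoning path, then state your answer.",
			i+1, k, task)
	}
	return prompts
}

func main() {
	fmt.Println(chainOfThought("Does this function handle nil input?"))
	fmt.Println(len(treeOfThought("Does this function handle nil input?", 3)))
}
```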
This is effectively "run N generators in parallel" / "retry from scratch" plus "retry with feedback from what go-compile and go-test print".
You could implement this with a test-execution-context variable and some modified error handling.
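A minimal sketch of that retry loop, assuming hypothetical `generate` and `compileAndTest` stand-ins for the model call and for running the compiler/tests (both are placeholders, not real APIs):

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// generate is a hypothetical stand-in for a model call that returns a
// candidate Go source for the given prompt.
func generate(prompt string) string {
	return "package main\n// generated for: " + prompt
}

// compileAndTest is a hypothetical stand-in for running the compiler and
// tests; it returns their output as feedback when the candidate fails.
func compileAndTest(src string) (ok bool, feedback string) {
	if strings.Contains(src, "// FIXME") {
		return false, "compiler/test output would go here"
	}
	return true, ""
}

// bestOfN runs n generators in parallel ("retry from scratch") and, for
// each failing candidate, retries once with the compile/test output
// appended to the prompt ("retry with feedback").
func bestOfN(prompt string, n int) (string, bool) {
	results := make(chan string, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			src := generate(prompt)
			ok, feedback := compileAndTest(src)
			if !ok {
				// second attempt, feeding the tool output back in
				src = generate(prompt + "\nPrevious attempt failed with:\n" + feedback)
				ok, _ = compileAndTest(src)
			}
			if ok {
				results <- src
			}
		}()
	}
	go func() { wg.Wait(); close(results) }()
	src, ok := <-results // zero values if every candidate failed
	return src, ok
}

func main() {
	src, ok := bestOfN("write a hello-world package", 3)
	fmt.Println(ok, len(src) > 0)
}
```

The buffered channel means the first passing candidate wins and the rest are discarded, which is the cheap way to get best-of-N without ranking.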
This is probably most relevant for rather weak models, but it also enables comparisons with cost in mind: gpt4-32k costs 30€/MegaInToken and 60€/MegaOutToken, while gpt3.5 costs only 0.5€/MegaToken, so you can effectively run 20-shot gpt3.5 instead of single-shot gpt4.
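To make the arithmetic concrete (using the prices quoted above as illustrative, not current, pricing, and an assumed request size of 2k input / 1k output tokens):

```go
package main

import "fmt"

func main() {
	// € per million tokens, from the comment above (illustrative only).
	const gpt4In, gpt4Out = 30.0, 60.0
	const gpt35 = 0.5

	// Assumed example request: 2k input tokens, 1k output tokens.
	in, out := 2000.0/1e6, 1000.0/1e6

	gpt4Single := in*gpt4In + out*gpt4Out
	gpt35TwentyShot := 20 * (in + out) * gpt35

	fmt.Printf("gpt4 single shot: %.4f€\n", gpt4Single)   // 0.1200€
	fmt.Printf("gpt3.5 20-shot:   %.4f€\n", gpt35TwentyShot) // 0.0300€
}
```

At these rates the 20-shot gpt3.5 run still comes in at a quarter of a single gpt4-32k call.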