Refactored prompt.py to reduce token usage #1996
Conversation
Can you tell us which version you were on when the agent got stuck in a loop? A fix for that kind of issue was merged to main in the last few days, although I suspect it may still miss commands because of different command ids. Do you by any chance have logs from that issue?
IMHO some of these edits are a great idea, but I hope @xingyaoww can take a look when we consider this kind of change.
I encountered it on the opendevin:main docker image.
I prefer not to change the prompts unless we can run experiments showing it still works.
I agree with @yufansong, this seems like a great change if it doesn't reduce our accuracy, but I am worried that it might. Maybe we could run SWE-bench lite with this.
CC @xingyaoww
Most likely a lot of these extra tokens are required for quality--the LLM often needs some reminding.
But there are also a few typo fixes etc. here, so we might want to take some of them.
Yep! I agree with @yufansong @neubig @rbren -- these typo changes are actually good. How about we wait until #1941 is merged (since it also changes a bunch of prompts and bumps the version to v1.5)? Then we merge all the typo fixes here -- so we can do just ONE pass of SWE-bench lite eval to make sure we are not degrading performance.
Is there anything else that I can do, or do I just wait for #1941 to be merged?
@temotskipa I think that PR is merged :) feel free to re-adjust the prompt!
I mean, I personally think this prompt is fine, but IDK if the maintainers also think so.
@temotskipa Can you resolve those merge conflicts so we can review again and try to merge it?
@temotskipa are you interested in pushing this one forward? @xingyaoww do you have specific changes you'd like to see?
I suggest holding any prompt changes until we finish all the benchmark evaluations.
I am mainly looking for changes that fix the conflicts - once those are fixed, I'm happy with merging this PR (after 2 days, when we finish running all the evals for the paper, as yufan suggested).
So am I supposed to revert the changes and just fix any typos and things like that? It's unclear to me what is conflicting here. |
@temotskipa Sorry for getting back late! We were running for a huge deadline :( |
* Add files via upload
* Update README.md
* Update run_infer.py
* Update utils.py
* make lint
* Update evaluation/toolqa/run_infer.py

Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
Co-authored-by: yufansong <yufan@risingwave-labs.com>
Co-authored-by: Boxuan Li <liboxuan@connect.hku.hk>

* revert change in file action
* remove useless code
* make lint

* feat: add gpqa benchmark evaluation
* add metrics
* reset configs in final block
* make lint

Co-authored-by: yufansong <yufan@risingwave-labs.com>

…nDevin#2329)
* remove bottom chatbox fade
* Modal wider; fix lint error
* settings: attempt to not clear api key for same provider
* prevent api key from resetting after changing the model
* revert other changes and fix post test tear down error

Co-authored-by: amanape <83104063+amanape@users.noreply.github.com>

…uck OpenDevin#1895] (OpenDevin#2034)
* fix: codeact bug OpenDevin#1895
* fix: add CmdRunAction timeout hint.
* Update agenthub/codeact_agent/prompt.py
* regenerate integration test

Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
Co-authored-by: Graham Neubig <neubig@gmail.com>
Co-authored-by: yufansong <yufan@risingwave-labs.com>

* removed unused files from gorilla
* Update run_infer.py, removed unused imports
* Update utils.py
* Update ast_eval_hf.py
* Update ast_eval_tf.py
* Update ast_eval_th.py
* Create README.md
* Update run_infer.py
* make lint
* Update run_infer.py
* fix lint

Co-authored-by: yufansong <yufan@risingwave-labs.com>
I have a PR #2326 which also tweaks the prompt a bit. Follow-ups could be included in that PR.
Refactored prompt.py to reduce token usage. Also included a band-aid fix for an issue Devin encountered while editing prompt.py: the example commands included in prompt.py were getting executed in the terminal, which resulted in the agent getting stuck in a loop trying to edit the file unsuccessfully. This issue should get a proper fix in a timely manner, imo.
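As a side note, the kind of before/after check the reviewers are asking for can be roughly approximated locally before running a full SWE-bench eval. A minimal sketch, assuming a crude characters-per-token heuristic rather than the model's real tokenizer; the prompt strings and the `approx_tokens` helper below are illustrative assumptions, not code from this repo:

```python
# Hypothetical sanity check for a prompt-trimming change.
# The ~4 characters per token rule of thumb is a rough approximation
# for English text; a real eval would use the model's own tokenizer.

def approx_tokens(text: str) -> int:
    # Crude estimate: one token per ~4 characters, at least 1.
    return max(1, len(text) // 4)

# Illustrative stand-ins for the before/after versions of a prompt.
ORIGINAL_PROMPT = (
    "You are a helpful agent. For example, you can run commands like:\n"
    "ls -la\n"
    "cat file.txt\n"
    "Remember to always explain your reasoning in detail before acting.\n"
)
TRIMMED_PROMPT = (
    "You are a helpful agent.\n"
    "Explain your reasoning before acting.\n"
)

before = approx_tokens(ORIGINAL_PROMPT)
after = approx_tokens(TRIMMED_PROMPT)
print(f"approx tokens: {before} -> {after} ({before - after} saved)")
```

A check like this only shows the token count went down; it says nothing about accuracy, which is why the maintainers still want a SWE-bench lite run before merging.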