Extract openai API calls and retry at lowest level #3696
Conversation
This PR exceeds the recommended size of 200 lines. Please make sure you are NOT addressing multiple issues with one PR. Note this PR might be rejected due to its size.
Codecov Report
Patch coverage:
Additional details and impacted files
@@ Coverage Diff @@
## master #3696 +/- ##
==========================================
+ Coverage 70.74% 71.13% +0.38%
==========================================
Files 72 72
Lines 3504 3509 +5
Branches 556 556
==========================================
+ Hits 2479 2496 +17
+ Misses 856 843 -13
- Partials 169 170 +1
☔ View full report in Codecov by Sentry.
This is a mass message from the AutoGPT core team. For more details (and for info on joining our Discord), please refer to:
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
James, @collijk - I know you've got your head down. Is this worth trying to put in 0.3.2? Is anyone else available to resolve conflicts?
Seems to me, this could still be valuable even after the re-arch. Marking for maintainer-review |
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
This PR exceeds the recommended size of 500 lines. Please make sure you are NOT addressing multiple issues with one PR. |
You changed AutoGPT's behaviour. The cassettes have been updated and will be merged to the submodule when this Pull Request gets merged. |
Extract openai API calls and retry at lowest level (Significant-Gravitas#3696)

* Extract open ai api calls and retry at lowest level
* Forgot a test
* Gotta fix my local docker config so I can let pre-commit hooks run, ugh
* fix: merge artifact
* Fix linting
* Update memory.vector.utils
* feat: make sure resp exists
* fix: raise error message if created
* feat: rename file
* fix: partial test fix
* fix: update comments
* fix: linting
* fix: remove broken test
* fix: require a model to exist
* fix: BaseError issue
* fix: runtime error
* Fix mock response in test_make_agent
* add 429 as errors to retry

Co-authored-by: k-boikov <64261260+k-boikov@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Luke K (pr-0f3t) <2609441+lc0rp@users.noreply.github.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>

Follow-up commits:

* Add 16k gpt model
* Refactor module layout of command classes
* Remove nonessential commands
* Basic working version of functions support
* Make required args required
* Rewrite write_file to work with new arguments
* Replace gpt-3.5-turbo by gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613 everywhere; models that are not 0613 are gone
* Refactor message history and cycling
* Ask users to update their models
* Fix issues with token counting
* Fix issues with history trimming
* Remove self feedback and rename user_input to triggering_prompt when necessary
* Fix json parsing issues
* Document the steps of the interaction loop
* Fix issues with parsing reply
* Refactor MessageCycle
* Up word limit in prompt
* Change command prompt constraint to reference functions
* Fix {} arguments error
* Fix issues with command arguments
* Fix issues with function arguments attached to assistant responses
* Evolve prompts to improve AI understanding of function calls
* Fix issues with command result serialization
* Fix read_file path bug
* Fix a bunch of tests
* Abandon function memory role for now
* Fix history and summarization issues and its tests
* Change task_complete so the AI doesn't call it accidentally
* Remove agent manager as it is unused and now broken
* Fix token encoding model edge cases
* Fix openai provider tests
* Update setup test
* Cleanup imports
* Trigger CI
* Fix LogCycle and stringify commands
* Prompt change
* Reduce diff size due to moving functions
* Include missing test
Background
Fourth PR in the chain to organize the LLM interaction. See:
In a previous PR, I extracted a retry decorator from the embedding calls. This PR uses the same retry decorator on an extracted call to create chat completions from the OpenAI API.
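The real decorator lives in `autogpt.llm.providers.openai`; as a rough sketch of the pattern being reused here (the names, backoff parameters, and stand-in error class below are illustrative, not the project's exact implementation):

```python
import functools
import time


class RateLimitError(Exception):
    """Stand-in for the OpenAI rate-limit (HTTP 429) error class."""


def retry_api(max_retries: int = 10, backoff_base: float = 2.0):
    """Retry a wrapped API call on rate-limit errors with exponential backoff."""
    def decorator(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            for attempt in range(1, max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except RateLimitError:
                    # Re-raise once the retry budget is exhausted.
                    if attempt == max_retries:
                        raise
                    time.sleep(backoff_base ** attempt)
        return wrapped
    return decorator
```

With the decorator at the lowest level, every caller of the chat-completion and embedding helpers gets consistent retry behaviour for free instead of each call site handling 429s itself.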
Changes
- `autogpt.llm.llm_utils.retry_openai_api` moved to `autogpt.llm.providers.openai.retry_api`.
- OpenAI API calls extracted to `autogpt.llm.providers.openai` and appropriately decorated to consistently handle API failure cases.
- `autogpt.llm.llm_utils.create_chat_completion` refactored to use the low-level wrapped API calls and update the budget manually.
- `tests.unit.test_llm_utils` renamed to `tests.unit.test_openai`.
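The layering this introduces can be sketched roughly as follows: the lowest-level call carries the retry decorator (in the real code), and the high-level helper only adds bookkeeping such as budget tracking. All names and the stubbed response shape here are illustrative, not the project's actual API:

```python
def create_chat_completion_raw(messages, model):
    # In the real provider module this calls the OpenAI SDK and is wrapped
    # with the retry decorator; stubbed here so the sketch is self-contained.
    return {
        "choices": [{"message": {"content": "hello"}}],
        "usage": {"prompt_tokens": 3, "completion_tokens": 1},
    }


class ApiBudget:
    """Hypothetical budget tracker updated by the high-level helper."""

    def __init__(self):
        self.total_tokens = 0

    def update(self, usage):
        self.total_tokens += usage["prompt_tokens"] + usage["completion_tokens"]


BUDGET = ApiBudget()


def create_chat_completion(messages, model="gpt-3.5-turbo"):
    """High-level helper: delegates failure handling to the wrapped
    low-level call and updates the budget manually."""
    resp = create_chat_completion_raw(messages, model)
    BUDGET.update(resp["usage"])
    return resp["choices"][0]["message"]["content"]
```

Keeping retries out of `create_chat_completion` means the budget is only updated for calls that actually succeeded, which is the point of retrying at the lowest level.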
Documentation
N/A, this is primarily code motion
Test Plan
N/A; this is primarily code motion, and the code in question has unit tests.
PR Quality Checklist