AutoGPT: use config and LLM provider from core
#5286
Conversation
* Make `.schema` model names less pedantic
* Rename `LanguageModel*` objects to `ChatModel*` or `CompletionModel*` where appropriate
* Add `JSONSchema` utility class in `core.utils`
* Use `JSONSchema` instead of untyped dicts for `Ability` and `CompletionModelFunction` parameter specification
* Add token counting methods to `ModelProvider` interface and implementations

…`core`:
* Removed `autogpt.llm.base` and `autogpt.llm.utils`
* `core` does things async, so `Agent.think()` and `Agent.execute()` are now also async
* Renamed `dump()` and `parse()` on `JSONSchema` to `to_dict()` and `from_dict()`
* Removed `MessageHistory`
* Also, some typos and linting fixes here and there
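A rough, self-contained sketch of what a `JSONSchema` utility with `to_dict()`/`from_dict()` methods might look like. Only the method names come from the commit message above; the fields and structure here are illustrative assumptions, not AutoGPT's actual implementation:

```python
# Hypothetical sketch of a JSONSchema-style utility class; only the
# to_dict()/from_dict() names are from the PR, the rest is assumed.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class JSONSchema:
    type: str = "object"
    description: str | None = None
    properties: dict[str, "JSONSchema"] = field(default_factory=dict)
    required: bool = False

    def to_dict(self) -> dict:
        """Render this typed schema into a plain JSON Schema dict."""
        schema: dict = {"type": self.type}
        if self.description:
            schema["description"] = self.description
        if self.properties:
            schema["properties"] = {
                name: prop.to_dict() for name, prop in self.properties.items()
            }
            schema["required"] = [
                name for name, prop in self.properties.items() if prop.required
            ]
        return schema

    @classmethod
    def from_dict(cls, d: dict) -> "JSONSchema":
        """Rebuild a typed schema object from a plain dict."""
        props = {
            name: cls.from_dict(sub) for name, sub in d.get("properties", {}).items()
        }
        for name in d.get("required", []):
            if name in props:
                props[name].required = True
        return cls(
            type=d.get("type", "object"),
            description=d.get("description"),
            properties=props,
        )
```

The point of such a class over untyped dicts is that schema construction errors surface as attribute/type errors at build time, and `to_dict()`/`from_dict()` round-trip cleanly.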
This PR exceeds the recommended size of 500 lines. Please make sure you are NOT addressing multiple issues with one PR.
✅ Deploy Preview for auto-gpt-docs ready!
Background

`autogpt.core`
Changes 🏗️

Restructured `autogpt.core`:
* Extracted the `core.prompting` module out of `core.planning`
* Improved `model_providers` typing and tooling
* Made `.schema` model names less pedantic
* Renamed `LanguageModel*` objects to `ChatModel*` or `CompletionModel*` where appropriate
* Added a `JSONSchema` utility class in `core.utils`
* Used `JSONSchema` instead of untyped dicts for `Ability` and `CompletionModelFunction` parameter specification
* Added token counting methods to the `ModelProvider` interface and implementations
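To illustrate the idea of typed parameter specification, here is a hypothetical sketch in the spirit of `CompletionModelFunction`. The `ParamSpec` helper and the `to_openai_dict()` method are invented for this example and are not AutoGPT's actual API:

```python
# Illustrative sketch: a typed function spec that renders down to the
# untyped JSON-Schema dict an LLM API ultimately consumes. Names are
# assumptions, not AutoGPT's real interface.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class ParamSpec:
    type: str
    description: str
    required: bool = True


@dataclass
class CompletionModelFunction:
    name: str
    description: str
    parameters: dict[str, ParamSpec]

    def to_openai_dict(self) -> dict:
        # Render the typed spec into the untyped dict shape at the boundary.
        return {
            "name": self.name,
            "description": self.description,
            "parameters": {
                "type": "object",
                "properties": {
                    n: {"type": p.type, "description": p.description}
                    for n, p in self.parameters.items()
                },
                "required": [n for n, p in self.parameters.items() if p.required],
            },
        }


fn = CompletionModelFunction(
    name="write_file",
    description="Write text to a file",
    parameters={
        "path": ParamSpec("string", "Target file path"),
        "contents": ParamSpec("string", "Text to write"),
    },
)
print(fn.to_openai_dict()["parameters"]["required"])  # ['path', 'contents']
```

Keeping the spec typed until the last moment means a misspelled field or missing description fails loudly in Python, rather than silently producing a malformed schema dict.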
In `autogpt`:
* Removed `MessageHistory`
* Replaced `autogpt.llm.*` with the LLM infrastructure of `autogpt.core`
* Removed `autogpt.llm.base` and `autogpt.llm.utils`
* `core` does things async, so `Agent.think()` and `Agent.execute()` are now also async
* `Config` is now backed by `.core.configuration`
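The async change can be pictured with a minimal stand-alone sketch using `asyncio`. The class and method bodies here are placeholder stand-ins, not AutoGPT's real agent:

```python
# Minimal sketch of an async think/execute cycle; bodies are placeholders
# standing in for real async LLM-provider and command-execution calls.
import asyncio


class Agent:
    async def think(self) -> str:
        # In the real agent this would await an async model-provider call.
        await asyncio.sleep(0)
        return "next_command"

    async def execute(self, command: str) -> str:
        # In the real agent this would await the command's execution.
        await asyncio.sleep(0)
        return f"executed {command}"


async def run_one_cycle(agent: Agent) -> str:
    command = await agent.think()
    return await agent.execute(command)


result = asyncio.run(run_one_cycle(Agent()))
print(result)  # executed next_command
```

Callers that previously invoked `Agent.think()` synchronously now need to `await` it (or drive it with `asyncio.run(...)` at the top level).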
PR Quality Scorecard ✨
* Have you used the PR description template? +2 pts
* Is your pull request atomic, focusing on a single change? +5 pts
  * Yes atomic, like a bomb (thanks @BillSchumacher)
* Have you linked the GitHub issue(s) that this PR addresses? +5 pts
* Have you documented your changes clearly and comprehensively? +5 pts
* Have you changed or added a feature? -4 pts
  * +4 pts
  * +5 pts
* Have you changed the behavior of Auto-GPT? -5 pts
  * `agbenchmark` to verify that these changes do not regress performance? +10 pts