Releases: Significant-Gravitas/AutoGPT

AutoGPT v0.5.1

26 Apr 20:15
26324f2

New Features ✨

  • New OpenAI Models 🤖

    • gpt-4-turbo is now supported and set as the default SMART_LLM model.
    • Default FAST_LLM changed from gpt-3.5-turbo-16k to gpt-3.5-turbo.
    • Default EMBEDDING_MODEL changed from text-embedding-ada-002 to text-embedding-3-small.
    • Added support for gpt-4-0125-preview and gpt-4-turbo models.
  • Agent Protocol Server Enhancements 🌐

    • Made the API server port configurable via the AP_SERVER_PORT environment variable (see the sketch after this feature list).
    • Added documentation for configuring the API port.
    • Implemented task cost tracking and logging in the AgentProtocolServer.
  • CLI Usability Improvements 💻

    • Added a check to ensure the specified API server port is available before running.
    • Improved handling of invalid or empty tasks provided by the user.
    • The CLI now displays at startup whether code execution is enabled.
  • Web Browsing Enhancements 🌐

    • Added browser extensions to handle cookie walls and ads when using Selenium.
    • Added extract_information function to extract pieces of information from webpage content based on a list of topics of interest. The read_webpage command now supports topics_of_interest and get_raw_content parameters to leverage this capability.
  • Telemetry & Error Tracking 📊

    • Integrated Sentry for telemetry and error tracking.
    • Added configuration flow and opt-in prompt for enabling telemetry.
    • Distinguish between production and dev environments based on VCS state.
    • Capture exceptions for LLM parsing errors and command failures.
  • File Storage Abstraction 📂

    • Fully abstracted file storage access with the FileStorage class.
    • Renamed FileWorkspace to FileStorage and updated associated classes and methods.
    • Updated AgentManager and AgentProtocolServer to use the new FileStorage system.
  • History Compression 📜

    • Implemented history compression to reduce token usage and increase longevity when using models with limited context windows.
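
As an aside on the configurable server port: the sketch below shows one way to read AP_SERVER_PORT and verify the port is free before binding, mirroring the pre-flight check the CLI now performs. Only the AP_SERVER_PORT variable comes from these notes; the 8000 fallback and the helper name are illustrative assumptions, not AutoGPT's actual implementation.

```python
import os
import socket


def port_is_available(port: int, host: str = "localhost") -> bool:
    """Return True if nothing is currently listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        return sock.connect_ex((host, port)) != 0


if __name__ == "__main__":
    # AP_SERVER_PORT comes from the release notes; 8000 is an assumed fallback.
    port = int(os.getenv("AP_SERVER_PORT", "8000"))
    if port_is_available(port):
        print(f"Port {port} is free; the Agent Protocol server can bind to it.")
    else:
        print(f"Port {port} is already in use; pick a different AP_SERVER_PORT.")
```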

Fixes 🔧

  • JSON Parsing Robustness

    • Implemented json_loads for more tolerant JSON parsing of LLM responses (see the sketch after this list of fixes).
    • Updated extract_dict_from_response to handle both json- and JSON-tagged code blocks in responses.
    • Fixed boolean value decoding issues in extract_dict_from_response.
  • Error Handling

    • Added error handling for loading non-existing agents.
    • Fixed handling of action_history related exceptions in CLI and Server modes.
    • Implemented a self-correction mechanism for invalid LLM responses by appending error messages to the prompt.
  • Artifact & File Handling

    • Fixed handling of artifact modifications by setting agent_created attribute instead of registering a new Artifact.
    • Fixed read_file command in GCS and S3 workspaces.
  • Dependency Updates & Security Fixes

    • Updated aiohttp and fastapi dependencies to mitigate vulnerabilities.
    • Fixed Content-Type Header ReDoS vulnerabilities in python-multipart.
  • TTY Mode Enhancements

    • Fixed finish command behavior in TTY mode.
    • Agent now properly raises AgentTerminated exception to exit the loop.
  • Miscellaneous Fixes

    • Fixed summarize_text and QueryLanguageModel abilities to handle OpenAI API changes.
    • Improved representation of optional command parameters in prompts.
    • Fixed GCS workspace binary file upload.
    • Moved auto-gpt-plugin-template to regular dependencies to fix missing module error.
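
To make the JSON-parsing fixes concrete, here is a minimal, illustrative sketch of pulling a dict out of an LLM reply that wraps its JSON in a fenced code block tagged json or JSON. The function name mirrors the one mentioned above, but this is not AutoGPT's actual implementation; the regex and fallback behaviour are assumptions.

```python
import json
import re

# Matches a triple-backtick fence, optionally tagged "json" (any case), capturing the {...} body.
FENCED_JSON = re.compile(r"`{3}(?:json)?\s*(\{.*?\})\s*`{3}", re.DOTALL | re.IGNORECASE)


def extract_dict_from_response(response: str) -> dict:
    """Pull a JSON object out of an LLM response, tolerating fenced code blocks."""
    match = FENCED_JSON.search(response)
    payload = match.group(1) if match else response.strip()
    return json.loads(payload)


if __name__ == "__main__":
    fence = "`" * 3  # build the fence here to keep literal backticks out of this example
    reply = f'Plan:\n{fence}JSON\n{{"command": "read_file", "args": {{"path": "notes.txt"}}}}\n{fence}'
    print(extract_dict_from_response(reply))
```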

Chores & Refactoring 🧹

  • Updated agbenchmark and autogpt-forge dependencies across the project.
  • Upgraded OpenAI library to v1 and refactored code to accommodate API changes.
  • Improved logging by capturing errors raised during action execution.
  • Cleaned up unused imports and fixed linting issues across the codebase.
  • Sped up test_gcs_file_workspace by changing the fixture scope.

Pull Requests

Note: most of the changes mentioned above were made through direct commits. See also the full changelog.

  • [Documentation Update] Updating Using and Creating Abilities to use Action Annotations by @himsmittal in #6653
  • AGBenchmark: Codebase clean-up by @Pwuts in #6650
  • fix: cast port to int by @orarbel in #6643
  • Add documentation on how to use Agent Protocol in .env.template by @ntindle in #6569
  • feat(benchmark): JungleGym WebArena by @Pwuts in #6691
  • fix No modules named 'forge.sdk.abilities' and 'forge.sdk' #6537 #5810 by @kfern in #6571
  • feat(agent/web): Add browser extensions to deal with cookie walls and ads by @Pwuts in #6778
  • fix(forge): no module named 'sdk' by @MKdir98 in #6822
  • Adding support to allow for sending a message with the enter key by @zedatrix in #6378
  • Fixed revising constraints and best practices by @ThunderDrag in #6777
  • Set subprocess.PIPE on stdin and stderr and just let pytest run the current file when running main() by @aorwall in #5868
  • Update execute_code.py by @ehtec in #6903
  • fix(agent/execute_code): Disable code execution commands when Docker is unavailable by @kcze in #6888
  • OPEN-133: Fix trying to load non-existing agent by @kcze in #6938
  • Fixing support for AzureOpenAI by @edwardsp in #6927
  • OPEN-165: Improvement - check for duplicate command by @kcze in #6937
  • chore: Change agbenchmark to directory dependency in autogpt and forge by @Pwuts in #6946
  • Create Security Policy by @joycebrum in #6900
  • feat(agent): Abstract file storage access by @kcze in #6931
  • fix(autogpt): Fix GCS and S3 root path issue by @kcze in #7010
  • feat(autogpt/forge): Send exception details in agent protocol endpoints by @kcze in #7005
  • fix(agent): Handle action_history-related exceptions gracefully by @kcze in #6990
  • fix(autogpt/cli): Loop until non-empty task is provided by the user by @kcze in #6995
  • docs: Redirect AutoGPT users from Forge tutorial with warning by @ntindle in #7014
  • docs: replace polyfill.io by @SukkaW in #6952
  • feat(autogpt/cli): Check if port is available before running server by @kcze in #6996
  • feat(autogpt/cli): Display info if code execution is enabled by @kcze in #6997
  • feat(agent): Implement more tolerant json_loads function by @kcze in #7016
  • ci(agent): Matrix CI tests across Linux, macOS and Windows by @Pwuts in #7029
  • feat(agent): Handle OpenAI API key exceptions gracefully by @kcze in #6992
  • test(agent): Fix VCRpy request filter for cross-platform use by @Pwuts in #7040
  • ci(agent): Add macOS ARM64 to AutoGPT Python CI matrix by @Pwuts in #7041
  • security(agent): Replace unsafe pyyaml loader with SafeLoader by @matheudev in #7035
  • fix(agent): Fix check when loading an existing agent by @kcze in #7026
  • fix(agent, forge): Conform web_search.py to duckduckgo_search v5 by @kcze in #7045
  • fix(agent, forge): Conform web_search.py to duckduckgo_search v5 by @kcze in #7046
  • fix(agent): Make save_state behave like save as by @kcze in #7025
  • refactor(agent): Refactor & improve create_chat_completion by @Pwuts in #7082

New Contributors


AutoGPT v0.5.0

14 Dec 15:17
af3ebdf

First some important notes w.r.t. using the application:

  • run.sh has been renamed to autogpt.sh
  • The project has been restructured. The AutoGPT Agent is now located in autogpts/autogpt.
  • The application no longer uses a single workspace for all tasks. Instead, every task that you run the agent on creates a new workspace folder. See the usage guide for more information.

New features ✨

  • Agent Protocol 🔌
    Our agent now works with the Agent Protocol, a REST API that allows creating tasks and executing the agent's step-by-step process. This allows integration with other applications, and we also use it to connect to the agent through the UI. (A minimal usage sketch follows this feature list.)
  • UI 💻
    With the aforementioned Agent Protocol integration comes the benefit of using our own open-source Agent UI. Easily create, use, and chat with multiple agents from one interface.
    When starting the application through the project's new CLI, it runs with the new frontend by default, with benchmarking capabilities. Running autogpt.sh serve in the subproject folder (autogpts/autogpt) will also serve the new frontend, but without benchmarking functionality.
    Running the application the "old-fashioned" way, with the terminal interface (let's call it TTY mode), is still possible with autogpt.sh run.
  • Resuming agents 🔄️
    In TTY mode, the application will now save the agent's state when quitting, and allows resuming where you left off at a later time!
  • GCS and S3 workspace backends 📦
    To further support running the application as part of a larger system, Google Cloud Storage and S3 workspace backends were added. Configuration options for this can be found in .env.template.
  • Documentation Rewrite 📖
    The documentation has been restructured and mostly rewritten to clarify and simplify the instructions, and also to accommodate the other subprojects that are now in the repo.
  • New Project CLI 🔧
    The project has a new CLI to provide easier usage of all of the components that are now in the repo: different agents, frontend and benchmark. More info can be found here.
  • Docker dev build 🐳
    In addition to the regular Docker release images (latest, v0.5.0 in this case), we now also publish a latest-dev image that always contains the latest working build from master. This allows you to try out the latest bleeding edge version, but be aware that these builds may contain bugs!
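
Since the Agent Protocol is now the primary programmatic interface, here is a minimal usage sketch against a locally running agent. The /ap/v1/agent endpoint paths and the task_id/output/is_last fields follow the public Agent Protocol spec as we understand it; the port, the requests dependency, and the example task are assumptions.

```python
import requests

# Assumes the agent is serving the Agent Protocol locally (port per your configuration).
BASE_URL = "http://localhost:8000/ap/v1/agent"

# Create a task for the agent to work on.
task = requests.post(f"{BASE_URL}/tasks", json={"input": "Write 'hello' to hello.txt"}).json()
task_id = task["task_id"]

# Execute steps until the agent marks one as the last.
while True:
    step = requests.post(f"{BASE_URL}/tasks/{task_id}/steps", json={}).json()
    print(step.get("output"))
    if step.get("is_last"):
        break
```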

Architecture changes & improvements 👷🏼

  • PromptStrategy
    To make it easier to harness the power of LLMs and use them to fulfil tasks within the application, we adopted the PromptStrategy class from autogpt.core (AKA re-arch) to encapsulate prompt generation and response parsing throughout the application.

  • Config modularization
    To reduce the complexity of the application's config structure, parts of the monolithic Config have been moved into smaller, tightly scoped config objects. Also, the logic for building the configuration from environment variables was decentralized to make it all a lot more maintainable.
    This is mostly made possible by the autogpt.core.configuration module, which was also expanded with a few new features; most notably, a new from_env attribute on the UserConfigurable field decorator and corresponding logic in SystemConfiguration.from_env() and related functions. (A toy sketch of the pattern follows this list.)

  • Monorepo
    As mentioned, the repo has been restructured to accommodate the AutoGPT Agent, Forge, AGBenchmark and the new Frontend.

    • AutoGPT Agent has been moved to autogpts/autogpt
    • Forge now lives in autogpts/forge, and the project's new CLI makes it easy to create new Forge-based agents.
    • AGBenchmark -> benchmark
    • Frontend -> frontend

    See also the README.
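
For readers curious what the from_env pattern looks like in practice, here is a toy sketch of the idea: declaring which environment variable backs each field and building the config from the environment. This is illustrative only, not the actual autogpt.core.configuration implementation; the class name and defaults are assumptions (SMART_LLM and FAST_LLM are real AutoGPT settings).

```python
import os
from dataclasses import dataclass, field, fields
from typing import Optional


def UserConfigurable(default, from_env: Optional[str] = None):
    """Toy stand-in for a user-configurable field that can be overridden by an env var."""
    return field(default=default, metadata={"from_env": from_env})


@dataclass
class AppConfig:
    smart_llm: str = UserConfigurable("gpt-4", from_env="SMART_LLM")
    fast_llm: str = UserConfigurable("gpt-3.5-turbo", from_env="FAST_LLM")

    @classmethod
    def from_env(cls) -> "AppConfig":
        """Build the config, overriding defaults with any matching environment variables."""
        overrides = {}
        for f in fields(cls):
            env_var = f.metadata.get("from_env")
            if env_var and env_var in os.environ:
                overrides[f.name] = os.environ[env_var]
        return cls(**overrides)


if __name__ == "__main__":
    os.environ["FAST_LLM"] = "gpt-3.5-turbo-16k"
    print(AppConfig.from_env())
```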

Pull Requests

Note: most of the changes mentioned above were made through direct commits. See also the full changelog.

  • Sync release into master by @lc0rp in #5118
  • Add files via upload by @ntindle in #5130
  • Fix speak mode with elevenlabs falling into error by @dannaward in #5127
  • Agent loop v2: Planning & Task Management (part 2) by @Pwuts in #5077
  • fix(docker): add gcc installation in order to build psutil by @alexsoyes in #5059
  • Fix elevenLabs config error by @dannaward in #5131
  • Update init.py to support image_gen commands by @dmoham1476 in #5137
  • Fixed stream elements speech function by @NeonN3mesis in #5146
  • Restructure Repo by @merwanehamadi in #5160
  • Refactor/remove abstract singleton as voice base parent by @collijk in #4931
  • Add support for args to execute_python_file by @MauroDruwel in #3972
  • read me update for monorepo by @SilenNaihin in #5199
  • Change agbenchmark folder by @merwanehamadi in #5203
  • Integrate benchmark and autogpt by @merwanehamadi in #5208
  • Migrate AutoGPT agent to poetry by @Pwuts in #5219
  • Adding Space After Colon in Console Input Prompt by @WilliamEspegren in #5211
  • AutoGPT: use config and LLM provider from core by @Pwuts in #5286
  • Rename Auto-GPT to AutoGPT by @merwanehamadi in #5301
  • AutoGPT: Move all the Agent's prompt generation code into a PromptStrategy by @Pwuts in #5363
  • Fixed stacking prompt instructions by @NeonN3mesis in #5520
  • AutoGPT: Implement Agent Protocol by @Pwuts in #5612
  • Fix typo in exceptions.py by @eltociear in #5813
  • docs: fix typos in markdown files by @shresthasurav in #5871
  • feat: Add support for new models and features from OpenAI's November 6 update by @Pwuts in #6147
  • Improve the accuracy of the extract_dict_from_response method's JSON extraction. by @HawkClaws in #5458
  • Allow port numbers to be used in local host names. e.g. localhost:8888. by @aaron13100 in #5318
  • Streamline documentation for getting started by @Pwuts in #6335
  • Fix docker build. Changing agbenchmark dependency as git reference instead of folder reference. by @warnyul in #6274
  • refactor(agent/config): Modularize Config and revive Azure support by @Pwuts in #6497
  • feat(agent/workspace): Add GCS and S3 FileWorkspace providers by @Pwuts in #6485
  • Add dependencies required to use PostgreSQL by @ntindle in #6558
  • Updating duckduckgo-search version to v4.0.0 to unbreak web_search command by @zedatrix in #6557 d820239

New Contributors 🧙🏼

agbenchmark-v0.0.10

17 Sep 00:02
4e43e4b
Update CI pipy (#5240)

AutoGPT v0.4.7

11 Aug 17:55
bb3a06d

AutoGPT v0.4.7 introduces initial REST API support, powered by e2b's agent protocol SDK. It also includes improvements to prompt generation and support for our new benchmarking tool, Auto-GPT-Benchmarks.

We've also moved our documentation to the Material theme, available at https://docs.agpt.co. And, as usual, we've squashed a few bugs and made under-the-hood improvements.

What's Changed

  • Integrate AutoGPT with Auto-GPT-Benchmarks by @merwanehamadi in #4987
  • Add API via agent-protocol SDK by @ValentaTomas in #5044
  • Fix workspace crashing by @merwanehamadi in #5041
  • Fix runtime error in the API by @ValentaTomas in #5047
  • Change workspace location by @merwanehamadi in #5048
  • Remove delete file by @merwanehamadi in #5050
  • Sync release v0.4.6 with patches back into master by @Pwuts in #5065
  • Improve prompting and prompt generation infrastructure by @Pwuts in #5076
  • Remove append to file by @merwanehamadi in #5051
  • Add categories to command registry by @Pwuts in #5063
  • Do not load disabled commands (faster exec & benchmark runs) by @lc0rp in #5078
  • Verify model compatibility if OPENAI_FUNCTIONS is set by @Pwuts in #5075
  • fix: Nonetype error from command_name.startswith() by @lc0rp in #5079
  • slips of the pen (bloopers) in autogpt/core part of the repo by @cyrus-hawk in #5045
  • Add information on how to improve Auto-GPT with agbenchmark by @merwanehamadi in #5056
  • Use modern material theme for docs by @lc0rp in #5035
  • Move more app files to app package by @collijk in #5036
  • Pass TestSearch benchmark consistently (Add browse_website TOKENS_TO_TRIGGER_SUMMARY) by @lc0rp in #5092
  • Bulleting update & version bump by @lc0rp in #5112

New Contributors

Full Changelog: v0.4.6...v0.4.7

Auto-GPT v0.4.6

28 Jul 12:42
ec47a31

What's Changed

Improvements ✨

  • Integrate plugin.handle_text_embedding hook by @zhanglei1172 in #2804
  • Agent loop v2: Planning & Task Management (part 1: refactoring) by @Pwuts in #4799
  • Add config options to documentation site by @lc0rp in #5034
  • Gracefully handle plugin loading failure by @eyalk11 in #4994
  • Update memory.md with more warnings about memory being disabled by @NeonN3mesis in #5008

Bugfixes 🐛

  • Fix orjson encoding text with UTF-8 surrogates by @ido777 in #3666
  • Fix execute_python_file workspace mount & Windows path formatting by @sohrabsaran in #4996
  • Fix configuring TTS engine by @Pwuts in #5005
  • Bugfix/remove breakpoint from embedding function by @collijk in #5022
  • Bugfix/bad null byte by @collijk in #5033
  • Fix path processing by @Pwuts in #5032
  • Filepath fixes and docs updates to specify when relative paths are expected by @lc0rp in #5042

Re-arch 🏗️

  • runner.cli parsers set as a library by @ph-ausseil in #5021
  • fix the forgotten + symbol in parse_ability_result(...) in parser.py by @cyrus-hawk in #5028
  • Move all application code to an application subpackage by @collijk in #5026

New Contributors

Full Changelog: v0.4.5...v0.4.6

Auto-GPT v0.4.5

19 Jul 18:18
d76317f

This maintenance release includes under-the-hood improvements and bug fixes, such as more accurate token counts for OpenAI functions, faster CI builds, improved plugin handling, and refactoring of the Config class for better maintainability.

Release Highlights 🌟

We have released some documentation updates, including:

  • How to share system logs
  • Auto-GPT re-architecture documentation

New Contributors & Notable Catalysts 🦾

What's Changed 📜

Full Changelog: v0.4.4...v0.4.5

Auto-GPT v0.4.4

11 Jul 22:01
2240033

Auto-GPT v0.4.4 is dedicated to the core re-arch team, led by @collijk.

Release Highlights 🌟

This release is noteworthy for two reasons.

Auto-GPT-4

Firstly, it comes hot on the heels of OpenAI's GA release of GPT-4. Auto-GPT users have eagerly awaited the opportunity to unlock more power by pairing the agent with a GPT-4 model. In v0.4.4, SMART_LLM (formerly SMART_LLM_MODEL) defaults to GPT-4 once again, and we have implemented adjustments to ensure the correct usage of SMART_LLM and FAST_LLM (formerly FAST_LLM_MODEL) throughout the codebase. The smarter model is used consistently for areas requiring state-of-the-art accuracy, such as agent command selection, while the faster model handles tasks that the speedier GPT-3.5-turbo excels at, like summarization.

Note: GPT-4 is costlier, so please review your SMART_* and FAST_* settings. You can also use the --gpt3only and --gpt4only command-line flags to adjust your model preferences at runtime.

autogpt/core

The second reason, and the reason for the dedication at the beginning of these release notes, is equally exciting. The much-anticipated re-arch is now available! The team, led by @collijk, has worked tirelessly over the past few months to put the "Auto" back in Auto-GPT, nearly doubling the code available in the master branch. The autogpt/core folder contains the work from the re-arch project, which is now systematically making its way to the rest of the application, starting with the Configuration modules. Watch for improvements over the next few weeks. There is still much to do, so if you wish to assist, please check out this issue.

New Contributors & Notable Catalysts 🦾

What's Changed 📜

Besides the highlights above, this release cleans up longstanding Azure configuration rough edges, fixes plugin incompatibilities, and plugs security holes. Read on for a detailed list of changes.

Full Changelog: v0.4.3...v0.4.4

Auto-GPT v0.4.3

28 Jun 08:16
80151dd

We're excited to present the 0.4.3 maintenance release of Auto-GPT! This update primarily focuses on refining LLM command execution, extending support for OpenAI's latest models (including the powerful GPT-3.5 Turbo 16k model), and laying the groundwork for future compatibility with OpenAI's innovative function calling feature.

Release Highlights 🌟

  • OpenAI API Key Prompt: Auto-GPT will now courteously prompt users for their OpenAI API key, if it's not already provided.
  • Summarization Enhancements: We've optimized Auto-GPT's use of the LLM context window even further, boosting the effectiveness of summarization tasks.
  • JSON Memory Reading: Support for reading memories from JSON files has been improved, resulting in enhanced task execution.
  • New "replace_in_file" Command: This nifty new feature allows Auto-GPT to modify files without loading them entirely.
  • Enhanced Token Counting: We've refined our token counting system to provide more precise cost estimates (see the sketch below).
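
As a small illustration of tiktoken-based token counting (referenced in the changes below), here is a sketch. It counts tokens for a single string and does not replicate Auto-GPT's full per-message and function-signature accounting; the model name is an assumption.

```python
import tiktoken


def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    """Count tokens the way the given OpenAI model tokenizes text, for cost estimation."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))


if __name__ == "__main__":
    prompt = "You are Auto-GPT, an autonomous agent running on GPT-3.5 / GPT-4."
    print(count_tokens(prompt), "tokens")
```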

Deprecated Commands ❌

As part of our ongoing commitment to refining Auto-GPT, the following commands, which we determined to be either better suited as plugins or redundant, have been retired from the core application:

  • analyze_code
  • write_tests
  • improve_code
  • audio_text
  • web_playwright
  • web_requests

Progress Update on Re-Architecting 🚧

As you may recall, we recently embarked on a significant re-architecting journey to future-proof the Auto-GPT project. We're thrilled to report that elements of this massive overhaul are now being integrated back into the core application. For instance, you may notice less reliance on global state being passed around via singletons.

Stay tuned for further updates and advancements in our future releases! Head over to the discussion forums or Discord to share your feedback on this release; we appreciate your continued support.

New Contributors & Notable Catalysts 🦾

What's Changed 📜

  • Add replace_in_file command to change occurrences of text in a file by @bfalans in #4565
  • Update OpenAI model info and remove duplicate modelsinfo.py by @Pwuts in #4700
  • Implement loading MemoryItems from file in JSONFileMemory by @Pwuts in #4703
  • Count tokens with tiktoken by @merwanehamadi in #4704
  • Refactor module layout of command classes by @erik-megarad in #4706
  • Remove analyze code by @merwanehamadi in #4705
  • Remove write_tests and improve_code by @merwanehamadi in #4707
  • Remove app commands, audio text and playwright by @merwanehamadi in #4711
  • Improve plugin backward compatibility by @lc0rp in #4716
  • Fix summarization happening in first cycle by @merwanehamadi in #4719
  • Bulletin.md update for 0.4.1 release by @lc0rp in #4721
  • Use JSON format for commands signature by @merwanehamadi in #4714
  • Fix execute_command coming from plugins in 0.4.1 by @erik-megarad in #4730
  • Fix execute_command coming from plugins by @erik-megarad in #4729
  • Pass config everywhere in order to get rid of singleton by @merwanehamadi in #4666
  • Remove config from command decorator by @merwanehamadi in #4736
  • Fix issues with execute_python_code responses by @erik-megarad in #4738
  • Retry 503 OpenAI errors by @merwanehamadi in #4745
  • Sync release v0.4.1 back into master by @lc0rp in #4741
  • Merge Release v0.4.2 back to master by @merwanehamadi in #4747
  • Remove config singleton by @merwanehamadi in #4737
  • Make JSON errors more silent by @merwanehamadi in #4748
  • Fix up Python execution commands by @Wladastic in #4756
  • OpenAI Functions Support by @erik-megarad in #4683
  • Create run_task python hook to interface with benchmarks by @merwanehamadi in #4778
  • ❇️ Improved OpenAI API Key Insert to Env by @Qoyyuum in #2486
  • Link all challenges to benchmark python hook by @merwanehamadi in #4786
  • Prevent docker-compose.yml and Dockerfile from being written by @erik-megarad in #4761
  • Only take subclasses of AutoGPTPluginTemplate as plugins by @ppetermann in #4345
  • Filtering out ANSI escape codes in printed assistant thoughts by @lc0rp in #4812
  • Unregister commands incompatible with current config. by @lc0rp in #4815
  • Bulletin.md updates and version toggling by @lc0rp in #4816
  • Release v0.4.3 by @lc0rp in #4802

Full Changelog: v0.4.2...v0.4.3

Auto-GPT v0.4.3-alpha

27 Jun 10:03
Pre-release

We're excited to present the 0.4.3 maintenance release of Auto-GPT! This update primarily focuses on refining LLM command execution, extending support for OpenAI's latest models (including the powerful GPT-3.5 Turbo 16k model), and laying the groundwork for future compatibility with OpenAI's innovative function calling feature.

Release Highlights 🌟

  • OpenAI API Key Prompt: Auto-GPT will now courteously prompt users for their OpenAI API key, if it's not already provided.
  • Summarization Enhancements: We've optimized Auto-GPT's use of the LLM context window even further, boosting the effectiveness of summarization tasks.
  • JSON Memory Reading: Support for reading memories from JSON files has been improved, resulting in enhanced task execution.
  • New "replace_in_file" Command: This nifty new feature allows Auto-GPT to modify files without loading them entirely.
  • Enhanced Token Counting: We've refined our token counting system to provide more precise cost estimates.

Deprecated Commands ❌

As part of our ongoing commitment to refining Auto-GPT, the following commands, which we determined to be either better suited as plugins or redundant, have been retired from the core application:

  • analyze_code
  • write_tests
  • improve_code
  • audio_text
  • web_playwright
  • web_requests

Progress Update on Re-Architecting 🚧

As you may recall, we recently embarked on a significant re-architecting journey to future-proof the Auto-GPT project. We're thrilled to report that elements of this massive overhaul are now being integrated back into the core application. For instance, you may notice less reliance on global state being passed around via singletons.

Stay tuned for further updates and advancements in our future releases! Head over to the discussion forums or Discord to share your feedback on this release; we appreciate your continued support.

New Contributors & Notable Catalysts 🦾

What's Changed 📜

  • Add replace_in_file command to change occurrences of text in a file by @bfalans in #4565
  • Update OpenAI model info and remove duplicate modelsinfo.py by @Pwuts in #4700
  • Implement loading MemoryItems from file in JSONFileMemory by @Pwuts in #4703
  • Count tokens with tiktoken by @merwanehamadi in #4704
  • Refactor module layout of command classes by @erik-megarad in #4706
  • Remove analyze code by @merwanehamadi in #4705
  • Remove write_tests and improve_code by @merwanehamadi in #4707
  • Remove app commands, audio text and playwright by @merwanehamadi in #4711
  • Fix summarization happening in first cycle by @merwanehamadi in #4719
  • Bulletin.md update for 0.4.1 release by @lc0rp in #4721
  • Use JSON format for commands signature by @merwanehamadi in #4714
  • Fix execute_command coming from plugins in 0.4.1 by @erik-megarad in #4730
  • Fix execute_command coming from plugins by @erik-megarad in #4729
  • Pass config everywhere in order to get rid of singleton by @merwanehamadi in #4666
  • Remove config from command decorator by @merwanehamadi in #4736
  • Fix issues with execute_python_code responses by @erik-megarad in #4738
  • Retry 503 OpenAI errors by @merwanehamadi in #4745
  • Sync release v0.4.1 back into master by @lc0rp in #4741
  • Merge Release v0.4.2 back to master by @merwanehamadi in #4747
  • Remove config singleton by @merwanehamadi in #4737
  • Make JSON errors more silent by @merwanehamadi in #4748
  • Fix up Python execution commands by @Wladastic in #4756
  • OpenAI Functions Support by @erik-megarad in #4683
  • Create run_task python hook to interface with benchmarks by @merwanehamadi in #4778
  • ❇️ Improved OpenAI API Key Insert to Env by @Qoyyuum in #2486
  • Link all challenges to benchmark python hook by @merwanehamadi in #4786
  • Prevent docker-compose.yml and Dockerfile from being written by @erik-megarad in #4761
  • Only take subclasses of AutoGPTPluginTemplate as plugins by @ppetermann in #4345

Full Changelog: v0.4.2...v0.4.3-alpha

Auto-GPT v0.4.2 (hotfix)

19 Jun 20:45

OpenAI's 503 ("Service Unavailable") errors have become more frequent over the past hours, so this hotfix retries the API call when a 503 is returned instead of stopping Auto-GPT.
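
For illustration, here is a minimal retry sketch in the spirit of this hotfix, using the pre-v1 openai Python SDK that this release targeted (openai.ChatCompletion and openai.error.ServiceUnavailableError). The retry count and backoff are assumptions, not the exact values used in the hotfix.

```python
import time

import openai  # pre-v1 SDK (openai<1.0), as used by Auto-GPT at the time


def create_chat_completion_with_retry(messages, model="gpt-3.5-turbo", retries=3, backoff=2.0):
    """Retry the completion call when OpenAI returns a 503 (service unavailable)."""
    for attempt in range(retries):
        try:
            # Assumes OPENAI_API_KEY is set in the environment.
            return openai.ChatCompletion.create(model=model, messages=messages)
        except openai.error.ServiceUnavailableError:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (attempt + 1))  # simple linear backoff


if __name__ == "__main__":
    reply = create_chat_completion_with_retry([{"role": "user", "content": "Say hello."}])
    print(reply["choices"][0]["message"]["content"])
```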