
Plugin Support #757

Merged
72 commits
a24ab0e
dynamically load commands from registry
kreneskyp Apr 6, 2023
e2a6ed6
adding tests for CommandRegistry
kreneskyp Apr 7, 2023
b4a0ef9
resolving test failures
kreneskyp Apr 7, 2023
3095591
switch to explicit module imports
kreneskyp Apr 7, 2023
bcc1b5f
Merge branch 'master' into command_registry
kreneskyp Apr 8, 2023
65b626c
Plugins initial
BillSchumacher Apr 11, 2023
0b955c0
Update README.md
BillSchumacher Apr 11, 2023
1af463b
Merge branch 'master' of https://github.com/Significant-Gravitas/Auto…
BillSchumacher Apr 16, 2023
b7a29e7
Refactor prompts into package, make the prompt able to be stored with…
BillSchumacher Apr 16, 2023
2761a5c
Add post_prompt hook
BillSchumacher Apr 16, 2023
e36b748
Add name and role to prompt generator object for maximum customization.
BillSchumacher Apr 16, 2023
68e26bf
Refactor main startup to store AIConfig on Agent for plugin usage.
BillSchumacher Apr 16, 2023
09a5b31
Add on_planning hook.
BillSchumacher Apr 16, 2023
ee42b4d
Add pre_instruction and on_instruction hooks.
BillSchumacher Apr 16, 2023
fc7db7d
Fix bad logic probably.
BillSchumacher Apr 16, 2023
00225e0
Fix another bad implementation detail.
BillSchumacher Apr 16, 2023
397627d
add post_instruction hook
BillSchumacher Apr 16, 2023
17478d6
Add post planning hook
BillSchumacher Apr 16, 2023
83403ad
add pre_command and post_command hooks.
BillSchumacher Apr 16, 2023
abb54df
Add custom commands to execute_command via promptgenerator
BillSchumacher Apr 16, 2023
05bafb9
Fix fstring bug.
BillSchumacher Apr 16, 2023
c544ceb
Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT i…
BillSchumacher Apr 16, 2023
3fadf2c
Blacked
BillSchumacher Apr 16, 2023
ec8ff0f
Merge branch 'command_registry' of https://github.com/kreneskyp/Auto-…
BillSchumacher Apr 16, 2023
df5cc33
move tests and cleanup.
BillSchumacher Apr 16, 2023
167628c
Add fields to disable the command if needed by configuration, blacked.
BillSchumacher Apr 16, 2023
c110f34
Finish integrating command registry
BillSchumacher Apr 17, 2023
03c1377
Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT i…
BillSchumacher Apr 17, 2023
c0aa423
Fix agent remembering do nothing command, use correct google function…
BillSchumacher Apr 17, 2023
81c65af
blacked
BillSchumacher Apr 17, 2023
708374d
fix linting
BillSchumacher Apr 17, 2023
23d3daf
Maybe fix tests, fix safe_path function.
BillSchumacher Apr 17, 2023
d394b03
Fix test
BillSchumacher Apr 17, 2023
3715ebc
Add hooks for chat completion
BillSchumacher Apr 17, 2023
fbd4e06
Add early abort functions.
BillSchumacher Apr 17, 2023
8386188
Fix early abort
BillSchumacher Apr 17, 2023
fe85f07
Fix early abort
BillSchumacher Apr 17, 2023
08ad320
moving load plugins into plugins from main, adding tests
evakhteev Apr 17, 2023
239aa3a
:art: Bring in plugin_template
TaylorBeeston Apr 17, 2023
dea5000
:bug: Fix pre_instruction
TaylorBeeston Apr 17, 2023
d23ada3
:bug: Fix on_planning
TaylorBeeston Apr 17, 2023
f784049
:label: Type plugins field in config
TaylorBeeston Apr 17, 2023
ea67b67
:bug: Minor type fixes
TaylorBeeston Apr 17, 2023
9705f60
'Refactored by Sourcery'
Apr 17, 2023
7f4e388
adding openai plugin loader
evakhteev Apr 17, 2023
9ed5e0f
adding plugin interface instantiation
evakhteev Apr 18, 2023
193c808
separating OpenAI Plugin base class
evakhteev Apr 18, 2023
9fd80a8
tests, model
evakhteev Apr 18, 2023
b84de4f
:recycle: Use AutoGPT template package for the plugin type
TaylorBeeston Apr 18, 2023
894026c
reshaping code and fixing tests
evakhteev Apr 18, 2023
c62c8c6
merge BillSchumacher/plugin-support, conflicts
evakhteev Apr 18, 2023
49e4b75
removing accidentially commited ./docker
evakhteev Apr 18, 2023
ef0216d
Merge pull request #6 from TaylorBeeston/type-fixes
BillSchumacher Apr 18, 2023
59a9986
Merge branch 'plugin-support' of https://github.com/BillSchumacher/Au…
BillSchumacher Apr 18, 2023
ebee041
fix merge
BillSchumacher Apr 18, 2023
b188c2b
Merge pull request #4 from evahteev/_openai-plugin-support
BillSchumacher Apr 18, 2023
085842d
Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT i…
BillSchumacher Apr 18, 2023
7d45de8
fix merge
BillSchumacher Apr 18, 2023
5813592
fix readme
BillSchumacher Apr 18, 2023
4701357
fix test
BillSchumacher Apr 18, 2023
86d3444
isort, add proper skips.
BillSchumacher Apr 18, 2023
221a4b0
I guess linux doesn't like this....
BillSchumacher Apr 19, 2023
3f2d14f
Fix isort?
BillSchumacher Apr 19, 2023
4c7b582
apply black
BillSchumacher Apr 19, 2023
c5b81b5
Adding Allowlisted Plugins via env
Apr 19, 2023
d552360
Merge pull request #10 from riensen/plugin-support
BillSchumacher Apr 19, 2023
23c650c
Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT i…
BillSchumacher Apr 19, 2023
d7679d7
Fix all commands and cleanup
BillSchumacher Apr 19, 2023
16f0e22
linting
BillSchumacher Apr 19, 2023
d876de0
Make tests a bit spicier and fix, maybe.
BillSchumacher Apr 19, 2023
d8fd834
linting
BillSchumacher Apr 19, 2023
c731675
Fix url
BillSchumacher Apr 19, 2023
7 changes: 7 additions & 0 deletions .env.template
@@ -188,3 +188,10 @@ OPENAI_API_KEY=your-openai-api-key
# TW_CONSUMER_SECRET=
# TW_ACCESS_TOKEN=
# TW_ACCESS_TOKEN_SECRET=

################################################################################
### ALLOWLISTED PLUGINS
################################################################################

# ALLOWLISTED_PLUGINS - Comma-separated list of plugin names allowed to load (Example: plugin1,plugin2,plugin3)
ALLOWLISTED_PLUGINS=
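Presumably the comma-separated value is split into a list when the config loads; a minimal sketch of that parsing (the function name here is hypothetical, not taken from the Auto-GPT codebase):

```python
import os


def load_allowlisted_plugins() -> list[str]:
    """Parse ALLOWLISTED_PLUGINS into a list of plugin names."""
    raw = os.getenv("ALLOWLISTED_PLUGINS", "")
    # An empty or unset value yields an empty allowlist rather than [''].
    return [name.strip() for name in raw.split(",") if name.strip()]
```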
2 changes: 2 additions & 0 deletions .gitignore
@@ -157,5 +157,7 @@ vicuna-*
# mac
.DS_Store

openai/

# news
CURRENT_BULLETIN.md
10 changes: 10 additions & 0 deletions .isort.cfg
@@ -0,0 +1,10 @@
[settings]
profile = black
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
use_parentheses = true
ensure_newline_before_comments = true
line_length = 88
sections = FUTURE,STDLIB,THIRDPARTY,FIRSTPARTY,LOCALFOLDER
skip = .tox,__pycache__,*.pyc,venv*/*,reports,venv,env,node_modules,.env,.venv,dist
16 changes: 16 additions & 0 deletions README.md
@@ -254,6 +254,22 @@ export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
```

## Plugins

See https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template for the plugin template.

⚠️💀 WARNING 💀⚠️: Review the code of any plugin you use. Plugins can execute arbitrary Python code and do malicious things, such as stealing your API keys.

Drop the plugin repository's zipfile into the `plugins` folder.

![Download Zip](https://raw.githubusercontent.com/BillSchumacher/Auto-GPT/master/plugin.png)

If you add a plugin's class name to `ALLOWLISTED_PLUGINS` in your `.env`, it will load without prompting; otherwise you'll be asked to confirm before the plugin is loaded:

```
ALLOWLISTED_PLUGINS=example-plugin1,example-plugin2,example-plugin3
```
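As a rough sketch of what a plugin looks like (the real base class lives in the Auto-GPT-Plugin-Template repository linked above, so the class and method shapes here are assumptions inferred from this PR's hooks), each hook is paired with a `can_handle_*` guard that tells Auto-GPT whether to invoke it:

```python
# Hypothetical minimal plugin; the actual template base class may differ.
class MyExamplePlugin:
    def can_handle_post_command(self) -> bool:
        # Only hooks whose can_handle_* guard returns True are invoked.
        return True

    def post_command(self, command_name: str, result: str) -> str:
        # Annotate every command result before it reaches the agent.
        return f"{result} [seen by MyExamplePlugin]"
```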

## Setting Your Cache Type

By default, Auto-GPT uses LocalCache instead of Redis or Pinecone.
95 changes: 63 additions & 32 deletions autogpt/agent/agent.py
@@ -19,18 +19,25 @@ class Agent:
memory: The memory object to use.
full_message_history: The full message history.
next_action_count: The number of actions to execute.
system_prompt: The system prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully.
Currently, the dynamic and customizable information in the system prompt are ai_name, description and goals.

triggering_prompt: The last sentence the AI will see before answering. For Auto-GPT, this prompt is:
Determine which next command to use, and respond using the format specified above:
The triggering prompt is not part of the system prompt because between the system prompt and the triggering
prompt we have contextual information that can distract the AI and make it forget that its goal is to find the next task to achieve.
system_prompt: The system prompt is the initial prompt that defines everything
the AI needs to know to achieve its task successfully.
Currently, the dynamic and customizable information in the system prompt are
ai_name, description and goals.

triggering_prompt: The last sentence the AI will see before answering.
For Auto-GPT, this prompt is:
Determine which next command to use, and respond using the format specified
above:
The triggering prompt is not part of the system prompt because between the
system prompt and the triggering
prompt we have contextual information that can distract the AI and make it
forget that its goal is to find the next task to achieve.
SYSTEM PROMPT
CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant)
TRIGGERING PROMPT

The triggering prompt reminds the AI about its short term meta task (defining the next task)
The triggering prompt reminds the AI about its short term meta task
(defining the next task)
"""
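The three-layer layout the docstring describes — system prompt, then contextual information, then the triggering prompt — can be sketched as a message list (illustrative only; the actual assembly happens inside `chat_with_ai`, and this helper is hypothetical):

```python
def build_messages(system_prompt: str, context: list[str], triggering_prompt: str):
    """Assemble the three-layer prompt: system, context, trigger."""
    messages = [{"role": "system", "content": system_prompt}]
    # Contextual information (memory, prior conversation) sits in the middle.
    messages += [{"role": "system", "content": c} for c in context]
    # The triggering prompt goes last, so the short-term meta task
    # (pick the next command) is the freshest thing the model sees.
    messages.append({"role": "user", "content": triggering_prompt})
    return messages
```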

def __init__(
@@ -39,13 +46,17 @@ def __init__(
memory,
full_message_history,
next_action_count,
command_registry,
config,
system_prompt,
triggering_prompt,
):
self.ai_name = ai_name
self.memory = memory
self.full_message_history = full_message_history
self.next_action_count = next_action_count
self.command_registry = command_registry
self.config = config
self.system_prompt = system_prompt
self.triggering_prompt = triggering_prompt

@@ -73,6 +84,7 @@ def start_interaction_loop(self):
# Send message to AI, get response
with Spinner("Thinking... "):
assistant_reply = chat_with_ai(
self,
self.system_prompt,
self.triggering_prompt,
self.full_message_history,
@@ -81,6 +93,10 @@
) # TODO: This hardcodes the model to use GPT3.5. Make this an argument

assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply)
for plugin in cfg.plugins:
if not plugin.can_handle_post_planning():
continue
assistant_reply_json = plugin.post_planning(self, assistant_reply_json)

# Print Assistant thoughts
if assistant_reply_json != {}:
@@ -89,14 +105,13 @@
try:
print_assistant_thoughts(self.ai_name, assistant_reply_json)
command_name, arguments = get_command(assistant_reply_json)
# command_name, arguments = assistant_reply_json_valid["command"]["name"], assistant_reply_json_valid["command"]["args"]
if cfg.speak_mode:
say_text(f"I want to execute {command_name}")
except Exception as e:
logger.error("Error: \n", str(e))

if not cfg.continuous_mode and self.next_action_count == 0:
### GET USER AUTHORIZATION TO EXECUTE COMMAND ###
# ### GET USER AUTHORIZATION TO EXECUTE COMMAND ###
# Get key press: Prompt the user to press enter to continue or escape
# to exit
logger.typewriter_log(
@@ -168,30 +183,46 @@ def start_interaction_loop(self):
elif command_name == "human_feedback":
result = f"Human feedback: {user_input}"
else:
result = (
f"Command {command_name} returned: "
f"{execute_command(command_name, arguments)}"
for plugin in cfg.plugins:
if not plugin.can_handle_pre_command():
continue
command_name, arguments = plugin.pre_command(
command_name, arguments
)
command_result = execute_command(
self.command_registry,
command_name,
arguments,
self.config.prompt_generator,
)
result = f"Command {command_name} returned: " f"{command_result}"

for plugin in cfg.plugins:
if not plugin.can_handle_post_command():
continue
result = plugin.post_command(command_name, result)
if self.next_action_count > 0:
self.next_action_count -= 1
if command_name != "do_nothing":
memory_to_add = (
f"Assistant Reply: {assistant_reply} "
f"\nResult: {result} "
f"\nHuman Feedback: {user_input} "
)

memory_to_add = (
f"Assistant Reply: {assistant_reply} "
f"\nResult: {result} "
f"\nHuman Feedback: {user_input} "
)

self.memory.add(memory_to_add)
self.memory.add(memory_to_add)

# Check if there's a result from the command append it to the message
# history
if result is not None:
self.full_message_history.append(create_chat_message("system", result))
logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result)
else:
self.full_message_history.append(
create_chat_message("system", "Unable to execute command")
)
logger.typewriter_log(
"SYSTEM: ", Fore.YELLOW, "Unable to execute command"
)
# Check if there's a result from the command append it to the message
# history
if result is not None:
self.full_message_history.append(
create_chat_message("system", result)
)
logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result)
else:
self.full_message_history.append(
create_chat_message("system", "Unable to execute command")
)
logger.typewriter_log(
"SYSTEM: ", Fore.YELLOW, "Unable to execute command"
)
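The `pre_command`/`post_command` wrapping introduced above is a simple pipeline: each plugin may rewrite the command before execution and transform the result after. A standalone sketch of that flow (hook names taken from this diff; everything else, including the helper itself, is hypothetical):

```python
def run_command_with_plugins(plugins, execute, command_name, arguments):
    # Give each plugin a chance to rewrite the command before execution.
    for plugin in plugins:
        if not plugin.can_handle_pre_command():
            continue
        command_name, arguments = plugin.pre_command(command_name, arguments)

    result = f"Command {command_name} returned: {execute(command_name, arguments)}"

    # Let each plugin transform the result after execution.
    for plugin in plugins:
        if not plugin.can_handle_post_command():
            continue
        result = plugin.post_command(command_name, result)
    return result
```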
56 changes: 49 additions & 7 deletions autogpt/agent/agent_manager.py
@@ -1,10 +1,11 @@
"""Agent manager for managing GPT agents"""
from __future__ import annotations

from typing import Union
from typing import List, Union

from autogpt.config.config import Singleton
from autogpt.config.config import Config, Singleton
from autogpt.llm_utils import create_chat_completion
from autogpt.types.openai import Message


class AgentManager(metaclass=Singleton):
@@ -13,6 +14,7 @@ class AgentManager(metaclass=Singleton):
def __init__(self):
self.next_key = 0
self.agents = {} # key, (task, full_message_history, model)
self.cfg = Config()

# Create new GPT agent
# TODO: Centralise use of create_chat_completion() to globally enforce token limit
@@ -28,26 +30,44 @@ def create_agent(self, task: str, prompt: str, model: str) -> tuple[int, str]:
Returns:
The key of the new agent
"""
messages = [
messages: List[Message] = [
{"role": "user", "content": prompt},
]

for plugin in self.cfg.plugins:
if not plugin.can_handle_pre_instruction():
continue
if plugin_messages := plugin.pre_instruction(messages):
messages.extend(iter(plugin_messages))
# Start GPT instance
agent_reply = create_chat_completion(
model=model,
messages=messages,
)

# Update full message history
messages.append({"role": "assistant", "content": agent_reply})

plugins_reply = ""
for i, plugin in enumerate(self.cfg.plugins):
if not plugin.can_handle_on_instruction():
continue
if plugin_result := plugin.on_instruction(messages):
sep = "\n" if i else ""
plugins_reply = f"{plugins_reply}{sep}{plugin_result}"

if plugins_reply and plugins_reply != "":
messages.append({"role": "assistant", "content": plugins_reply})
key = self.next_key
# This is done instead of len(agents) to make keys unique even if agents
# are deleted
self.next_key += 1

self.agents[key] = (task, messages, model)

for plugin in self.cfg.plugins:
if not plugin.can_handle_post_instruction():
continue
agent_reply = plugin.post_instruction(agent_reply)

return key, agent_reply

def message_agent(self, key: str | int, message: str) -> str:
@@ -65,15 +85,37 @@ def message_agent(self, key: str | int, message: str) -> str:
# Add user message to message history before sending to agent
messages.append({"role": "user", "content": message})

for plugin in self.cfg.plugins:
if not plugin.can_handle_pre_instruction():
continue
if plugin_messages := plugin.pre_instruction(messages):
for plugin_message in plugin_messages:
messages.append(plugin_message)

# Start GPT instance
agent_reply = create_chat_completion(
model=model,
messages=messages,
)

# Update full message history
messages.append({"role": "assistant", "content": agent_reply})

plugins_reply = agent_reply
for i, plugin in enumerate(self.cfg.plugins):
if not plugin.can_handle_on_instruction():
continue
if plugin_result := plugin.on_instruction(messages):
sep = "\n" if i else ""
plugins_reply = f"{plugins_reply}{sep}{plugin_result}"
# Update full message history
if plugins_reply and plugins_reply != "":
messages.append({"role": "assistant", "content": plugins_reply})

for plugin in self.cfg.plugins:
if not plugin.can_handle_post_instruction():
continue
agent_reply = plugin.post_instruction(agent_reply)

return agent_reply
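Both `create_agent` and `message_agent` now follow the same instruction-hook sequence: `pre_instruction` may inject messages before the LLM call, `on_instruction` may contribute plugin replies after it, and `post_instruction` may rewrite the final reply. A condensed sketch of that sequence (hook names from this diff; the helper and its call shape are assumptions):

```python
def run_instruction(plugins, complete, messages):
    # pre_instruction: plugins may inject extra messages before the LLM call.
    for plugin in plugins:
        if plugin.can_handle_pre_instruction():
            if extra := plugin.pre_instruction(messages):
                messages.extend(extra)

    reply = complete(messages)
    messages.append({"role": "assistant", "content": reply})

    # on_instruction: plugins may contribute replies of their own.
    plugin_replies = [
        r
        for plugin in plugins
        if plugin.can_handle_on_instruction()
        and (r := plugin.on_instruction(messages))
    ]
    if plugin_replies:
        messages.append({"role": "assistant", "content": "\n".join(plugin_replies)})

    # post_instruction: plugins may rewrite the final reply.
    for plugin in plugins:
        if plugin.can_handle_post_instruction():
            reply = plugin.post_instruction(reply)
    return reply
```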

def list_agents(self) -> list[tuple[str | int, str]]:
@@ -86,7 +128,7 @@ def list_agents(self) -> list[tuple[str | int, str]]:
# Return a list of agent keys and their tasks
return [(key, task) for key, (task, _, _) in self.agents.items()]

def delete_agent(self, key: Union[str, int]) -> bool:
def delete_agent(self, key: str | int) -> bool:
"""Delete an agent from the agent manager

Args: