
feat(memory): chat history memory support #280

Merged
merged 87 commits on Nov 5, 2023
Changes from 64 commits
Commits
87 commits
501e195
init memory
dandansamax Jul 15, 2023
ce37411
Merge remote-tracking branch 'origin/master' into memory_tianqi
Benjamin-eecs Aug 21, 2023
f137dd9
feat: init memory subpkg design
Benjamin-eecs Aug 21, 2023
7f157dc
feat: init memory subpkg design
Benjamin-eecs Aug 21, 2023
65d41ce
[WIP] memory support
dandansamax Aug 23, 2023
d60c101
Add tests for ChatHistoryMemory & LosslessStorage
dandansamax Aug 24, 2023
7dac366
[WIP] update vector database
dandansamax Aug 24, 2023
dc6bf0a
[WIP] add qdrant test
dandansamax Aug 27, 2023
6ddc190
Add Qdrant test
dandansamax Aug 27, 2023
031fc1d
Implement VectorDBMemory
dandansamax Aug 27, 2023
8d48140
Add test for openai embedding
dandansamax Aug 28, 2023
89a5a89
Add test for VectorDBMemory
dandansamax Aug 28, 2023
3050ea5
Fix test error
dandansamax Aug 28, 2023
770998e
fix test error
dandansamax Aug 28, 2023
a9ad521
Merge branch 'master' into memory_liubo
dandansamax Aug 28, 2023
2f9d9bb
Solve JSON serialising for Enum
dandansamax Aug 31, 2023
fd556ed
Merge remote-tracking branch 'origin/master' into memory_liubo
dandansamax Aug 31, 2023
4f8254d
[WIP] integrate memory into ChatAgent
dandansamax Sep 3, 2023
73d305f
integrate memory into ChatAgent
dandansamax Sep 5, 2023
faace4a
Merge remote-tracking branch 'upstream/master' into memory_liubo
dandansamax Sep 5, 2023
45d950f
Update poetry.lock
dandansamax Sep 5, 2023
3ae936b
Add docs for Memories
dandansamax Sep 5, 2023
5fa1372
[WIP] Add docs for vector_storage
dandansamax Sep 5, 2023
8d76b2d
Update docs for vector_storage
dandansamax Sep 6, 2023
55d81de
Merge remote-tracking branch 'origin/master' into memory_liubo
dandansamax Sep 6, 2023
122d631
Update docs for lossless_storage
dandansamax Sep 6, 2023
b4fa836
remove vectordb and embedding
dandansamax Sep 6, 2023
45de006
Fix mypy error
dandansamax Sep 6, 2023
d711ae1
Comment out a metadict validation
dandansamax Sep 6, 2023
57451e9
Rename "lossless storage" to "dict storage"
dandansamax Sep 7, 2023
e0cb2e7
Rename
dandansamax Sep 7, 2023
9b6677b
Move class _CamelJSONEncoder to the file scope
dandansamax Sep 10, 2023
44443c9
Add doc
dandansamax Sep 10, 2023
6e807a9
Update doc
dandansamax Sep 10, 2023
17fafe7
Merge branch 'master' into chat_history_memory
Benjamin-eecs Sep 10, 2023
5e724cb
Revert "Merge branch 'master' into chat_history_memory"
dandansamax Sep 10, 2023
83dfc31
[WIP] Refactor memory: Add Memeory Record
dandansamax Sep 12, 2023
ff1ef43
Revert "Revert "Merge branch 'master' into chat_history_memory""
dandansamax Sep 12, 2023
be4c248
Revert changes of BaseMessage
dandansamax Sep 13, 2023
d63fa68
Merge remote-tracking branch 'origin/master' into chat_history_memory
dandansamax Sep 13, 2023
5911f63
Update poetry.lock
dandansamax Sep 13, 2023
7467512
Update docs
dandansamax Sep 13, 2023
a152306
Merge remote-tracking branch 'origin/master' into chat_history_memory
dandansamax Sep 13, 2023
0596679
Move self.memory type annotation to ChatAgent
dandansamax Sep 13, 2023
de6abc8
Merge remote-tracking branch 'origin/master' into chat_history_memory
dandansamax Sep 19, 2023
870debe
Revert change of `validate_meta_dict_keys`
dandansamax Sep 19, 2023
3425d1e
[WIP] Add Context Creator
dandansamax Oct 4, 2023
cac742c
Create seperate storage package
dandansamax Oct 4, 2023
2208ac5
Update memory get context api
dandansamax Oct 14, 2023
0af36df
Finish adding context creator
dandansamax Oct 14, 2023
ea8acba
Fix mypy error
dandansamax Oct 14, 2023
8b963d6
Merge remote-tracking branch 'origin/master' into chat_history_memory
dandansamax Oct 14, 2023
296f628
Fix test error
dandansamax Oct 14, 2023
3a9ca49
Update chat history memory docs
dandansamax Oct 15, 2023
0fdd363
Add docs for context creator
dandansamax Oct 15, 2023
304e894
Update docs
dandansamax Oct 15, 2023
c2f1576
Merge remote-tracking branch 'origin/master' into chat_history_memory
dandansamax Oct 23, 2023
5eb09c3
Update docstring
dandansamax Oct 24, 2023
213479c
Move MESSAGE_TYPES inside MemoryRecord
dandansamax Oct 24, 2023
3f19af1
Merge remote-tracking branch 'origin/master' into chat_history_memory
dandansamax Oct 26, 2023
a79a1fa
Remove TokenLimitTerminator from ChatAgent
dandansamax Oct 26, 2023
3fd7677
Fix test error
dandansamax Oct 26, 2023
365d4b6
Set terminated in step_token_exceed()
dandansamax Oct 26, 2023
ee73f26
Rename module "memories" and "storages"
dandansamax Oct 30, 2023
3c00cc4
Use `udpate_memory()`
dandansamax Nov 1, 2023
b636aba
Apply format and docs suggestions from code review
dandansamax Nov 1, 2023
2eb0831
Update agent docs and `update_memory()`
dandansamax Nov 1, 2023
b33401b
Apply variable renaming suggestions from code review
dandansamax Nov 1, 2023
2898316
Rename variables in memories module and format doc
dandansamax Nov 1, 2023
b44bd4f
Add `memory_record` attribute docstring
dandansamax Nov 1, 2023
f1bae12
Rename test dirs
dandansamax Nov 1, 2023
7ae2ecc
Rename dirs
dandansamax Nov 1, 2023
2aed0a9
Update dependency
dandansamax Nov 1, 2023
b0c23d0
Fix test error
dandansamax Nov 1, 2023
5ab77cc
Rename DictStorage to KeyValueStorage
dandansamax Nov 1, 2023
b50c391
Add docstrings for key-value storages
dandansamax Nov 1, 2023
11a6856
Allow multiple system message in ChatHistoryMemory
dandansamax Nov 2, 2023
b196aaa
Rename `DefaultContextCreator`
dandansamax Nov 2, 2023
3005803
Fix test error
dandansamax Nov 2, 2023
8aa5eea
Rename ImportanceBasedContextCreator
dandansamax Nov 2, 2023
9f85a66
Import context creators in __init__.py
dandansamax Nov 2, 2023
3fe5bf7
Update imports location
dandansamax Nov 3, 2023
68067cd
fix
lightaime Nov 5, 2023
020e9da
fix
lightaime Nov 5, 2023
8ac64a4
Merge branch 'master' into chat_history_memory
lightaime Nov 5, 2023
357dbab
Style fix
dandansamax Nov 5, 2023
81585fc
Modify test_search_wiki_with_ambiguity
dandansamax Nov 5, 2023
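Two of the commits above ("Solve JSON serialising for Enum" and "Move class _CamelJSONEncoder to the file scope") deal with JSON-encoding Enum members, which `json.dumps` rejects by default. A minimal sketch of that kind of encoder — the class name `EnumJSONEncoder`, the `RoleType` values, and the encode-by-value scheme are illustrative assumptions here, not the PR's actual `_CamelJSONEncoder`:

```python
import json
from enum import Enum


class RoleType(Enum):
    USER = "user"
    ASSISTANT = "assistant"


class EnumJSONEncoder(json.JSONEncoder):
    """JSON encoder that serialises Enum members by their value."""

    def default(self, obj):
        if isinstance(obj, Enum):
            # Store the enum's value so a later load can map it back
            # via RoleType(value).
            return obj.value
        return super().default(obj)


payload = {"role": RoleType.USER, "content": "hello"}
encoded = json.dumps(payload, cls=EnumJSONEncoder)
print(encoded)
```

Moving such an encoder to file scope (as the commit does) lets every serialisation site in the module share one class instead of redefining it per call.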
147 changes: 53 additions & 94 deletions camel/agents/chat_agent.py
@@ -24,35 +24,16 @@
from camel.agents import BaseAgent
from camel.configs import BaseConfig, ChatGPTConfig
from camel.functions import OpenAIFunction
from camel.memories import BaseMemory, ChatHistoryMemory, MemoryRecord
from camel.memories.context_creator.default import DefaultContextCreator
from camel.messages import BaseMessage, FunctionCallingMessage, OpenAIMessage
from camel.models import BaseModelBackend, ModelFactory
from camel.responses import ChatAgentResponse
from camel.terminators import ResponseTerminator, TokenLimitTerminator
from camel.typing import ModelType, RoleType
from camel.terminators import ResponseTerminator
from camel.typing import ModelType, OpenAIBackendRole, RoleType
from camel.utils import get_model_encoding, openai_api_key_required


@dataclass(frozen=True)
class ChatRecord:
r"""Historical records of who made what message.

Attributes:
role_at_backend (str): Role of the message that mirrors OpenAI
message role that may be `system` or `user` or `assistant`.
message (BaseMessage): Message payload.
"""
role_at_backend: str
message: BaseMessage

def to_openai_message(self):
r"""Converts the payload message to OpenAI-compatible format.

Returns:
OpenAIMessage: OpenAI-compatible message
"""
return self.message.to_openai_message(self.role_at_backend)


@dataclass(frozen=True)
class FunctionCallingRecord:
r"""Historical records of functions called in the conversation.
@@ -86,8 +67,10 @@ class ChatAgent(BaseAgent):
system_message (BaseMessage): The system message for the chat agent.
model (ModelType, optional): The LLM model to use for generating
responses. (default :obj:`ModelType.GPT_3_5_TURBO`)
model_config (Any, optional): Configuration options for the LLM model.
(default: :obj:`None`)
model_config (BaseConfig, optional): Configuration options for the
LLM model. (default: :obj:`None`)
memory (BaseMemory, optional): The agent memory for managing chat
messages. (default: :obj:`ChatHistoryMemory()`)
message_window_size (int, optional): The maximum number of previous
messages to include in the context window. If `None`, no windowing
is performed. (default: :obj:`None`)
@@ -105,7 +88,9 @@ def __init__(
system_message: BaseMessage,
model: Optional[ModelType] = None,
model_config: Optional[BaseConfig] = None,
memory: Optional[BaseMemory] = None,
message_window_size: Optional[int] = None,
token_limit: Optional[int] = None,
output_language: Optional[str] = None,
function_list: Optional[List[OpenAIFunction]] = None,
response_terminators: Optional[List[ResponseTerminator]] = None,
@@ -121,7 +106,6 @@

self.model: ModelType = (model if model is not None else
ModelType.GPT_3_5_TURBO)
self.message_window_size: Optional[int] = message_window_size

self.func_dict: Dict[str, Callable] = {}
if function_list is not None:
@@ -131,13 +115,16 @@

self.model_backend: BaseModelBackend = ModelFactory.create(
self.model, self.model_config.__dict__)
self.model_token_limit: int = self.model_backend.token_limit
self.model_token_limit = token_limit or self.model_backend.token_limit
context_creator = DefaultContextCreator(
self.model_backend.token_counter,
self.model_token_limit,
)
self.memory: BaseMemory = memory or ChatHistoryMemory(
context_creator, window_size=message_window_size)

self.terminated: bool = False
self.token_limit_terminator = TokenLimitTerminator(
self.model_token_limit)
self.response_terminators = response_terminators or []
self.stored_messages: List[ChatRecord]
self.init_messages()

def reset(self):
@@ -149,7 +136,6 @@ def reset(self):
"""
self.terminated = False
self.init_messages()
self.token_limit_terminator.reset()
for terminator in self.response_terminators:
terminator.reset()

@@ -182,6 +168,16 @@ def is_function_calling_enabled(self) -> bool:
"""
return len(self.func_dict) > 0

def update_memory(self, message: BaseMessage,
role: OpenAIBackendRole) -> None:
r"""Updates the agent memory with a new message.
Args:
message (BaseMessage): The new message to add to the stored
messages.
role (OpenAIBackendRole): The backend role type.
"""
self.memory.write_record(MemoryRecord(message, role))

def set_output_language(self, output_language: str) -> BaseMessage:
r"""Sets the output language for the system message. This method
updates the output language for the system message. The output
@@ -232,24 +228,10 @@ def init_messages(self) -> None:
r"""Initializes the stored messages list with the initial system
message.
"""
self.stored_messages = [ChatRecord('system', self.system_message)]

def update_messages(self, role: str,
message: BaseMessage) -> List[ChatRecord]:
r"""Updates the stored messages list with a new message.

Args:
role (str): Role of the message at the backend.
message (BaseMessage): The new message to add to the stored
messages.

Returns:
List[BaseMessage]: The updated stored messages.
"""
if role not in {'system', 'user', 'assistant', 'function'}:
raise ValueError(f"Unsupported role {role}")
self.stored_messages.append(ChatRecord(role, message))
return self.stored_messages
system_record = MemoryRecord(self.system_message,
OpenAIBackendRole.SYSTEM)
self.memory.clear()
self.memory.write_record(system_record)

def submit_message(self, message: BaseMessage) -> None:
r"""Submits the externally provided message as if it were an answer of
@@ -260,7 +242,8 @@ def submit_message(self, message: BaseMessage) -> None:
message (BaseMessage): An external message to be added as an
assistant response.
"""
self.stored_messages.append(ChatRecord('assistant', message))
record = MemoryRecord(message, OpenAIBackendRole.ASSISTANT)
self.memory.write_record(record)

@retry(wait=wait_exponential(min=5, max=60), stop=stop_after_attempt(5))
@openai_api_key_required
@@ -282,23 +265,21 @@ def step(
a boolean indicating whether the chat session has terminated,
and information about the chat session.
"""
messages = self.update_messages('user', input_message)
record = MemoryRecord(input_message, OpenAIBackendRole.USER)
self.memory.write_record(record)

output_messages: List[BaseMessage]
info: Dict[str, Any]
called_funcs: List[FunctionCallingRecord] = []
while True:
# Format messages and get the token number
openai_messages: Optional[List[OpenAIMessage]]
num_tokens: int
openai_messages, num_tokens = self.preprocess_messages(messages)

# Terminate when number of tokens exceeds the limit
self.terminated, termination_reason = \
self.token_limit_terminator.is_terminated(num_tokens)
if self.terminated and termination_reason is not None:
return self.step_token_exceed(num_tokens, called_funcs,
termination_reason)
try:
openai_messages, num_tokens = self.memory.get_context()
except RuntimeError as e:
return self.step_token_exceed(e.args[1], called_funcs,
"max_tokens_exceeded")

# Obtain LLM's response and validate it
response = self.model_backend.run(openai_messages)
Expand All @@ -311,16 +292,23 @@ def step(
output_messages, finish_reasons, usage_dict, response_id = (
self.handle_stream_response(response, num_tokens))

if self.is_function_calling_enabled(
) and finish_reasons[0] == 'function_call':
if (self.is_function_calling_enabled()
and finish_reasons[0] == 'function_call'):
# Do function calling
func_assistant_msg, func_result_msg, func_record = (
self.step_function_call(response))

# Update the messages
messages = self.update_messages('assistant',
func_assistant_msg)
messages = self.update_messages('function', func_result_msg)
func_assistant_record = MemoryRecord(
func_assistant_msg,
OpenAIBackendRole.ASSISTANT,
)
func_result_record = MemoryRecord(
func_result_msg,
OpenAIBackendRole.FUNCTION,
)
self.memory.write_records(
[func_assistant_record, func_result_record])
called_funcs.append(func_record)
else:
# Function calling disabled or chat stopped
@@ -352,35 +340,6 @@ def step(

return ChatAgentResponse(output_messages, self.terminated, info)

def preprocess_messages(
self,
messages: List[ChatRecord]) -> Tuple[List[OpenAIMessage], int]:
r"""Truncate the list of messages if message window is defined and
the current length of message list is beyond the window size. Then
convert the list of messages to OpenAI's input format and calculate
the number of tokens.

Args:
messages (List[ChatRecord]): The list of structs containing
information about previous chat messages.

Returns:
tuple: A tuple containing the truncated list of messages in
OpenAI's input format and the number of tokens.
"""

if (self.message_window_size
is not None) and (len(messages) > self.message_window_size):
messages = [ChatRecord('system', self.system_message)
] + messages[-self.message_window_size:]

openai_messages: List[OpenAIMessage]
openai_messages = [record.to_openai_message() for record in messages]
num_tokens = self.model_backend.count_tokens_from_messages(
openai_messages)

return openai_messages, num_tokens

def validate_model_response(self, response: Any) -> None:
r"""Validate the type of the response returned by the model.

@@ -480,7 +439,7 @@ def step_token_exceed(self, num_tokens: int,
ChatAgentResponse: The struct containing trivial outputs and
information about token number and called functions.
"""

self.terminated = True
output_messages: List[BaseMessage] = []

info = self.get_info(
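The chat_agent.py diff above replaces the agent's internal `stored_messages` list with a `BaseMemory` the agent writes `MemoryRecord`s into and reads context from via `get_context()`, which signals overflow with a `RuntimeError` carrying the token count in `e.args[1]`. A self-contained stand-in sketch of that flow — the classes below are simplified assumptions (word-count "tokens", plain dict messages), not the real camel implementations:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List, Tuple


class OpenAIBackendRole(Enum):
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"
    FUNCTION = "function"


@dataclass(frozen=True)
class MemoryRecord:
    content: str
    role: OpenAIBackendRole


class ChatHistoryMemory:
    """Stand-in: stores records and renders them as backend messages."""

    def __init__(self, token_limit: int) -> None:
        self._records: List[MemoryRecord] = []
        self._token_limit = token_limit

    def write_record(self, record: MemoryRecord) -> None:
        self._records.append(record)

    def write_records(self, records: List[MemoryRecord]) -> None:
        self._records.extend(records)

    def clear(self) -> None:
        self._records.clear()

    def get_context(self) -> Tuple[List[Dict[str, str]], int]:
        messages = [{"role": r.role.value, "content": r.content}
                    for r in self._records]
        # Fake token counting by whitespace words for the sketch.
        num_tokens = sum(len(r.content.split()) for r in self._records)
        if num_tokens > self._token_limit:
            # Mirrors the diff: step() catches this and calls
            # step_token_exceed(e.args[1], ...).
            raise RuntimeError("max_tokens_exceeded", num_tokens)
        return messages, num_tokens


memory = ChatHistoryMemory(token_limit=50)
memory.clear()
memory.write_record(MemoryRecord("You are helpful.", OpenAIBackendRole.SYSTEM))
memory.write_record(MemoryRecord("What is CAMEL?", OpenAIBackendRole.USER))
context, num_tokens = memory.get_context()
print(len(context), num_tokens)
```

Compared with the removed `preprocess_messages()`, this moves windowing and token accounting behind the memory interface, so the agent loop only writes records and asks for a ready-made context.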
10 changes: 7 additions & 3 deletions camel/agents/critic_agent.py
@@ -18,9 +18,10 @@
from colorama import Fore

from camel.agents import ChatAgent
from camel.memories import BaseMemory, MemoryRecord
from camel.messages import BaseMessage
from camel.responses import ChatAgentResponse
from camel.typing import ModelType
from camel.typing import ModelType, OpenAIBackendRole
from camel.utils import get_first_int, print_text_animated


@@ -49,13 +50,14 @@ def __init__(
system_message: BaseMessage,
model: ModelType = ModelType.GPT_3_5_TURBO,
model_config: Optional[Any] = None,
memory: Optional[BaseMemory] = None,
message_window_size: int = 6,
retry_attempts: int = 2,
verbose: bool = False,
logger_color: Any = Fore.MAGENTA,
) -> None:
super().__init__(system_message, model=model,
model_config=model_config,
model_config=model_config, memory=memory,
message_window_size=message_window_size)
self.options_dict: Dict[str, str] = dict()
self.retry_attempts = retry_attempts
@@ -106,7 +108,9 @@ def get_option(self, input_message: BaseMessage) -> str:
raise RuntimeError("Critic step failed.")

critic_msg = critic_response.msg
self.update_messages('assistant', critic_msg)
critic_record = MemoryRecord(critic_msg,
OpenAIBackendRole.ASSISTANT)
self.memory.write_records([critic_record])
if self.verbose:
print_text_animated(self.logger_color + "\n> Critic response: "
f"\x1b[3m{critic_msg.content}\x1b[0m\n")
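The critic_agent.py diff above threads an optional `memory` through to `ChatAgent.__init__`, which falls back to a fresh `ChatHistoryMemory` when none is given (`self.memory = memory or ChatHistoryMemory(...)`). A stand-in sketch of this injection pattern — for instance to share one history across agents; all classes below are simplified assumptions, not the camel ones:

```python
from typing import List, Optional


class BaseMemory:
    """Stand-in base: a list of stringly-typed records."""

    def __init__(self) -> None:
        self.records: List[str] = []

    def write_record(self, record: str) -> None:
        self.records.append(record)


class ChatHistoryMemory(BaseMemory):
    pass


class Agent:
    def __init__(self, system_message: str,
                 memory: Optional[BaseMemory] = None) -> None:
        # Mirrors the diff: use the injected memory if provided,
        # otherwise build a private default.
        self.memory = memory or ChatHistoryMemory()
        self.memory.write_record(system_message)


shared = ChatHistoryMemory()
a = Agent("sys-a", memory=shared)   # injected: a and b share one history
b = Agent("sys-b", memory=shared)
c = Agent("sys-c")                  # default: private history
print(len(shared.records), len(c.memory.records))
```

Typing the parameter as the abstract `BaseMemory` (rather than a concrete class) is what lets callers swap in other backends, such as the vector-store memory explored in earlier commits.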
19 changes: 19 additions & 0 deletions camel/memories/__init__.py
@@ -0,0 +1,19 @@
# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
# Licensed under the Apache License, Version 2.0 (the “License”);
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an “AS IS” BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========

from .base import BaseMemory
from .chat_history_memory import ChatHistoryMemory
from .memory_record import MemoryRecord

__all__ = ['BaseMemory', 'ChatHistoryMemory', 'MemoryRecord']