Memory class field not exposed in AgentExecutor #80
Hey @orenagiv, Thanks for opening the issue. Indeed, the `memory` field was not exposed in the `AgentExecutor`. I've just fixed it and I'll ship a new release later today. For your example, it will look something like this:

```dart
import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
  final chatOpenAI = ChatOpenAI(
    apiKey: openaiApiKey,
    temperature: 0.0,
    maxTokens: 512,
    model: 'gpt-3.5-turbo-0613',
  );
  final tool = BaseTool.fromFunction(
    name: 'search',
    description: 'Tool for searching the web',
    inputJsonSchema: const {
      'type': 'object',
      'properties': {
        'query': {
          'type': 'string',
          'description': 'The query to search for',
        },
        'n': {
          'type': 'number',
          'description': 'The number of results to return',
        },
      },
      'required': ['query'],
    },
    func: (final Map<String, dynamic> toolInput) async {
      final query = toolInput['query'];
      final n = toolInput['n'];
      print('Function Call Arguments: $query | n=$n');
      // TODO: implement search
      // Mock results
      final functionResponse = [
        'result1',
        'result2',
      ];
      return 'Results:\n$functionResponse';
    },
  );
  final agent = OpenAIFunctionsAgent.fromLLMAndTools(
    llm: chatOpenAI,
    tools: [tool],
    extraPromptMessages: [
      const MessagesPlaceholder(variableName: BaseMemory.defaultMemoryKey),
    ],
  );
  final memory = ConversationBufferMemory(returnMessages: true);
  final executor = AgentExecutor(
    agent: agent,
    tools: [tool],
    memory: memory,
  );
  while (true) {
    // Get user input.
    stdout.write('> ');
    final query = stdin.readLineSync() ?? '';
    // Chat Complete by OpenAI.
    final aiMessage = await executor.run(query);
    // Output AI response.
    stdout.writeln(aiMessage);
  }
}
```

Note that I'm using […]. I've also added a test that verifies this scenario: `langchain_dart/packages/langchain_openai/test/agents/functions_test.dart`, lines 108 to 113 in d58f117.
Let me know if that covers your use case 🙂
This is awesome! Quick question: I've started working on a PR with the memory updates (which you already applied :) and also with something like this: […]
And in […]
I'm not yet very familiar with LangChain concepts, so maybe the way LangChain chains & tools work behind the scenes makes my suggestion above redundant?
Hey @davidmigloz
Indeed, the function message is not stored: when you add memory to an agent executor, it only saves the inputs and outputs of the agent executor (whereas the function call is an internal call of the agent). Thanks for the PR! I'll review it now.
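A minimal sketch in plain Dart of the behavior described above (the class and method names are hypothetical stand-ins, not the actual langchain_dart API): executor-level memory only records what crosses the executor boundary, so the agent's internal function call and tool result never reach it.

```dart
// Hypothetical stand-in for an executor-level buffer memory: it only ever
// sees the top-level input and the final output of a run.
class BufferMemorySketch {
  final List<String> messages = [];

  // Records only what crosses the executor boundary; the agent's internal
  // function call and the tool's raw response are invisible here.
  void saveContext({required String input, required String output}) {
    messages.add('Human: $input');
    messages.add('AI: $output');
  }
}

void main() {
  final memory = BufferMemorySketch();
  // Internally the agent issued a function call and received a tool result,
  // but only the user query and the final answer reach the memory.
  memory.saveContext(input: 'search for cats', output: 'Here are the results');
  print(memory.messages); // [Human: search for cats, AI: Here are the results]
}
```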
Hey @orenagiv, I've just pushed some improvements in the […]. I've updated the test to use this approach:
Now it stores all the messages properly, including the function message. Let me know if that works for you, and thanks again for the PR!
Thanks so much @davidmigloz! I noticed that the function messages do indeed need to be part of the memory (which is then sent back to the chat model as the messages list), because in many cases I want it to "remember" specific properties from previous function messages, so it can use them in the next tool call. Say, why did you choose to saveContext with a […]? P.S. […]
Works like a charm btw @davidmigloz :)) |
Indeed, good catch; I didn't notice it. Now all the messages should be saved with the proper type.
That's a good question. The tools passed to the agent let it know what actions it can take: the agent needs to know about the tools so that it can decide which one to invoke. The tools passed to the executor allow it to actually execute a tool when the agent decides to invoke one: the executor needs the actual functions representing the tools so that it can call them. But you are right, it is somewhat redundant, as the executor can get the tools from the agent. Now you just need to pass the tools to the agent. Thanks for helping to improve the API!
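As a rough sketch of the resulting design (plain Dart with hypothetical class names, not the real langchain_dart signatures): the executor no longer takes its own tool list, it derives it from the agent, which removes the duplication discussed above.

```dart
// Hypothetical stand-ins for the agent/executor relationship.
class Tool {
  final String name;
  Tool(this.name);
}

class Agent {
  // The agent knows its tools so it can decide which one to invoke.
  final List<Tool> tools;
  Agent({required this.tools});
}

class Executor {
  final Agent agent;
  Executor({required this.agent}); // no separate tools parameter anymore

  // The executor looks the tools up on the agent when it needs to run one.
  List<Tool> get tools => agent.tools;
}

void main() {
  final agent = Agent(tools: [Tool('search')]);
  final executor = Executor(agent: agent);
  print(executor.tools.map((t) => t.name).toList()); // [search]
}
```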
Thanks @davidmigloz! So if I understand correctly, at the Executor level I now don't need to define the tools nor the memory.
Correct! Now you just need to pass the agent to the `AgentExecutor`.
Hey @davidmigloz!
Started to use this (amazing) Dart package 🙏
I was wondering: what's the recommended way of using the `OpenAIFunctionsAgent` with memory?
Should I handle the memory (e.g. `ConversationBufferMemory`) separately, and trigger the agent executor's `run()` with the list of messages from memory?
Or is there a way to init the `OpenAIFunctionsAgent` with a memory definition?
My current "playground" example: […]
When handling the memory separately (as in the example above), I'm not sure how to properly store the OpenAI function-response messages (the one with the role "function" that OpenAI expects to receive as the function-call response, and should be part of the list of messages).
I mean, I would have hoped to find something like: […]
But couldn't find the proper way.
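For reference, a sketch of the message sequence the OpenAI functions API expects when a tool is invoked, written as plain Dart maps (the actual langchain_dart message classes may differ): the assistant's function call is followed by a message with role "function" carrying the tool result, and keeping both in the message list is what lets the model refer back to earlier results.

```dart
// Illustrative conversation transcript for an OpenAI function call, as the
// raw role/content maps the Chat Completions API exchanges.
final messages = [
  {'role': 'user', 'content': 'Search for cats'},
  // The model responds with a function_call instead of plain content.
  {
    'role': 'assistant',
    'function_call': {'name': 'search', 'arguments': '{"query": "cats"}'},
  },
  // The tool's result is sent back with role "function".
  {'role': 'function', 'name': 'search', 'content': 'Results: [result1, result2]'},
  // The model then produces the final answer for the user.
  {'role': 'assistant', 'content': 'I found: result1, result2'},
];

void main() {
  for (final m in messages) {
    print(m['role']);
  }
}
```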