
Memory class field not exposed in AgentExecutor #80

Closed
orenagiv opened this issue Aug 3, 2023 · 11 comments
Labels: c:agents Agents. t:bug Something isn't working

orenagiv (Contributor) commented Aug 3, 2023

Hey @davidmigloz!
Started to use this (amazing) Dart package 🙏
And was wondering what's the recommended way of using the OpenAIFunctionsAgent with memory?
Should I handle the memory (e.g. ConversationBufferMemory) separately and call executor.run() with the list of messages from memory?
Or is there a way to initialize the OpenAIFunctionsAgent with a memory definition?

My current "playground" example:

import 'dart:convert';
import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main(final List<String> arguments) async {
  final chatOpenAI = ChatOpenAI(
    apiKey: '{{OpenAIkey}}', // placeholder: your OpenAI API key
    temperature: 0.0,
    maxTokens: 512,
    model: 'gpt-3.5-turbo-0613',
  );

  final tool = Tool.fromFunction(
    name: 'search',
    description: '''
    Search for...
    parameters: {
      'type': 'object',
      'properties': {
        'param1': {
          'type': 'string',
          'description': '....',
        },
        'param2': {
          'type': 'number',
          'description': '...',
        },
      },
      'required': ['param1'],
    } 
    ''',
    func: (final String toolInput) async {
      final arguments = jsonDecode(toolInput);

      print('Function Call Arguments: $arguments');
      // TODO: implement search
      
      // Mock results
      final functionResponse = [
        'result1',
        'result2',
      ];

      return 'Suggest only one result to the user out of the following JSON list:\n$functionResponse';
    },
  );

  final agent = OpenAIFunctionsAgent.fromLLMAndTools(
    llm: chatOpenAI,
    tools: [tool],
  );
  final executor = AgentExecutor(
    agent: agent,
    tools: [tool],
  );

  final memory = ConversationBufferMemory();

  while (true) {
    // Get user input.
    stdout.write('> ');
    final query = stdin.readLineSync() ?? '';
    final userMessage = ChatMessage.human(query);

    // Add user input to memory.
    memory.chatHistory.addUserChatMessage(userMessage.content.trim());
    final messages = await memory.chatHistory.getChatMessages();

    // Chat Complete by OpenAI.
    final aiMessage = await executor.run(messages);

    // Store AI response in memory.
    memory.chatHistory.addAIChatMessage(aiMessage);

    // Output AI response.
    stdout.writeln(aiMessage);
  }
}

When handling the memory separately (as in the example above), I'm not sure how to properly store the OpenAI function-response messages (the ones with the role "function" that OpenAI expects to receive as the function-call response, and which should be part of the message list).

I mean, I would have hoped to find something like:

memory.chatHistory.addFunctionChatMessage()

But couldn't find the proper way.

orenagiv (Contributor, Author) commented Aug 3, 2023

Hey @davidmigloz,
I see what's missing.
Will send a PR soon.

@davidmigloz davidmigloz changed the title How to properly use the OpenAIFunctionsAgent with Memory? Memory class field not exposed in AgentExecutor Aug 4, 2023
@davidmigloz davidmigloz added t:bug Something isn't working c:agents Agents. labels Aug 4, 2023
@davidmigloz davidmigloz self-assigned this Aug 4, 2023
@davidmigloz davidmigloz added this to the v0.0.4 milestone Aug 4, 2023
davidmigloz (Owner) commented Aug 4, 2023

Hey @orenagiv,

Thanks for opening the issue. Indeed, the memory field was not exposed in the AgentExecutor class, which prevented adding memory to the agent.

I've just fixed it and I'll ship a new release later today.

For your example, it will look something like this:

import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
  final chatOpenAI = ChatOpenAI(
    apiKey: openaiApiKey,
    temperature: 0.0,
    maxTokens: 512,
    model: 'gpt-3.5-turbo-0613',
  );

  final tool = BaseTool.fromFunction(
    name: 'search',
    description: 'Tool for searching the web',
    inputJsonSchema: const {
      'type': 'object',
      'properties': {
        'query': {
          'type': 'string',
          'description': 'The query to search for',
        },
        'n': {
          'type': 'number',
          'description': 'The number of results to return',
        },
      },
      'required': ['query'],
    },
    func: (final Map<String, dynamic> toolInput) async {
      final query = toolInput['query'];
      final n = toolInput['n'];

      print('Function Call Arguments: $query | n=$n');
      // TODO: implement search

      // Mock results
      final functionResponse = [
        'result1',
        'result2',
      ];

      return 'Results:\n$functionResponse';
    },
  );

  final agent = OpenAIFunctionsAgent.fromLLMAndTools(
    llm: chatOpenAI,
    tools: [tool],
    extraPromptMessages: [
      const MessagesPlaceholder(variableName: BaseMemory.defaultMemoryKey),
    ],
  );
  final memory = ConversationBufferMemory(returnMessages: true);
  final executor = AgentExecutor(
    agent: agent,
    tools: [tool],
    memory: memory,
  );

  while (true) {
    // Get user input.
    stdout.write('> ');
    final query = stdin.readLineSync() ?? '';
    // Chat Complete by OpenAI.
    final aiMessage = await executor.run(query);
    // Output AI response.
    stdout.writeln(aiMessage);
  }
}

Note that I'm using BaseTool instead of Tool, as your tool expects two input parameters.
Then to add the history, you need to add a prompt that includes {history}, which is the input key where ConversationBufferMemory will add the history.
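
As a minimal sketch of the wiring (using only the pieces already shown in this thread): the placeholder's variableName must match the key under which the memory stores the conversation, which is BaseMemory.defaultMemoryKey by default.

```dart
// Sketch: if the MessagesPlaceholder variable name and the memory's key
// don't match, the prompt template never receives the conversation history.
final memory = ConversationBufferMemory(returnMessages: true);
final agent = OpenAIFunctionsAgent.fromLLMAndTools(
  llm: chatOpenAI,
  tools: [tool],
  extraPromptMessages: [
    // BaseMemory.defaultMemoryKey is the key ConversationBufferMemory
    // writes the history under.
    const MessagesPlaceholder(variableName: BaseMemory.defaultMemoryKey),
  ],
);
final executor = AgentExecutor(agent: agent, tools: [tool], memory: memory);
```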

I've also added a test that verifies this scenario:

test('Test OpenAIFunctionsAgent with string memory', () async {
  await testMemory(returnMessages: false);
});
test('Test OpenAIFunctionsAgent with messages memory', () async {
  await testMemory(returnMessages: true);
});

Let me know if that covers your use case 🙂

orenagiv (Contributor, Author) commented Aug 4, 2023

This is awesome!
Thanks @davidmigloz 🙏
Pulled and got it to work :)

Quick question:
Aren't we missing the "function" message?
(By "function message" I'm referring to the response of the function call that is passed back to the chat model.)
I mean, when using OpenAI function calling, aren't we also supposed to include the "function" message in the list of messages we send back to the OpenAI chat model for the next chat completion?

I've started working on a PR with the memory updates (which you've already applied :)) and also with something like this:
In history.dart

/// Adds a function response message to the history.
Future<void> addFunctionChatMessage({
  required final String name,
  required final String content,
}) {
  return addChatMessage(
    ChatMessage.function(
      name: name,
      content: content,
    ),
  );
}

And in chat.dart:

@override
Future<void> saveContext({
  required final MemoryInputValues inputValues,
  required final MemoryOutputValues outputValues,
  // TODO: final MemoryFunctionValues? functionValues,
}) async {
  // TODO: final (input, output, function) = _getInputOutputValues(inputValues, outputValues, functionValues);
  await chatHistory.addUserChatMessage(input);
  await chatHistory.addAIChatMessage(output);
  // TODO: when relevant, save the function-response message to the history:
  // await chatHistory.addFunctionChatMessage(function);
}

I'm not yet very familiar with LangChain concepts, so maybe the way LangChain chains and tools work behind the scenes makes my suggestion above redundant?

orenagiv (Contributor, Author) commented Aug 4, 2023

Hey @davidmigloz
Submitted a PR with some related examples, and added the addFunctionChatMessage() method:
#83

davidmigloz (Owner) commented

Indeed the function message is not stored, as when you add memory to an agent executor it will only save the inputs and outputs of the agent executor (whereas the function call is an internal call of the agent). Thanks for the PR! I'll review it now.
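
A rough mental model of why (pseudocode in comments, not the actual library internals):

```dart
// Conceptual sketch: memory attached to the AgentExecutor only observes
// the outer input/output pair of each run.
//
//   final output = await executor.run(query);
//   // Internally the agent may produce intermediate messages:
//   //   AIChatMessage with a function call  -> tool is invoked
//   //   function-response message           -> fed back to the model
//   // but the executor calls saveContext() only once, roughly with:
//   //   inputValues:  {'input': query}    // stored as a human message
//   //   outputValues: {'output': output}  // stored as an AI message
//   // so the intermediate function messages never reach the memory.
```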

davidmigloz (Owner) commented

Hey @orenagiv ,

I've just pushed some improvements in the OpenAIFunctionsAgent that allow it to have internal memory.
It now uses an LLMChain, instead of the LLM directly. So you can easily add memory to that chain.

I've updated the test to use this approach:

final memory = ConversationBufferMemory(returnMessages: returnMessages);
final agent = OpenAIFunctionsAgent.fromLLMAndTools(
  llm: llm,
  tools: tools,
  memory: memory,
);

Now it stores all the messages properly, including the function message:

[Screenshot: the stored chat history, showing human, AI, and function messages]

Let me know if that works for you, and thanks again for the PR!

orenagiv (Contributor, Author) commented Aug 5, 2023

Thanks so much @davidmigloz !
I'm checking it now.

I noticed that the function messages do indeed need to be part of the memory (which is then sent back to the chat model as the message list), because in many cases I want the model to "remember" specific properties from previous function messages so it can use them in the next tool call.

Say, why did you choose to saveContext with a HumanChatMessage whose content is the FunctionChatMessage content (instead of an actual FunctionChatMessage)?
Is it because the history is built with input and output only, and you didn't want something like input, function, and output?

P.S.
One more thing that is not clear to me with Langchain in general:
Why do we need to define the "tools" in both the Agent and the Executor?

orenagiv (Contributor, Author) commented Aug 5, 2023

Works like a charm btw @davidmigloz :))

davidmigloz (Owner) commented

Say, why did you choose to saveContext with a HumanChatMessage whose content is the FunctionChatMessage content (instead of an actual FunctionChatMessage)?

Indeed, good catch; I hadn't noticed it.
I've just fixed it in this PR: #88

Now all the messages should be saved with the proper type.

Why do we need to define the "tools" in both the Agent and the Executor?

That's a good question.

The tools passed to the Agent allow it to know what actions it can take. The agent needs to know about the tools so that it can decide which one to invoke.

The tools passed to the Executor allow it to actually execute the tools when the Agent decides to invoke one. The Executor needs the actual functions representing the tools so that it can call them.

But you are right, it is kind of redundant, as the Executor can get the tools from the Agent.
I've just refactored it in this PR: #89

Now you just need to pass the tools to the agent.

Thanks for helping to improve the API!
Let me know if you have any other suggestions 🙂

orenagiv (Contributor, Author) commented Aug 6, 2023

Thanks @davidmigloz !
Great stuff :) works flawlessly 👊

So if I understand correctly, at the Executor level I now don't need to define the tools or the memory.
I'm using it as follows:

// Init the Agent.
final agent = OpenAIFunctionsAgent.fromLLMAndTools(
  llm: chatOpenAI,
  tools: [...],
  memory: memory,
  systemChatMessage: SystemChatMessagePromptTemplate.fromTemplate('...'),
  extraPromptMessages: [
    const MessagesPlaceholder(variableName: BaseMemory.defaultMemoryKey),
  ],
);

// Init the Langchain Agent Executor.
// No need to define the tools or memory as they are already defined at the Agent level.
final agentExecutor = AgentExecutor(
  agent: agent,
);

davidmigloz (Owner) commented Aug 6, 2023

Correct! Now you just need to pass the agent to the AgentExecutor. You don't need the MessagesPlaceholder(variableName: BaseMemory.defaultMemoryKey) either (I'll explain why in your other issue).
