
Continuing conversation with context from AIChat output_schema #79

Open

rasmi opened this issue Sep 5, 2023 · 2 comments

Comments

rasmi commented Sep 5, 2023

In OpenAI's official docs on function calling, they give a pattern of:

  1. Call the model
  2. Get function parameters as structured data
  3. Call function
  4. Call the model again, this time including context from the function call, to get a natural language response.

With simpleaichat's AIChat class, I am doing something that looks like:

  1. Call the model
  2. Get structured data out using output_schema

At this point, I would like to call the model again as in Step 4 above, except with the structured data from AIChat's output_schema as context, in order to produce a natural language response.

So:

  1. Call the model
  2. Get structured data out using output_schema (see the sketch after this list)
  3. ???
  4. Call the model again, this time including structured data from Step 2 as context, to get a natural language response.
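
For reference, steps 1 and 2 look something like this with simpleaichat (a minimal sketch; the FruitOrder schema and its fields are my own illustration, not part of the library):

    from pydantic import BaseModel, Field
    from simpleaichat import AIChat

    class FruitOrder(BaseModel):
        """Details of a fruit order."""
        fruit: str = Field(description="Name of the fruit")
        quantity: int = Field(description="Number of fruits ordered")

    ai = AIChat(console=False, model="gpt-3.5-turbo-0613")

    # Steps 1-2: call the model and get structured data back as a dict,
    # e.g. {"fruit": "watermelon", "quantity": 3}
    structured = ai("I'd like three watermelons, please.", output_schema=FruitOrder)

    # Step 3 would act on `structured`; step 4 (the open question here) would
    # feed it back into the same conversation to get a natural language response.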

What is the preferred/recommended way to do this using the AIChat class? Ideally I could do this within the very same conversation/instance rather than creating a new conversation/instance to handle the context.

In the OpenAI example, they do the following:

    # Step 4: send the info on the function call and function response to GPT
    messages.append(response_message)  # extend conversation with assistant's reply
    messages.append(
        {
            "role": "function",
            "name": function_name,
            "content": function_response,
        }
    )  # extend conversation with function response
    second_response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
    )  # get a new response from GPT where it can see the function response
    return second_response

Is it recommended to (for example) send a message to the model using the function role, even if no such functions were defined? In this case, I'm effectively using the AIChat instance itself, via output_schema, as the function. I would just like to get a natural language response in addition to the structured output_schema response.
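
To make the question concrete, this is what that would look like with the pre-v1 openai client used in the snippet above (a sketch only; the function name and payload are made up, and whether the model behaves well when no function was actually declared is exactly the open question):

    import openai

    # Pretend the output_schema result came from a function call, even though
    # no functions were declared, then ask for a natural language reply.
    messages = [
        {"role": "user", "content": "Hi! How much do watermelons cost?"},
        {
            "role": "function",
            "name": "get_fruit_price",  # made-up name; nothing was registered
            "content": '{"fruit": "watermelon", "price_usd": 5}',
        },
    ]
    second_response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
    )
    print(second_response["choices"][0]["message"]["content"])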

rasmi changed the title from "Continuing conversation with context from function output_schema" to "Continuing conversation with context from AIChat output_schema" on Sep 5, 2023

pyrotank41 commented Sep 7, 2023

How about you use AIChat without the output_schema as the input to the function call?


rasmi commented Sep 10, 2023

I think the underlying goal here is to have one chat session whose history consists exclusively of natural language inputs and outputs, but that can still produce structured input/output "under the hood" for any given message.

As an example (a sketch of what the Fruit and Cart schemas might look like follows the transcript):

  • System: You are a fruit vendor selling fruits to passerby on the street. As people order additional fruits, keep track of the fruits and their total cost.
  • User: Hi! I would love some watermelons. How much do they cost?
  • Assistant: <produces structured output of Fruit which is then used to look up prices>
  • Assistant: Welcome! These watermelons are $5 each.
  • User: Great, I'd like three of them.
  • Assistant: <updates Cart object with fruit, quantity, total cost>
  • Assistant: Great. Is that all?
  • User: Yes.
  • Assistant: Your total is $15.
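
For concreteness, the Fruit and Cart objects above might be Pydantic schemas along these lines (a sketch; the field names are hypothetical):

    from typing import List
    from pydantic import BaseModel, Field

    class Fruit(BaseModel):
        """A fruit the user asked about."""
        name: str = Field(description="Name of the fruit")

    class CartItem(BaseModel):
        fruit: str = Field(description="Name of the fruit")
        quantity: int = Field(description="Number of this fruit ordered")
        unit_price: float = Field(description="Price per fruit in USD")

    class Cart(BaseModel):
        """Running state of the order."""
        items: List[CartItem]
        total_cost: float = Field(description="Total cost of the order in USD")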

This is reminiscent of what gen_with_tools does, except that rather than having the library manage the use of tools, I would like it to run a specific set of functions on every user message and return structured context outputs for each one, then produce unstructured Assistant outputs using the structured context from those functions (like a "hook").

The approach used in gen_with_tools is to make one call to extract the structured output (and call the tool), then update the system prompt to force use of the new context, then make a second call that includes the context alongside the original prompt to produce a response (with save_messages=False). This seems to be a valid approach -- just adding "Context: <context>\n\nUser: <original prompt>" as the user prompt in a second call to the model. I suppose I could do this manually as in gen_with_tools, but having a simpler way to do it directly in AIChat would be helpful.

Maybe something like this would suffice for now (non-functional code, just sketching it out):

    import simpleaichat
    from simpleaichat.models import ChatMessage

    assistant = simpleaichat.AIChat(...)
    prompt = "<user input>"

    # Initial model call to get structured output/context
    context = assistant(prompt, output_schema=..., save_messages=False)
    # <call other functions to act on the context here>

    # Second call: create a natural language response that includes the context
    prompt_with_context = f"Context: {context}\n\nUser: {prompt}"
    response = assistant(prompt_with_context, save_messages=False)

    # Save only the original user message and the final response to the session history
    user_message = ChatMessage(role="user", content=prompt)
    assistant_message = ChatMessage(role="assistant", content=response)
    assistant.get_session().add_messages(user_message, assistant_message)
