
Continue generating on stop_reason: length #544

@34r7h

Description


Describe the feature or improvement you're requesting

Is there a way to continue generating without feeding the entire message back every time?

Additional context

At the moment my code is doing the following, which I feel is very wasteful:

import sys
import json
import openai
from dotenv import load_dotenv
import os

def recursive_chat(messages):
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=140
    )
    response_message = chat_completion['choices'][0]['message']
    print(response_message)
    messages[0]['content'] += ' ' + response_message['content']  # append the assistant's reply onto the original user message and resend everything

    if chat_completion['choices'][0]['finish_reason'] == 'stop':
        return messages
    else:
        return recursive_chat(messages)

# load .env file
load_dotenv()
print(sys.argv)
prompt = sys.argv[1] if len(sys.argv) > 1 else "2+2"
key = sys.argv[2] if len(sys.argv) > 2 else os.getenv('OPENAI_API_KEY')
# Load your API key from an environment variable or secret management service
openai.api_key = key

messages = [{"role": "user", "content": prompt}]

final_messages = recursive_chat(messages)
print(json.dumps(final_messages))
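As far as I know the API has no way to resume a truncated completion server-side, so the context does have to be resent. A cleaner pattern than mutating the user message is to append each partial reply as an `assistant` message and ask the model to continue while `finish_reason == "length"`. The sketch below is hypothetical: it factors the loop into a pure function that takes a `create_fn` callable (standing in for the `openai.ChatCompletion.create` call) so the continuation logic itself is testable without hitting the API:

```python
def chat_until_done(create_fn, messages, max_rounds=10):
    """Collect a full reply across multiple truncated completions.

    create_fn(messages) must return a (content, finish_reason) pair,
    e.g. a thin wrapper around openai.ChatCompletion.create. Loops
    until finish_reason is no longer "length" (or max_rounds is hit).
    """
    messages = list(messages)  # don't mutate the caller's list
    parts = []
    for _ in range(max_rounds):
        content, finish_reason = create_fn(messages)
        parts.append(content)
        if finish_reason != "length":
            break
        # Append the truncated reply under its real role so the model
        # continues it, instead of splicing it into the user's prompt.
        messages.append({"role": "assistant", "content": content})
        messages.append({"role": "user", "content": "Please continue."})
    return "".join(parts)
```

Note this still resends the whole conversation on every round (that is exactly the overhead the issue asks about), but it keeps the roles correct and avoids unbounded recursion.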

