consistent seed not generating reproducible responses #850

@surcyf123

Description

Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

When I call gpt-4-turbo with streaming and a constant seed (1234), there is still variation in the output, especially as the prompt and answer get longer. Is there any way to avoid this or somehow force it to be even more deterministic?

To Reproduce

Code below; the problem occurs specifically with in-depth, thought-provoking prompts.

Code snippets

from openai import AsyncOpenAI

client = AsyncOpenAI(timeout=30)

async def send_openai_request(prompt, engine="gpt-4-1106-preview"):
    try:
        stream = await client.chat.completions.create(
            messages=[{"role": "user", "content": prompt}],
            stream=True,
            model=engine,
            seed=1234,
            temperature=0.0001,
        )
        collected_messages = []

        # Accumulate the streamed deltas into the full response text.
        async for part in stream:
            print(part.choices[0].delta.content or "")
            collected_messages.append(part.choices[0].delta.content or "")

        return ''.join(collected_messages)
    except Exception as e:
        print(f"Request failed: {e}")
        return None


OS

macOS

Python version

Python 3.10.12

Library version

openai 1.3.3
