Suggestion: Use AsyncOpenAI instead of the default OpenAI package #278

@theahura

Description

In https://github.com/codelion/openevolve/blob/main/openevolve/llm/openai.py#L41 you could swap the OpenAI client for AsyncOpenAI, and then you would not need to call run_in_executor in the call_api function. You can just await the chat completion directly. I think this should also prevent some deadlocks in asyncio edge cases.

(Note that I didn't test this, so take this suggestion with a grain of salt)
