In https://github.com/codelion/openevolve/blob/main/openevolve/llm/openai.py#L41 you could swap the synchronous `OpenAI` client for `AsyncOpenAI`. Then `call_api` would no longer need `run_in_executor`; you could just `await` the chat completion directly. I think this would also avoid some potential asyncio deadlocks in edge cases, since no blocking call would be holding a thread-pool worker while the event loop waits on it.
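Roughly, the change could look like this. This is only a sketch, not the actual openevolve code: the class name, constructor parameters, and `call_api` signature here are assumptions, but `AsyncOpenAI` and `chat.completions.create` are the real SDK API.

```python
# Sketch of the suggested change (surrounding class/method names are
# assumptions; only the AsyncOpenAI usage is the point).
from openai import AsyncOpenAI


class OpenAILLM:
    def __init__(self, api_key: str, base_url: str | None = None, model: str = "gpt-4o"):
        # AsyncOpenAI has the same interface as OpenAI, but its methods
        # are coroutines, so no thread-pool offloading is needed.
        self.client = AsyncOpenAI(api_key=api_key, base_url=base_url)
        self.model = model

    async def call_api(self, messages: list[dict]) -> str:
        # Await the coroutine directly instead of wrapping a blocking
        # call in loop.run_in_executor(...).
        response = await self.client.chat.completions.create(
            model=self.model,
            messages=messages,
        )
        return response.choices[0].message.content
```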
(Note that I didn't test this, so take this suggestion with a grain of salt)