Fix AsyncOpenAI "RuntimeError: Event loop is closed" bug when instances of AsyncOpenAI are rapidly created & destroyed #12946
Conversation
@philipchung This seems to have caused more issues when I run your branch?

```python
import asyncio

from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage

llm = OpenAI()

async def arun():
    await llm.acomplete("Hello World!")
    gen = await llm.astream_complete("Hello world!")
    async for token in gen:
        pass
    await llm.achat([ChatMessage(role="user", content="Hello World!")])
    gen = await llm.astream_chat([ChatMessage(role="user", content="Hello World!")])
    async for token in gen:
        pass

def run():
    llm.complete("Hello World!")
    gen = llm.stream_complete("Hello world!")
    for token in gen:
        pass
    llm.chat([ChatMessage(role="user", content="Hello World!")])
    gen = llm.stream_chat([ChatMessage(role="user", content="Hello World!")])
    for token in gen:
        pass

async def run_many():
    tasks = [arun(), arun(), arun()]
    await asyncio.gather(*tasks)

# test async
asyncio.run(run_many())

# test sync
run()
```

I get this error
If I install an older version of the openai package, it works fine though.
I know using a global LLM is probably not advisable, but I know many users who do this.
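For background: each `asyncio.run()` call creates and then closes a fresh event loop, so a globally cached client can end up holding an `httpx.AsyncClient` whose pooled connections are bound to an already-closed loop. A minimal sketch of that general failure mode (plain `httpx`, independent of llama_index; the URL is illustrative):

```python
import asyncio
import httpx

# A module-level client, mirroring the "global LLM" usage pattern above.
client = httpx.AsyncClient()

async def ping():
    await client.get("https://example.com")

asyncio.run(ping())  # loop 1: opens a pooled connection on this loop
# loop 1 is now closed, but the pooled connection still references it
asyncio.run(ping())  # loop 2: reusing that connection can raise
                     # "RuntimeError: Event loop is closed"
```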
…ntext closes client's connection pool; do not use client context in streaming methods
…nt's connection pool will be closed; context manager not used for streaming methods
…pchung/llama_index into FixAsyncOpenAIRuntimeError
I believe I resolved the issues, and running your test code works. It also passes pytest. The changes are summarized in the commits above.
Yea, streaming is kind of hard in this case -- LGTM otherwise now.
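(For context on why streaming is the hard case: wrapping the streaming call in a client context manager would close the `httpx.AsyncClient` connection pool before the caller finished consuming the stream, since the generator outlives the `with` block. A rough sketch of that failure, with hypothetical function names:)

```python
from openai import AsyncOpenAI

async def broken_stream(messages):
    # Hypothetical sketch: the context manager exits as soon as the
    # stream object is created, closing the client's connection pool.
    async with AsyncOpenAI() as client:
        stream = await client.chat.completions.create(
            model="gpt-3.5-turbo",  # model is illustrative
            messages=messages,
            stream=True,
        )
    # By the time the caller iterates, the underlying httpx.AsyncClient
    # is already closed, so reading chunks fails.
    async for chunk in stream:
        yield chunk
```

This matches the commit messages above: the context manager is applied to the non-streaming calls and deliberately not used for the streaming methods.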
…es of AsyncOpenAI are rapidly created & destroyed (run-llama#12946)
… avoid openai/openai-python#1262
Description

When `AsyncOpenAI` classes are rapidly created and destroyed, the underlying `httpx.AsyncClient` is not properly opened and closed unless requests are made from a context manager. `AsyncOpenAI` clients underlie the LlamaIndex `OpenAI` and `OpenAI-Like` classes, so if we rapidly create and destroy these (e.g. in a multithread/multiprocess context), this error will arise. This PR wraps the `completions.create()` calls within a client context manager, ensuring that the underlying `httpx.AsyncClient` used by the OpenAI python client is properly opened and closed.

For completeness, `completions.create()` requests from the sync OpenAI client are also wrapped in a client context manager. This PR also aligns the `_achat()` method to return logprobs, matching the `_chat()` method.

Fixes issues raised in openai/openai-python#1262 and openai/openai-python#1254.
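As a rough illustration of the pattern the description refers to (a sketch under assumed naming, not the exact llama_index diff), the non-streaming async call site changes roughly like this:

```python
from openai import AsyncOpenAI

async def achat_sketch(messages):
    # Entering the client as an async context manager ensures the
    # underlying httpx.AsyncClient is opened and closed on the same,
    # still-running event loop, avoiding "RuntimeError: Event loop is
    # closed" when clients are rapidly created and destroyed.
    async with AsyncOpenAI() as client:
        return await client.chat.completions.create(
            model="gpt-3.5-turbo",  # model choice is illustrative
            messages=messages,
        )
```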
New Package?

Did I fill in the `tool.llamahub` section in the `pyproject.toml` and provide a detailed README.md for my new integration or package?

Version Bump?

Did I bump the version in the `pyproject.toml` file of the package I am updating? (Except for the `llama-index-core` package)

Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
Suggested Checklist:
`make format; make lint` to appease the lint gods