[Bug]: Failed to get vector store from OpenSearch within an async endpoint with v0.10.16 #11661
Comments
Hey @mw19930312! Great to see you back here, navigating the twists and turns of LlamaIndex with us. Let's dive into this new challenge together.

The error you're encountering, "RuntimeError: this event loop is already running", typically happens when code tries to start or re-enter an asyncio event loop from within an already running loop in the same thread. In your case, it is likely caused by a synchronous method internally driving its async counterpart on the current event loop, which fails when the method is called inside your async endpoint.

To resolve this, refactor your code to avoid starting a new event loop inside an already running one. One approach is to use the asynchronous versions of these methods directly and `await` them instead of calling the synchronous wrappers. You can find more details in the source code of the `OpensearchVectorStore` integration.

As for the `get_vector_store` method:

```python
async def get_vector_store(self) -> "OpensearchVectorStore":
    """
    Get the vector store.

    Returns:
        OpensearchVectorStore: The vector store instance.
    """
    if self._vector_store is None:
        self._vector_store = OpensearchVectorStore(client=self)
    return self._vector_store
```

This method is asynchronous, which means it should be awaited when called within an async function. If you're calling it from a synchronous context, you'll need to ensure it runs within an event loop; as mentioned above, take care not to start a new event loop if one is already running.

I hope this helps! If you have any further questions or need more clarification, feel free to ask.
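The nested-loop failure described above can be reproduced in a few lines. This is a minimal sketch: `sync_wrapper` and `fetch_store` are illustrative stand-ins for a synchronous vector store method and its async counterpart, not LlamaIndex code.

```python
import asyncio

async def fetch_store() -> str:
    # stand-in for an async client call
    return "vector-store"

def sync_wrapper() -> str:
    # naive sync wrapper: tries to drive the coroutine on the current loop
    coro = fetch_store()
    loop = asyncio.get_running_loop()
    try:
        return loop.run_until_complete(coro)
    except RuntimeError:
        coro.close()  # avoid a "coroutine was never awaited" warning
        raise

async def endpoint() -> str:
    # inside a running loop (e.g. a FastAPI async endpoint), the
    # wrapper's run_until_complete() cannot re-enter the loop
    try:
        return sync_wrapper()
    except RuntimeError as exc:
        return f"error: {exc}"

print(asyncio.run(endpoint()))
```

Awaiting the async method directly (`store = await client.get_vector_store()`) sidesteps the problem entirely, because no second entry into the loop is attempted.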
@mw19930312 try setting the loop type to asyncio when launching the server, and then applying nest_asyncio.
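The suggestion above might look like this in the gateway's entry point. This is a sketch under the assumption that the gateway is a FastAPI app served by uvicorn; `nest_asyncio.apply()` patches asyncio so that `run_until_complete()` can nest inside an already running loop.

```python
# sketch: assumes a FastAPI app served by uvicorn
import nest_asyncio

# patch asyncio to allow re-entrant run_until_complete() calls
nest_asyncio.apply()

import uvicorn
from fastapi import FastAPI

app = FastAPI()

if __name__ == "__main__":
    # nest_asyncio can only patch the pure-Python asyncio loop, not uvloop,
    # so the loop type must be set to "asyncio" explicitly
    uvicorn.run(app, loop="asyncio")
```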
@logan-markewich Thanks for the reply! Sorry, I'm not fully following. Where should I add these lines? I'm putting them into the Python file where the async endpoint is called in the gateway, but the gateway no longer compiles.
@logan-markewich My endpoint in the gateway looks like: ...
@logan-markewich Can we get more insight into this issue? It is currently a blocker for us in leveraging the new features introduced after v0.10.
@mw19930312 you are running in FastAPI, right? You need to set the loop type to asyncio.
If you are using FastAPI, how are you launching it? You can set the loop type either on the CLI or from code.
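For a uvicorn-served app, the loop type can be set from the command line; `main:app` below is an illustrative module path, not one from this thread.

```shell
# force the pure-Python asyncio loop instead of the default (auto/uvloop)
uvicorn main:app --loop asyncio
```

The programmatic equivalent is passing `loop="asyncio"` to `uvicorn.run(...)` in the launch script.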
Hey @logan-markewich, I see that the async OpenSearch support was added, and that in v0.10.16 this issue is expected to be fixed. But for future purposes, how do we track the version dependency of integrations vs the core package?
Each integration package should (in theory) work with any version of core, unless it specifically requires a version of core. In a production app, I would lock all my versions and only upgrade when I need to, at my own pace.
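The pinning approach described above might look like this in a requirements file. The package names are real LlamaIndex distributions, but the version numbers are purely illustrative.

```text
# pin core and each integration explicitly; upgrade deliberately, not implicitly
llama-index-core==0.10.16
llama-index-vector-stores-opensearch==0.1.8
```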
@logan-markewich I added the suggested changes, but I'm still seeing an error.
@logan-markewich Thanks for all the replies! Just an FYI, we got this resolved by setting the loop type to asyncio.
Bug Description
I'm migrating llama_index to v0.10.16. However, I cannot get a vector store from OpenSearch within an endpoint that is triggered asynchronously. I've provided the code below; the function is called in an async endpoint.
Version
v0.10.16
Steps to Reproduce
```python
@sentry_sdk.trace
def get_unstructure_data_retrieval_engine(system_prompt: str, config: TextSearchQueryEngineConfig) -> BaseQueryEngine:
    vector_store = opensearch_client.get_vector_store(config.index_name)
```
Relevant Logs/Tracebacks