
[Bug]: TypeError: object HeadApiResponse can't be used in 'await' expression #10548

Closed
Vybhav448 opened this issue Feb 9, 2024 · 4 comments · Fixed by #10549
Labels
bug Something isn't working

Comments

@Vybhav448

Bug Description

I was able to run the Elasticsearch server in the cloud, but I get this issue when trying to run it locally.

TypeError: object HeadApiResponse can't be used in 'await' expression

Versions:
llama-index==0.9.46
elasticsearch==8.8.0

Version

0.9.46

Steps to Reproduce

Trying to create embeddings using VectorStoreIndex.from_documents method.

Relevant Logs/Tracebacks

TypeError                                 Traceback (most recent call last)
<ipython-input-19-4b901dfb7bfa> in <cell line: 1>()
----> 1 index = VectorStoreIndex.from_documents(
      2     documents, storage_context=storage_context)

/usr/local/lib/python3.10/dist-packages/llama_index/vector_stores/elasticsearch.py in _create_index_if_not_exists(self, index_name, dims_length)
    244             dims_length: Length of the embedding vectors.
    245         """
--> 246         if await self.client.indices.exists(index=index_name):
    247             logger.debug(f"Index {index_name} already exists. Skipping creation.")
    248 

TypeError: object HeadApiResponse can't be used in 'await' expression
@Vybhav448 Vybhav448 added bug Something isn't working triage Issue needs to be triaged/prioritized labels Feb 9, 2024

dosubot bot commented Feb 9, 2024

🤖

Hey there @Vybhav448! Great to see you back. Hope you’re ready to debug another adventure with us. 😊

It seems like you're encountering an issue with the await keyword in the self.client.indices.exists(index=index_name) method. This method is part of the Elasticsearch Python client and is used to check if a specific Elasticsearch index already exists. The await keyword is used here because this operation is asynchronous, meaning it returns a coroutine that needs to be awaited to get the result.

The error message you're seeing, TypeError: object HeadApiResponse can't be used in 'await' expression, suggests that the exists method is not returning a coroutine, but an instance of HeadApiResponse. This is unusual because the exists method should return a coroutine when used with the await keyword.
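The error itself is easy to reproduce in plain Python: awaiting any object that is not a coroutine or other awaitable raises exactly this TypeError. A minimal sketch (the HeadApiResponse class here is a stand-in for illustration, not the real elasticsearch response class):

```python
import asyncio

class HeadApiResponse:
    """Illustrative stand-in for the sync client's response object."""

async def create_index():
    # The synchronous Elasticsearch client returns a plain response
    # object, but the calling code awaits it as if it were a coroutine:
    return await HeadApiResponse()

try:
    asyncio.run(create_index())
except TypeError as err:
    print(err)  # object HeadApiResponse can't be used in 'await' expression
```

This matches the traceback above: `self.client.indices.exists(...)` on a sync client returns a response object directly, and the `await` in `_create_index_if_not_exists` then fails.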

One possible reason for this issue could be a mismatch between the versions of the Elasticsearch Python client (elasticsearch) and the LlamaIndex library (llama-index). You mentioned that you're using elasticsearch==8.8.0 and llama-index==0.9.46. It's possible that these versions are not fully compatible with each other.

Here are a few steps you can take to troubleshoot this issue:

  1. Check the Elasticsearch Python client version compatibility: The LlamaIndex library might not be compatible with the version of the Elasticsearch Python client you're using. You can check the LlamaIndex documentation or the Elasticsearch Python client documentation to see if there are any known compatibility issues between these versions.

  2. Update the LlamaIndex library: If there's a newer version of the LlamaIndex library available, try updating to that version. You can do this by running pip install --upgrade llama-index in your terminal.

  3. Try a different version of the Elasticsearch Python client: If updating the LlamaIndex library doesn't resolve the issue, you might want to try using a different version of the Elasticsearch Python client. You can do this by running pip install elasticsearch==<version> in your terminal, replacing <version> with the version number you want to install.

If none of these steps resolve the issue, it would be helpful if you could provide more information about your environment, such as the Python version you're using and any other libraries or frameworks that might be interacting with the LlamaIndex library or the Elasticsearch Python client.

I hope this helps! If you have any other questions or need further clarification, please don't hesitate to ask.


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.


@sreenivasanm6

I am also facing a similar issue: the same code works when connecting to an Elasticsearch server running in the cloud, but the problem arises only when running locally. Let us know if there is a fix, or provide a sample code snippet that runs locally with VectorStoreIndex.

@logan-markewich
Collaborator

Looks like await is being called on a function that doesn't actually need it
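One defensive pattern for this kind of mismatch (a sketch only, not necessarily the fix merged in #10549) is to await a client call's result only when it is actually awaitable, so the same code path works with both sync and async clients:

```python
import asyncio
import inspect

async def maybe_await(result):
    """Await the result only if it is actually awaitable."""
    if inspect.isawaitable(result):
        return await result
    return result

async def demo():
    # Plain value, as a synchronous client would return:
    sync_result = await maybe_await(True)

    # Coroutine, as an asynchronous client would return:
    async def exists():
        return True
    async_result = await maybe_await(exists())

    return sync_result, async_result

print(asyncio.run(demo()))  # (True, True)
```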

@mpieck

mpieck commented Apr 21, 2024

This is not a bug. Use the async Elasticsearch client in your llama-index code:

$ pip install "elasticsearch[async]"

then in your llama-index code:

from elasticsearch import AsyncElasticsearch
from llama_index.vector_stores import ElasticsearchStore

client = AsyncElasticsearch("https://localhost:9200/")

vector_store = ElasticsearchStore(
    index_name="your_index",
    es_client=client,
)
