-
Bug Description

When running a Llama model locally and trying to query a NebulaGraph-based knowledge graph index, the following error is thrown:
I am running NebulaGraph inside a local Docker container. The llama_index version is 0.8.53.post3. In addition, I noticed that no data is written to NebulaGraph; the space that is given just stays empty. But that's not the problem here ;)

Version

0.8.53.post3

Steps to Reproduce

Below is the code I use:
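(The original snippet is not reproduced in this extract. The following is a minimal sketch of the kind of setup described, assuming the local model is wired in through LangChain's LlamaCpp wrapped in an LLMPredictor, as the reply further down suggests. The space name, data directory, credentials, and model path are placeholders, not taken from the original post.)

```python
import os

from langchain.llms import LlamaCpp
from llama_index import (
    KnowledgeGraphIndex,
    LLMPredictor,
    ServiceContext,
    SimpleDirectoryReader,
    StorageContext,
)
from llama_index.graph_stores import NebulaGraphStore

# NebulaGraph runs in a local Docker container (credentials/address are placeholders).
os.environ["NEBULA_USER"] = "root"
os.environ["NEBULA_PASSWORD"] = "nebula"
os.environ["NEBULA_ADDRESS"] = "127.0.0.1:9669"

# Local Llama model via LangChain's LlamaCpp, wrapped in an LLMPredictor
# (this wiring is what the reply below identifies as the problem).
llm = LlamaCpp(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", temperature=0.1)
llm_predictor = LLMPredictor(llm=llm)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, chunk_size=512)

# Graph store pointing at a pre-created (still empty) space; names are placeholders.
graph_store = NebulaGraphStore(
    space_name="guardians",
    edge_types=["relationship"],
    rel_prop_names=["relationship"],
    tags=["entity"],
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)

documents = SimpleDirectoryReader("./data").load_data()
index = KnowledgeGraphIndex.from_documents(
    documents,
    storage_context=storage_context,
    service_context=service_context,
    max_triplets_per_chunk=10,
)

query_engine = index.as_query_engine(include_text=False, response_mode="tree_summarize")
response = query_engine.query("Tell me about Starlord")  # raises the ValueError below
print(response)
```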
Relevant Logs/Tracebacks

Traceback (most recent call last):
File "C:\nebula-test.py", line 117, in <module>
response = query_engine.query("Tell me about Starlord")
File "C:\Users\Ole\anaconda3\envs\aitest\lib\site-packages\llama_index\indices\query\base.py", line 32, in query
return self._query(str_or_query_bundle)
nodes = self.retrieve(query_bundle)
File "C:\Users\Ole\anaconda3\envs\aitest\lib\site-packages\llama_index\query_engine\retriever_query_engine.py", line 128, in retrieve
nodes = self._retriever.retrieve(query_bundle)
File "C:\Users\Ole\anaconda3\envs\aitest\lib\site-packages\llama_index\indices\base_retriever.py", line 34, in retrieve
return self._retrieve(str_or_query_bundle)
File "C:\Users\Ole\anaconda3\envs\aitest\lib\site-packages\llama_index\indices\knowledge_graph\retrievers.py", line 772, in _retrieve
File "C:\Users\Ole\anaconda3\envs\aitest\lib\site-packages\llama_index\indices\knowledge_graph\retrievers.py", line 711, in _retrieve_keyword
entities = self._get_entities(query_bundle.query_str)
File "C:\Users\Ole\anaconda3\envs\aitest\lib\site-packages\llama_index\indices\knowledge_graph\retrievers.py", line 576, in _get_entities
entities = self._process_entities(
File "C:\Users\Ole\anaconda3\envs\aitest\lib\site-packages\llama_index\indices\knowledge_graph\retrievers.py", line 507, in _process_entities
response = self._service_context.llm_predictor.predict(
File "C:\Users\Ole\anaconda3\envs\aitest\lib\site-packages\langchain\llms\base.py", line 910, in predict
return self(text, stop=_stop, **kwargs)
File "C:\Users\Ole\anaconda3\envs\aitest\lib\site-packages\langchain\llms\base.py", line 864, in __call__
raise ValueError(
ValueError: Argument `prompt` is expected to be a string. Instead found <class 'llama_index.prompts.base.PromptTemplate'>. If you want to run the LLM on multiple prompts, use `generate` instead.
-
🤖 Hello @derOlek, I'm here to assist you with your issue. I'm an AI bot designed to help solve bugs, answer questions, and guide you in becoming a contributor while you're waiting for a human maintainer.

Based on the error message you provided, it seems that a PromptTemplate object is being passed to the LLM where a plain string is expected. In your code, the crash happens at the query call:

response = query_engine.query("Tell me about Starlord") #CRASHES HERE

The knowledge graph retriever builds a keyword-extraction prompt and hands it to the service context's llm_predictor; because that predictor ends up calling a LangChain LLM, its predict() rejects anything that is not a string.

There was a similar issue in the LlamaIndex repository: [Documentation]: custom LLM PromptTemplate no longer needed. It was resolved by a pull request: support both str and templates for query_wrapper_prompt in HF LLMs. The solution was to support both strings and PromptTemplate objects.

In your case, you might want to ensure that a plain string ultimately reaches the LangChain LLM, or use an LLM integration that understands LlamaIndex prompt objects.

I hope this helps! If you have any more questions or need further clarification, feel free to ask.

Best,

Sources
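For illustration (not the exact internal code path), the type mismatch the traceback complains about looks roughly like this; a LangChain LLM only accepts a plain string, while a llama_index PromptTemplate is an object that still needs to be formatted:

```python
from llama_index.prompts import PromptTemplate

prompt = PromptTemplate("Extract keywords from the question: {question}")

# A LangChain LLM's __call__/predict only accepts a plain string, so passing the
# template object through raises the ValueError from the traceback, e.g.:
#   llm(prompt)                                            # ValueError
# while a formatted string would be accepted:
#   llm(prompt.format(question="Tell me about Starlord"))  # OK
```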
-
Make sure you use the LlamaCPP and HuggingFaceEmbedding from llama-index, and also pass LlamaCPP as llm instead of llm_predictor into the service context.
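A minimal sketch of that suggestion (the model path and embedding model name are placeholders, not from the original thread):

```python
from llama_index import ServiceContext
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.llms import LlamaCPP

# Use llama-index's own LlamaCPP wrapper instead of LangChain's LlamaCpp.
llm = LlamaCPP(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",
    temperature=0.1,
    max_new_tokens=256,
    context_window=3900,
)
embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Pass the model as `llm` (not wrapped in an LLMPredictor / passed as `llm_predictor`),
# so the knowledge graph retriever formats prompts through the native LLM interface.
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model, chunk_size=512)
```

With the llama-index LlamaCPP passed as llm, the query engine call should no longer hit the string-vs-PromptTemplate mismatch from the traceback above.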